dual-mode http client with automatic retry logic and configurable backends
Provides both synchronous (Together) and asynchronous (AsyncTogether) HTTP clients built on httpx with configurable exponential backoff retry strategies for transient failures. The architecture uses a base client pattern (_BaseClient) that abstracts HTTP operations, allowing runtime selection between httpx (default) and aiohttp backends for async workloads. Automatic retry logic with configurable max retries and backoff multipliers recovers from transient network failures without developer intervention.
Unique: Implements a three-tier architecture (_BaseClient → Together/AsyncTogether) with pluggable HTTP backends and configurable retry strategies, allowing developers to swap httpx for aiohttp at runtime without changing application code. The _resources_proxy pattern enables lazy-loading of API resource modules.
vs alternatives: More flexible than OpenAI's Python SDK because it exposes both sync/async clients with swappable HTTP backends, whereas OpenAI's SDK is built on httpx for both sync and async and does not let you swap in aiohttp.
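The retry behavior described above can be sketched in a few lines of stdlib Python; the function name `with_retries` and its parameter names are illustrative, not the SDK's actual internals:

```python
import random
import time

def with_retries(request_fn, max_retries=5, backoff_base=0.5, backoff_multiplier=2.0):
    """Call request_fn, retrying transient failures with exponential backoff."""
    delay = backoff_base
    for attempt in range(max_retries + 1):
        try:
            return request_fn()
        except ConnectionError:
            if attempt == max_retries:
                raise  # retries exhausted: surface the error to the caller
            # jittered sleep spreads out retries from concurrent clients
            time.sleep(delay + random.uniform(0, delay * 0.1))
            delay *= backoff_multiplier
```

The same pattern applies to the async client, with `asyncio.sleep` in place of `time.sleep`.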
server-sent events (sse) streaming with token-level granularity
Implements real-time token streaming via Server-Sent Events (SSE) for both synchronous and asynchronous clients by setting stream=True on API calls. The streaming layer (_streaming.py) parses SSE-formatted responses and yields individual tokens or completion chunks as they arrive from the server, enabling low-latency token consumption for chat and text generation endpoints. Supports both line-by-line iteration (sync) and async iteration patterns.
Unique: Abstracts SSE parsing into a dedicated _streaming.py module that handles both sync and async iteration patterns uniformly, exposing a simple iterator interface that yields CompletionChunk objects without requiring developers to parse raw SSE format.
vs alternatives: Cleaner streaming API than raw httpx SSE handling because it automatically parses SSE frames and yields typed CompletionChunk objects; similar to OpenAI SDK but with explicit async support via AsyncTogether.
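A minimal sketch of the SSE parsing the streaming layer performs; the chunk payloads here assume an OpenAI-style `choices`/`delta` shape for illustration:

```python
import json

def iter_sse_chunks(lines):
    """Parse SSE-formatted lines, yielding each decoded JSON payload.

    Each event arrives as a line of the form 'data: {...}'; the stream
    ends with the sentinel 'data: [DONE]'.
    """
    for line in lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip blank keep-alive lines and comments
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            return  # end-of-stream sentinel
        yield json.loads(payload)

raw = [
    'data: {"choices": [{"delta": {"content": "Hel"}}]}',
    '',
    'data: {"choices": [{"delta": {"content": "lo"}}]}',
    'data: [DONE]',
]
text = "".join(chunk["choices"][0]["delta"]["content"] for chunk in iter_sse_chunks(raw))
# text == "Hello"
```

The SDK does this for you, yielding typed CompletionChunk objects instead of raw dicts.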
batch processing for asynchronous bulk inference
Implements the batch resource for processing large numbers of requests asynchronously in a single batch job. Developers submit a JSONL file containing multiple API requests, and the batch API processes them in parallel, returning results in a JSONL output file. Batch processing is significantly cheaper than real-time API calls but introduces latency (typically hours). The API provides job status monitoring and result retrieval.
Unique: Provides batch processing as a first-class resource with JSONL-based input/output, allowing developers to submit bulk requests without managing individual API calls. Batch jobs are asynchronous and can be monitored via status polling.
vs alternatives: More cost-effective than real-time API calls for large-scale inference; similar to OpenAI's batch API but with support for more endpoint types (images, audio, etc.).
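Building a JSONL batch input file needs only the stdlib; the `custom_id`/`body` line shape shown here is an assumption modeled on common batch APIs, so check the Batch API docs for the exact fields:

```python
import json

# Illustrative request-line shape -- field names are assumptions.
prompts = ["What is 2+2?", "Name a prime number."]
lines = []
for i, prompt in enumerate(prompts):
    lines.append(json.dumps({
        "custom_id": f"req-{i}",  # caller-chosen id to match results back later
        "body": {
            "model": "meta-llama/Llama-3.3-70B-Instruct-Turbo",
            "messages": [{"role": "user", "content": prompt}],
        },
    }))

# One JSON object per line, as the batch endpoint expects.
with open("batch_input.jsonl", "w") as f:
    f.write("\n".join(lines) + "\n")
```

The resulting file is uploaded once, and results come back in a matching JSONL output file keyed by each request's id.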
file management with upload, download, and validation
Implements the files resource for managing data files used in fine-tuning, batch processing, and other workflows. The API provides files.upload (with format validation), files.retrieve (download), files.list (enumerate), and files.delete operations. Files are stored on Together's servers and referenced by file_id in downstream operations. The API validates file format (JSONL for training data) and enforces storage quotas.
Unique: Integrates file management directly into the SDK, allowing developers to upload and manage training data without separate file storage infrastructure. Files are referenced by file_id in downstream operations (fine-tuning, batch processing).
vs alternatives: Simpler than managing files separately because file upload/download is integrated into the SDK; similar to OpenAI's files API but with support for more file types and use cases.
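The kind of JSONL format validation performed before upload can be approximated with stdlib code; the `required_key` check here is an assumption for illustration, not the SDK's actual validator:

```python
import json

def validate_jsonl(path, required_key="text"):
    """Report per-line problems in a JSONL file: bad JSON or a missing key."""
    errors = []
    with open(path) as f:
        for lineno, line in enumerate(f, start=1):
            if not line.strip():
                continue  # tolerate blank lines
            try:
                record = json.loads(line)
            except json.JSONDecodeError:
                errors.append(f"line {lineno}: invalid JSON")
                continue
            if required_key not in record:
                errors.append(f"line {lineno}: missing '{required_key}' key")
    return errors
```

Running a check like this locally before files.upload avoids burning an upload on a malformed training file.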
model listing and metadata retrieval
Implements the models resource for discovering available models and retrieving their metadata (context window, pricing, capabilities, etc.). The API provides models.list() to enumerate all available models and models.retrieve(model_id) to get detailed information about a specific model. Model metadata includes supported features (chat, completions, embeddings, etc.), pricing, and availability status.
Unique: Exposes model metadata as a queryable resource, allowing developers to programmatically discover and compare models without hardcoding model names. Metadata includes capabilities, pricing, and context window information.
vs alternatives: Exposes richer model metadata (capabilities, pricing, context window) than OpenAI's models endpoint, enabling dynamic model selection based on requirements.
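Dynamic model selection from metadata might look like the following; the records and field names (`id`, `type`, `context_length`) are illustrative stand-ins for what models.list() returns:

```python
# Stand-in metadata records; real entries come from client.models.list().
models = [
    {"id": "model-a", "type": "chat", "context_length": 8192},
    {"id": "model-b", "type": "chat", "context_length": 131072},
    {"id": "model-c", "type": "embedding", "context_length": 512},
]

def pick_chat_models(models, min_context):
    """Select chat-capable models with at least min_context tokens of context."""
    return [m["id"] for m in models
            if m["type"] == "chat" and m["context_length"] >= min_context]

# pick_chat_models(models, 32000) -> ["model-b"]
```

Because selection is data-driven, the application keeps working when new models appear or old ones are retired.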
cli tools for file, model, fine-tuning, and cluster management
Provides command-line interface (CLI) tools for managing files, models, fine-tuning jobs, and clusters without writing Python code. The CLI mirrors the SDK API surface, exposing commands like 'together files upload', 'together fine-tuning create', 'together models list', etc. CLI tools are useful for scripting, automation, and interactive exploration of the Together API.
Unique: Provides a complete CLI interface that mirrors the Python SDK, allowing developers to use Together API from shell scripts and CI/CD pipelines without writing Python code. CLI tools support file upload, fine-tuning job management, and model discovery.
vs alternatives: More complete than curl-based API access because it abstracts HTTP details and provides structured output; similar to OpenAI's CLI but with more features (fine-tuning, endpoints, etc.).
error handling with typed exceptions and retry guidance
Implements a comprehensive error handling system with typed exception classes (APIError, AuthenticationError, RateLimitError, etc.) that provide context about failures. The SDK automatically retries transient errors (5xx, timeouts) with exponential backoff, but raises typed exceptions for application-level errors (4xx, auth failures). Error objects include request_id for debugging and suggestions for recovery.
Unique: Provides typed exception classes for different error categories (auth, rate limit, server error, etc.), enabling developers to implement error-specific handling logic. Automatic retry logic with exponential backoff handles transient failures transparently.
vs alternatives: More granular error handling than raw httpx exceptions because it provides typed exception classes and automatic retry logic; similar to OpenAI SDK but with more detailed error context.
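A minimal sketch of the typed-exception pattern: the class names mirror those listed above, but this hierarchy and the `request_id`/`retry_after` attributes are illustrative, not the SDK's source:

```python
class APIError(Exception):
    """Base class for API failures; carries a request id for debugging."""
    def __init__(self, message, request_id=None):
        super().__init__(message)
        self.request_id = request_id

class AuthenticationError(APIError):
    pass  # 401: bad or missing API key -- retrying cannot help

class RateLimitError(APIError):
    def __init__(self, message, request_id=None, retry_after=None):
        super().__init__(message, request_id)
        self.retry_after = retry_after  # seconds to wait, when the server says

def recovery_hint(exc):
    """Error-specific handling: retry only where it can actually help."""
    if isinstance(exc, RateLimitError):
        return "back off and retry"
    if isinstance(exc, AuthenticationError):
        return "fix credentials"
    if isinstance(exc, APIError):
        return "log request_id and investigate"
    return "unknown"
```

Catching the most specific class first (as `recovery_hint` does via isinstance checks) is what the typed hierarchy buys you over a single catch-all exception.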
async/await support with asynctogether client and event loop integration
Provides a fully asynchronous client (AsyncTogether) that mirrors the synchronous Together client but uses async/await syntax and integrates with Python's asyncio event loop. All API resources are available on the async client with identical signatures. The async client uses aiohttp (optional) or httpx for HTTP operations, enabling high-concurrency workloads without blocking threads.
Unique: Provides a fully async-compatible client (AsyncTogether) with identical API surface to the sync client, enabling developers to use the same code patterns in both sync and async contexts. Supports both httpx and aiohttp backends for HTTP operations.
vs alternatives: More flexible than OpenAI SDK because it exposes both sync and async clients with swappable HTTP backends; enables true async/await patterns without callback-based APIs.
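The high-concurrency pattern the async client enables can be shown with a stand-in coroutine; `fake_completion` below simulates a network call, where a real workload would await AsyncTogether().chat.completions.create(...) the same way:

```python
import asyncio

async def fake_completion(prompt):
    """Stand-in for an async API call."""
    await asyncio.sleep(0.01)  # simulates network latency without blocking
    return f"echo: {prompt}"

async def run_batch(prompts):
    """Fire all requests concurrently on one event loop; results keep order."""
    return await asyncio.gather(*(fake_completion(p) for p in prompts))

results = asyncio.run(run_batch(["a", "b", "c"]))
# results == ["echo: a", "echo: b", "echo: c"]
```

With real API calls the requests overlap on the wire, so total latency approaches that of the slowest single request rather than the sum of all of them.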
+8 more capabilities