via “intelligent-request-routing-with-load-balancing”
Python SDK, Proxy Server (AI Gateway) to call 100+ LLM APIs in OpenAI (or native) format, with cost tracking, guardrails, load balancing, and logging. [Bedrock, Azure, OpenAI, VertexAI, Cohere, Anthropic, Sagemaker, HuggingFace, VLLM, NVIDIA NIM]
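For reference, the basic calling convention is OpenAI-style regardless of provider; the model string selects the backend. A minimal sketch (the model identifier is an example, check provider docs for exact names):

```python
from litellm import completion

# Same OpenAI-style call shape for any supported provider;
# the provider is chosen by the model string (example identifier below).
response = completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello, how are you?"}],
)
print(response.choices[0].message.content)
```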
Unique: Implements multi-dimensional routing that simultaneously weighs cost, latency, and availability via a weighted scoring system, combined with per-deployment cooldown tracking to prevent thundering-herd failures during provider outages (see the sketch after this entry)
vs others: More sophisticated than simple round-robin; unlike static load balancers, it tracks real-time health and cooldown state per deployment, enabling automatic failover without manual intervention
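A minimal sketch of this kind of weighted-scoring router with per-deployment cooldowns. The class names, weights, and metrics below are illustrative assumptions, not LiteLLM's actual internals or API:

```python
import time
from dataclasses import dataclass


@dataclass
class Deployment:
    name: str                    # e.g. "azure/gpt-4o-eastus" (hypothetical)
    cost_per_1k_tokens: float    # relative cost signal
    avg_latency_ms: float        # rolling average latency
    failures: int = 0            # consecutive failures observed
    cooldown_until: float = 0.0  # unix timestamp; skip deployment until then


class WeightedRouter:
    # Illustrative weights: lower combined score is better.
    WEIGHTS = {"cost": 0.4, "latency": 0.4, "availability": 0.2}
    COOLDOWN_SECONDS = 60        # assumed cooldown window after repeated failures
    FAILURE_THRESHOLD = 3

    def __init__(self, deployments: list[Deployment]):
        self.deployments = deployments

    def _score(self, d: Deployment) -> float:
        # Combine cost, latency, and availability into one weighted score.
        availability_penalty = d.failures * 10.0
        return (
            self.WEIGHTS["cost"] * d.cost_per_1k_tokens
            + self.WEIGHTS["latency"] * (d.avg_latency_ms / 100.0)
            + self.WEIGHTS["availability"] * availability_penalty
        )

    def pick(self) -> Deployment:
        now = time.time()
        # Exclude deployments that are cooling down after recent failures,
        # so a provider outage doesn't trigger a thundering herd of retries.
        healthy = [d for d in self.deployments if d.cooldown_until <= now]
        if not healthy:
            raise RuntimeError("All deployments are cooling down")
        return min(healthy, key=self._score)

    def record_failure(self, d: Deployment) -> None:
        d.failures += 1
        if d.failures >= self.FAILURE_THRESHOLD:
            d.cooldown_until = time.time() + self.COOLDOWN_SECONDS

    def record_success(self, d: Deployment) -> None:
        d.failures = 0
```

The key design point this illustrates: failover is driven by live per-deployment state (failures and cooldown timestamps) rather than a fixed rotation, so an unhealthy provider is skipped automatically and re-admitted once its cooldown expires.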