Capability
LiteLLM Integration for Transparent Scanner Injection into LLM Calls
9 artifacts provide this capability.
Top Matches
Open-source LLM input/output security scanner toolkit.
Unique: Integrates at the LiteLLM proxy layer, enabling transparent scanner injection with no application-code changes; supports configuration-driven scanning policies per model and provider; works with every LiteLLM-compatible provider (OpenAI, Anthropic, Ollama, Azure, etc.) through one unified framework
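A per-model/provider scanning policy can be sketched as a mapping from model patterns to scanner lists. The policy keys, scanner names, and resolver below are illustrative assumptions, not the actual configuration schema of LiteLLM or any scanner toolkit:

```python
# Hypothetical configuration-driven scanning policies, keyed by a glob
# pattern over the "provider/model" string. Scanner names are placeholders.
from fnmatch import fnmatch

SCAN_POLICIES = {
    "openai/*":    {"input": ["prompt_injection", "secrets"], "output": ["toxicity"]},
    "anthropic/*": {"input": ["prompt_injection"],            "output": []},
    "*":           {"input": ["secrets"],                     "output": []},  # default
}

def resolve_policy(model: str) -> dict:
    """Return the first policy whose pattern matches the model string.

    Dicts preserve insertion order, so specific patterns listed first
    take precedence over the catch-all "*" entry.
    """
    for pattern, policy in SCAN_POLICIES.items():
        if fnmatch(model, pattern):
            return policy
    return {"input": [], "output": []}
```

With this shape, adding a new provider or tightening scanning for one model is a configuration change rather than a code change.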
vs others: More transparent than manual scanner calls because it hooks into LiteLLM's middleware layer; more flexible than provider-specific security solutions because it covers all LiteLLM providers; delivers security by default, so developers never have to remember to call scanners themselves
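The "transparent injection" idea above can be illustrated with a minimal middleware wrapper. Everything here is a hedged sketch: `call_llm` stands in for the real provider call (e.g. a LiteLLM completion), and the scanner functions are hypothetical stand-ins for a real scanner toolkit, not its actual API:

```python
# Sketch of transparent scanner injection at a middleware layer:
# application code calls the wrapped function exactly as it would call
# the provider, and scanning happens before and after automatically.

class BlockedPromptError(Exception):
    """Raised when the input scanner rejects a prompt."""

def scan_input(prompt: str) -> bool:
    # Stub input scanner: reject prompts that look like injection attempts.
    return "ignore previous instructions" not in prompt.lower()

def scan_output(text: str) -> str:
    # Stub output scanner: here a no-op placeholder for redaction/filtering.
    return text

def call_llm(prompt: str) -> str:
    # Stand-in for the underlying provider call.
    return f"echo: {prompt}"

def with_scanners(llm_call):
    """Wrap any completion function so scanning runs transparently."""
    def wrapped(prompt: str) -> str:
        if not scan_input(prompt):
            raise BlockedPromptError("input scanner rejected the prompt")
        return scan_output(llm_call(prompt))
    return wrapped

secured_call = with_scanners(call_llm)
```

Because the wrapping happens once at the middleware layer, callers get security by default; no call site needs to remember to invoke the scanners.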