composable validation pipeline with multi-strategy failure handling
Orchestrates a chain of validators, executed sequentially by the Guard class against LLM outputs, with each validator implementing a validate() method and declaring an OnFailAction strategy (exception, reask, fix, filter, noop, refrain). The framework routes each validation failure to the handler its strategy names: reask re-prompts the LLM with context about the failure, fix applies corrective transformations, filter removes the invalid content, exception halts execution, and noop/refrain respectively pass the output through or withhold it. This enables declarative composition of validation logic without imperative error handling.
Unique: Uses a declarative OnFailAction enum (exception, reask, fix, filter, noop, refrain) bound to individual validators rather than global error handlers, enabling fine-grained control over remediation strategy per validation rule. The reask mechanism integrates directly with the Guard's LLM interaction loop, automatically constructing corrective prompts with validation context.
vs alternatives: More flexible than simple output validation (e.g., Pydantic validators) because it can automatically retry LLM generation with corrective prompts rather than just rejecting invalid outputs; more structured than ad-hoc try-catch patterns because failure strategies are declarative and composable.
schema-driven structured output generation with rail, pydantic, and json schema
Converts unstructured LLM outputs into validated, typed data structures by accepting schema definitions in three formats: RAIL (Guardrails' XML-based specification language), Pydantic models, or JSON Schema. The framework maintains a type registry that maps schema definitions to Python types, automatically generating validators for type constraints and field requirements. When the LLM output is parsed, it's coerced into the target schema with validation applied at parse time, ensuring type safety and structural correctness without manual deserialization code.
Unique: Maintains a unified type registry that bridges RAIL, Pydantic, and JSON Schema formats, allowing schema definitions to be swapped at runtime without code changes. The framework automatically generates validators from schema constraints (required fields, type annotations, regex patterns) and applies them during parsing, eliminating the need for separate validation logic.
vs alternatives: More comprehensive than Pydantic alone because it adds re-prompting and fix strategies when schema validation fails; more flexible than OpenAI function calling because it supports multiple schema formats and can layer additional custom validators on top of structural validation.
guardrails server deployment with rest api and remote validation
Provides a standalone server mode (guardrails server) that exposes Guards as REST API endpoints, enabling remote validation without embedding Guardrails in the application. The server handles authentication, request routing, and response serialization. Clients can invoke validation by sending HTTP requests to the server, which executes the Guard and returns validation results. This enables centralized validation infrastructure shared across multiple applications.
Unique: Decouples Guard execution from the application process: the server abstracts away Guard instantiation and lifecycle management, so clients invoke validation through plain HTTP requests without installing Guardrails or its validator dependencies locally.
vs alternatives: More scalable than embedded validation because the server can be scaled, upgraded, and monitored independently of the applications it serves; easier to govern than per-application validation because policies live in one place and changes propagate to every client at once.
cli tools for validator management and guard configuration
Provides command-line tools for managing validators (install, update, remove), configuring authentication, and deploying the Guardrails server. The CLI supports commands like `guardrails hub install`, `guardrails hub list`, `guardrails configure`, and `guardrails server start`. Configuration is stored in a credentials file that can be shared across projects. The CLI enables non-developers to manage validators and configure Guardrails without writing code.
Unique: Consolidates validator installation, authentication setup, and server deployment into a single CLI surface, with configuration centralized in a credentials file that can be shared across projects and machines.
vs alternatives: More discoverable than manual Python setup because the commands are simple and self-documenting; more portable than hardcoded configuration because credentials live in a single file outside any one project's codebase.
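A typical session stringing the commands above together (the validator chosen is hypothetical; `configure` prompts for a Guardrails Hub API key, so these commands require network access and credentials):

```bash
guardrails configure                                  # store credentials once
guardrails hub install hub://guardrails/regex_match   # fetch a validator
guardrails hub list                                   # inspect installed validators
guardrails server start                               # expose guards over REST
```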
pydantic model integration with automatic validator generation
Integrates with Pydantic models by automatically generating validators from Pydantic field definitions (type annotations, constraints, validators). When a Guard is instantiated from a Pydantic model, the framework extracts field metadata and creates validators for type checking, required fields, and custom Pydantic validators. LLM outputs are parsed into Pydantic model instances with validation applied automatically, ensuring type safety and constraint compliance.
Unique: Automatically extracts validators from Pydantic field definitions (type annotations, constraints, custom validators) and applies them to LLM outputs without requiring explicit validator registration. This enables seamless integration with existing Pydantic-based codebases.
vs alternatives: More convenient than hand-written validator definitions because field constraints are translated into validators automatically; safer than unvalidated JSON parsing because outputs are coerced into typed structures with constraints enforced at parse time.
json schema and openai function calling integration
Integrates with JSON Schema and OpenAI's function calling API by accepting JSON Schema definitions and automatically converting them to OpenAI function schemas. The framework can invoke OpenAI's function calling mode with the schema, ensuring the LLM generates structured output that matches the schema. Validation is applied to the function call result, and re-asking is supported if validation fails.
Unique: Integrates with OpenAI's native function calling API by converting JSON Schema to OpenAI function schemas and validating the resulting function calls. This enables leveraging OpenAI's structured output capabilities while adding Guardrails' validation and re-asking logic.
vs alternatives: More reliable than free-text parsing because function calling constrains the model to emit arguments that conform to the schema; more robust than raw function calling because Guardrails layers content validation and re-asking on top of structural conformance.
hub-based validator ecosystem with registry and dependency management
Provides a centralized marketplace (Guardrails Hub) of pre-built validators for common use cases (PII detection, toxicity, bias, hallucination, regex matching, etc.) that can be installed via CLI commands like `guardrails hub install hub://guardrails/regex_match`. The framework maintains a validator registry that maps validator names to implementations, supports versioning and dependency resolution, and allows validators to be imported declaratively in RAIL specifications or programmatically via @register_validator decorators. Custom validators can be published back to the Hub, creating a community-driven ecosystem.
Unique: Implements a URI-addressed validator registry (hub://guardrails/validator_name) in which validators are installed, versioned, and updated independently of the core framework. The framework supports both Hub-hosted validators and locally-registered custom validators through a unified import mechanism, enabling seamless composition of community and proprietary validation logic.
vs alternatives: More modular than monolithic validation libraries because validators are independently versioned and installable; more discoverable than custom validation code because the Hub provides a searchable marketplace with documentation and examples.
synchronous and asynchronous execution with streaming validation support
Supports four execution patterns through Guard and AsyncGuard classes: synchronous blocking (Guard.__call__()), asynchronous non-blocking (AsyncGuard.__call__()), synchronous streaming (Guard.__call__(stream=True)), and asynchronous streaming (AsyncGuard.__call__(stream=True)). Streaming validation processes LLM output tokens incrementally, applying validators to partial outputs and enabling early rejection or correction before the full response is generated. This architecture allows the same Guard definition to be used across different execution contexts without code duplication.
Unique: Provides a unified Guard API that abstracts over four execution modes (sync, async, sync-streaming, async-streaming) through method overloads and class variants, allowing the same validation logic to be deployed in different runtime contexts. Streaming validation integrates with the re-asking mechanism to enable mid-stream correction without waiting for full LLM output.
vs alternatives: More flexible than single-mode validators because the same Guard works in sync, async, and streaming contexts; more efficient than post-hoc validation because streaming mode can detect and correct problems before the full response is generated.
+6 more capabilities