multi-provider llm backend abstraction with unified interface
AIAC implements a Backend interface abstraction layer that enables seamless switching between the OpenAI, AWS Bedrock, and Ollama LLM providers through a single unified API. Each backend implementation satisfies the same interface contract, allowing the core library to invoke any provider interchangeably, without provider-specific branching logic. This design decouples the code generation logic from provider-specific API details, so new backends can be added by implementing the interface without modifying existing code.
Unique: Uses Go interface-based polymorphism to create a provider-agnostic abstraction where OpenAI, Bedrock, and Ollama backends implement identical method signatures, enabling runtime backend selection without conditional logic in the generation pipeline
vs alternatives: More flexible than monolithic LLM wrappers because it enforces backend interchangeability at the type system level rather than through configuration alone, preventing provider-specific code from leaking into generation logic
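To make this concrete, here is a minimal sketch of the pattern, assuming an illustrative Backend interface with a single Generate method (aiac's actual interface contract may differ):

```go
package main

import (
	"context"
	"fmt"
)

// Backend is an illustrative interface; the real contract may include
// additional methods (model listing, chat sessions, etc.).
type Backend interface {
	Generate(ctx context.Context, model, prompt string) (string, error)
}

// openaiBackend and ollamaBackend are stand-in implementations that
// would each wrap their provider's API client.
type openaiBackend struct{ apiKey string }

func (b *openaiBackend) Generate(ctx context.Context, model, prompt string) (string, error) {
	return "", fmt.Errorf("openai: not implemented in this sketch")
}

type ollamaBackend struct{ baseURL string }

func (b *ollamaBackend) Generate(ctx context.Context, model, prompt string) (string, error) {
	return "", fmt.Errorf("ollama: not implemented in this sketch")
}

// generate is provider-agnostic: it sees only the interface, so new
// backends are added by writing a new type, not by editing this code.
func generate(ctx context.Context, b Backend, model, prompt string) (string, error) {
	return b.Generate(ctx, model, prompt)
}

func main() {
	ctx := context.Background()
	for _, b := range []Backend{&openaiBackend{}, &ollamaBackend{}} {
		if _, err := generate(ctx, b, "some-model", "an s3 bucket with versioning"); err != nil {
			fmt.Println(err)
		}
	}
}
```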
toml-based multi-backend configuration with environment variable override
AIAC provides a hierarchical configuration system using TOML files stored in XDG_CONFIG_HOME (~/.config/aiac/aiac.toml by default) that defines multiple named backends, each with provider type, credentials, and model defaults. The system supports environment variable overrides for sensitive credentials, allowing users to define backends in configuration while injecting secrets at runtime. Configuration loading follows a precedence chain: CLI flags > environment variables > TOML file defaults, enabling flexible deployment across local development, CI/CD, and containerized environments.
Unique: Implements a three-tier precedence system (CLI flags > env vars > TOML file) that allows secure credential injection via environment variables while maintaining readable configuration files, with support for custom config paths via --config flag
vs alternatives: More flexible than environment-variable-only configuration because it allows defining multiple backends in a single file while still supporting secret injection, and more secure than embedding credentials in TOML because it encourages environment-based secrets
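A minimal sketch of the precedence chain, assuming a hypothetical schema (the backends table and field names here are illustrative, not aiac's exact format) and the BurntSushi/toml parser:

```go
package main

import (
	"fmt"
	"os"

	"github.com/BurntSushi/toml"
)

// BackendConfig mirrors the shape of a named backend entry; the field
// names are assumptions for this example.
type BackendConfig struct {
	Type         string `toml:"type"`
	APIKey       string `toml:"api_key"`
	DefaultModel string `toml:"default_model"`
}

type Config struct {
	Backends map[string]BackendConfig `toml:"backends"`
}

// resolve applies the precedence chain: CLI flag > env var > TOML value.
func resolve(flagVal, envKey, tomlVal string) string {
	if flagVal != "" {
		return flagVal
	}
	if v := os.Getenv(envKey); v != "" {
		return v
	}
	return tomlVal
}

func main() {
	const doc = `
[backends.openai]
type = "openai"
default_model = "gpt-4"
`
	var cfg Config
	if _, err := toml.Decode(doc, &cfg); err != nil {
		panic(err)
	}
	be := cfg.Backends["openai"]
	// The API key is deliberately absent from the file and injected at
	// runtime via the environment.
	apiKey := resolve("", "OPENAI_API_KEY", be.APIKey)
	fmt.Printf("model=%s key-set=%v\n", be.DefaultModel, apiKey != "")
}
```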
artifact-type-aware prompt engineering with domain-specific system messages
AIAC constructs domain-specific system prompts based on the target artifact type (Terraform, Dockerfile, Kubernetes, GitHub Actions, OPA, Bash, Python, SQL), guiding the LLM to generate syntactically correct and idiomatic code for each domain. Rather than using a generic system prompt, the tool embeds artifact-type-specific instructions that emphasize best practices, common patterns, and syntax requirements for each format. This enables a single LLM model to generate high-quality code across heterogeneous infrastructure domains without requiring separate specialized models.
Unique: Implements artifact-type-aware system prompts where each artifact type (Terraform, Dockerfile, Kubernetes, etc.) has a specialized system message that embeds domain-specific best practices and syntax requirements, enabling a single LLM to generate idiomatic code across heterogeneous infrastructure domains
vs alternatives: More effective than generic prompts because artifact-specific guidance reduces hallucination and syntax errors, and more maintainable than separate specialized models because all generation flows through a single prompt engineering layer that can be updated centrally
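A sketch of how artifact-type-aware prompting can be dispatched; the map layout, helper name, and prompt wording are illustrative assumptions, not aiac's actual prompts:

```go
package main

import "fmt"

// systemPrompts maps artifact types to domain-specific instructions,
// so each target format gets its own guidance.
var systemPrompts = map[string]string{
	"terraform":  "You generate valid, idiomatic Terraform HCL. Pin provider versions and follow provider best practices.",
	"dockerfile": "You generate minimal, secure Dockerfiles. Prefer multi-stage builds and non-root users.",
	"kubernetes": "You generate valid Kubernetes YAML manifests with resource requests and limits set.",
}

// buildMessages pairs the artifact-specific system prompt with the
// user's free-form request; unknown types fall back to a generic prompt.
func buildMessages(artifactType, userPrompt string) (system, user string) {
	sys, ok := systemPrompts[artifactType]
	if !ok {
		sys = "You generate infrastructure code. Output only code, no explanations."
	}
	return sys, userPrompt
}

func main() {
	sys, user := buildMessages("terraform", "an s3 bucket with versioning enabled")
	fmt.Println("system:", sys)
	fmt.Println("user:  ", user)
}
```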
aws bedrock backend integration with cross-region model access
AIAC integrates with AWS Bedrock by implementing the Backend interface for Bedrock's managed LLM service. The backend handles AWS authentication via IAM credentials, request formatting for Bedrock's API, and response parsing. Users can access multiple LLM providers (Anthropic Claude, Cohere, etc.) through Bedrock's unified API. This enables organizations with existing AWS infrastructure to leverage Bedrock without managing separate API accounts.
Unique: Integrates with AWS Bedrock to provide access to multiple LLM providers (Claude, Cohere, etc.) through a managed AWS service, enabling organizations with existing AWS infrastructure to use AIAC without external API accounts
vs alternatives: Better integrated with AWS environments than direct API access, and provides access to multiple LLM providers through a single managed service compared to managing separate API accounts
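A sketch of the general call pattern for invoking a Claude model through Bedrock with the AWS SDK for Go v2 (default credential chain, InvokeModel, model-specific JSON body). This illustrates the mechanics, not necessarily how aiac's backend is structured:

```go
package main

import (
	"context"
	"encoding/json"
	"fmt"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/bedrockruntime"
)

// claudeRequest follows Bedrock's Anthropic messages body format;
// other Bedrock-hosted models expect different request shapes.
type claudeRequest struct {
	AnthropicVersion string      `json:"anthropic_version"`
	MaxTokens        int         `json:"max_tokens"`
	Messages         []claudeMsg `json:"messages"`
}

type claudeMsg struct {
	Role    string `json:"role"`
	Content string `json:"content"`
}

func main() {
	ctx := context.Background()

	// Credentials and region come from the standard AWS chain
	// (env vars, shared config files, or an IAM role).
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		panic(err)
	}
	client := bedrockruntime.NewFromConfig(cfg)

	body, _ := json.Marshal(claudeRequest{
		AnthropicVersion: "bedrock-2023-05-31",
		MaxTokens:        1024,
		Messages: []claudeMsg{
			{Role: "user", Content: "Generate Terraform for an S3 bucket with versioning."},
		},
	})

	out, err := client.InvokeModel(ctx, &bedrockruntime.InvokeModelInput{
		ModelId:     aws.String("anthropic.claude-3-sonnet-20240229-v1:0"),
		ContentType: aws.String("application/json"),
		Body:        body,
	})
	if err != nil {
		panic(err)
	}

	// Claude responses arrive as a list of content blocks.
	var resp struct {
		Content []struct {
			Text string `json:"text"`
		} `json:"content"`
	}
	if err := json.Unmarshal(out.Body, &resp); err != nil {
		panic(err)
	}
	for _, c := range resp.Content {
		fmt.Print(c.Text)
	}
}
```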
ollama local llm backend for privacy-preserving code generation
AIAC integrates with Ollama, an open-source tool for running LLMs locally. The Ollama backend implementation communicates with a local Ollama instance via HTTP API, enabling code generation without sending prompts to external services. Users can run open-source models (Llama 2, Mistral, etc.) locally, providing complete data privacy and no API costs. This backend is ideal for organizations with strict data governance requirements or offline environments.
Unique: Integrates with Ollama to enable local LLM-based code generation without external API calls, providing complete data privacy and zero API costs by running open-source models on local hardware
vs alternatives: Provides complete data privacy compared to cloud-based backends and eliminates API costs; however, the quality of generated code is typically lower than that of GPT-4 or Claude models
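A minimal sketch of talking to a local Ollama instance; the endpoint and JSON fields follow Ollama's documented /api/generate contract, and the model choice is arbitrary:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// generateRequest targets Ollama's /api/generate endpoint; with
// stream set to false the whole response arrives as one JSON object.
type generateRequest struct {
	Model  string `json:"model"`
	Prompt string `json:"prompt"`
	Stream bool   `json:"stream"`
}

type generateResponse struct {
	Response string `json:"response"`
}

func main() {
	body, _ := json.Marshal(generateRequest{
		Model:  "mistral",
		Prompt: "Generate a Dockerfile for a Go web service.",
		Stream: false,
	})

	// The request never leaves the machine: prompts and generated
	// code stay on localhost.
	resp, err := http.Post("http://localhost:11434/api/generate",
		"application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var out generateResponse
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		panic(err)
	}
	fmt.Println(out.Response)
}
```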
natural language to infrastructure-as-code generation with llm prompting
AIAC accepts free-form natural language prompts describing infrastructure requirements and sends them to configured LLM backends with system prompts optimized for IaC generation. The system constructs prompts that guide the LLM to generate specific artifact types (Terraform, CloudFormation, Pulumi, Dockerfile, Kubernetes manifests, GitHub Actions, OPA policies, Bash/Python scripts, SQL queries). The LLM response is streamed back to the user and optionally formatted or saved to files, enabling rapid prototyping of infrastructure code without manual template writing.
Unique: Implements artifact-type-aware prompting where the system constructs different system prompts for Terraform vs Dockerfile vs Kubernetes manifests, enabling the same LLM to generate syntactically correct code across heterogeneous infrastructure domains without requiring separate models
vs alternatives: More versatile than domain-specific generators because it uses a single LLM backend to generate multiple artifact types (IaC, configs, scripts, policies) through prompt engineering, whereas specialized tools require separate integrations for each artifact type
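An end-to-end sketch of this flow under the same assumed Backend interface as the earlier example; echoBackend is a hypothetical stand-in so the example runs without network access, and the prompt template is illustrative:

```go
package main

import (
	"context"
	"fmt"
	"os"
)

// Backend is the same assumed single-method interface as before.
type Backend interface {
	Generate(ctx context.Context, model, prompt string) (string, error)
}

// echoBackend is a stand-in that echoes the prompt instead of
// calling a real LLM.
type echoBackend struct{}

func (echoBackend) Generate(_ context.Context, _, prompt string) (string, error) {
	return "# generated from: " + prompt + "\n", nil
}

// run sketches the pipeline: build an artifact-specific prompt,
// invoke the backend, then print the result or write it to a file.
func run(ctx context.Context, b Backend, artifactType, request, outPath string) error {
	prompt := fmt.Sprintf("Generate %s code for: %s. Output only code.", artifactType, request)
	code, err := b.Generate(ctx, "some-model", prompt)
	if err != nil {
		return err
	}
	if outPath != "" {
		return os.WriteFile(outPath, []byte(code), 0o644)
	}
	_, err = fmt.Print(code)
	return err
}

func main() {
	err := run(context.Background(), echoBackend{}, "terraform",
		"an s3 bucket with versioning", "main.tf")
	if err != nil {
		panic(err)
	}
}
```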
interactive code generation with refinement and export options
After generating initial code, AIAC enters an interactive mode where users can refine, regenerate, or export the output. The CLI presents options to regenerate with the same prompt, modify the prompt and regenerate, save the output to a file, copy it to the clipboard, or exit. This interactive loop enables iterative refinement of generated code without re-invoking the CLI, reducing context switching and allowing users to converge on acceptable output through multiple LLM invocations within a single session.
Unique: Implements a stateful interactive loop within a single CLI invocation that allows prompt modification and regeneration without losing context, using a menu-driven interface to guide users through refinement options
vs alternatives: More efficient than invoking the CLI repeatedly because it keeps the authenticated client and session context alive across multiple generations, reducing latency and allowing users to explore variations without re-parsing configuration or re-authenticating
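A sketch of such a menu-driven refinement loop; the option letters, output path, and generate callback are hypothetical, chosen only to show how prompt state persists across iterations within one process:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// interact keeps the prompt in memory across iterations so the user
// can regenerate or tweak it without restarting the CLI. generate
// stands in for any backend call.
func interact(prompt string, generate func(string) string) {
	in := bufio.NewScanner(os.Stdin)
	output := generate(prompt)
	for {
		fmt.Println(output)
		fmt.Print("[r]egenerate, [e]dit prompt, [s]ave, [q]uit > ")
		if !in.Scan() {
			return
		}
		switch strings.TrimSpace(in.Text()) {
		case "r":
			output = generate(prompt)
		case "e":
			fmt.Print("new prompt > ")
			if in.Scan() {
				prompt = in.Text()
				output = generate(prompt)
			}
		case "s":
			if err := os.WriteFile("out.txt", []byte(output), 0o644); err != nil {
				fmt.Println("save failed:", err)
			}
		case "q":
			return
		}
	}
}

func main() {
	interact("an s3 bucket", func(p string) string {
		return "# code for: " + p // stand-in for a real backend call
	})
}
```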
openai backend with streaming response handling
AIAC implements a dedicated OpenAI backend that communicates with OpenAI's chat completions API, supporting GPT-3.5 Turbo and GPT-4 models. The backend handles streaming responses, allowing generated code to be displayed in real time as the LLM produces it rather than waiting for the complete generation. It manages API authentication via the OPENAI_API_KEY environment variable or the configuration file, constructs system and user messages for IaC generation, and handles rate limiting and error responses from OpenAI's API.
Unique: Implements streaming response handling using OpenAI's streaming API, allowing real-time display of generated code chunk by chunk as the LLM produces output, rather than buffering the entire response before display
vs alternatives: Provides better user experience than non-streaming backends because users see code generation in progress, reducing perceived latency and enabling early termination if output is clearly incorrect
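A sketch of streaming chat completions with the community sashabaranov/go-openai client (an assumption here, since the exact client aiac pins may differ), printing each delta as it arrives instead of buffering the full response:

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"io"
	"os"

	openai "github.com/sashabaranov/go-openai"
)

func main() {
	// The key is read from the environment, mirroring the
	// OPENAI_API_KEY convention described above.
	client := openai.NewClient(os.Getenv("OPENAI_API_KEY"))

	stream, err := client.CreateChatCompletionStream(context.Background(),
		openai.ChatCompletionRequest{
			Model: openai.GPT4,
			Messages: []openai.ChatCompletionMessage{
				{Role: openai.ChatMessageRoleSystem,
					Content: "You generate Terraform code. Output only code."},
				{Role: openai.ChatMessageRoleUser,
					Content: "An S3 bucket with versioning enabled."},
			},
			Stream: true,
		})
	if err != nil {
		panic(err)
	}
	defer stream.Close()

	// Print each chunk as it arrives; io.EOF marks the end of the stream.
	for {
		resp, err := stream.Recv()
		if errors.Is(err, io.EOF) {
			break
		}
		if err != nil {
			panic(err)
		}
		fmt.Print(resp.Choices[0].Delta.Content)
	}
	fmt.Println()
}
```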