containerized claude code cli execution with oauth and api key authentication
Runs the official Anthropic Claude Code CLI inside a Docker container with pre-configured OAuth flow support for Claude Max/Pro plans and direct API key authentication. The container bootstraps the Claude Code environment during startup via s6-overlay service supervision, handling credential injection through environment variables and persistent configuration files mounted at runtime. This eliminates manual CLI setup, dependency resolution, and authentication friction while maintaining full feature parity with the native CLI.
Unique: Bundles the official Claude Code CLI with pre-configured s6-overlay process supervision and OAuth bootstrap logic, handling credential injection and persistent state management automatically — most alternatives require manual CLI installation and authentication setup
vs alternatives: Eliminates 30+ minutes of manual Claude Code setup, dependency installation, and authentication configuration compared to running the CLI natively or in a bare Docker image
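A minimal sketch of the credentials file the container reads at startup. ANTHROPIC_API_KEY is the standard Anthropic variable; CLAUDE_CODE_OAUTH_TOKEN and the PUID/PGID names are assumptions drawn from common conventions, so confirm the exact names against the image's documentation before relying on them:

```shell
# Illustrative .env for credential injection -- variable names other than
# ANTHROPIC_API_KEY are assumptions; check the image's docs for exact names.
cat > .env <<'EOF'
# Option A: direct API key authentication
ANTHROPIC_API_KEY=sk-ant-your-key-here

# Option B: OAuth token for Claude Max/Pro plans (leave the API key unset)
CLAUDE_CODE_OAUTH_TOKEN=your-oauth-token-here

# Identity mapping so workspace files match the host user
PUID=1000
PGID=1000
EOF

chmod 600 .env   # credentials should not be world-readable
```

Only one of the two authentication options needs to be set; the bootstrap picks up whichever is present.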
web-based ui gateway for ai agent interaction on port 3001
Exposes a CloudCLI web interface running on port 3001 that provides HTTP/WebSocket access to the containerized AI agents (Claude Code and alternative CLIs). The web server is managed by s6-overlay as a supervised service with automatic restart on failure, and port 3001 is published to the host so the UI is reachable without entering the container. This enables browser-based interaction with AI agents without direct CLI access, supporting real-time streaming responses and multi-user concurrent sessions.
Unique: Integrates CloudCLI web UI with s6-overlay service supervision, providing automatic restart and graceful shutdown semantics for the web server — most containerized AI tools require manual service management or systemd integration
vs alternatives: Provides browser-based access to Claude Code without requiring SSH tunneling or CLI expertise, reducing friction for non-technical team members compared to CLI-only alternatives
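A quick host-side reachability check for the web UI. The root path is an assumption (the UI may serve a different landing route); the port matches the compose template, and the check is guarded so it degrades gracefully when the container is not running:

```shell
# Probe the CloudCLI web UI on its published port. Guarded: the script
# reports status either way instead of failing when the container is down.
URL="http://localhost:3001/"

if curl -fsS --max-time 3 "$URL" >/dev/null 2>&1; then
  STATUS="reachable"
else
  STATUS="not reachable"
fi
echo "web UI $STATUS at $URL"
```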
docker compose orchestration with pre-configured volume and network setup
Provides a production-ready docker-compose.yaml template that orchestrates the HolyClaude container with pre-configured volume mounts (workspace, configuration), network exposure (port 3001 for web UI), shared memory allocation (shm_size: 2g for headless browser), and resource limits. The compose file includes environment variable references (.env file) for credentials and identity mapping (PUID/PGID), enabling users to deploy HolyClaude with a single docker-compose up command without manual configuration. The template handles common Docker pitfalls (shared memory exhaustion, permission mismatches, port conflicts) automatically.
Unique: Provides a pre-configured docker-compose.yaml that solves common Docker pitfalls (shared memory exhaustion, UID/GID mismatches, port conflicts) automatically — most containerized tools require users to manually tune these settings or provide incomplete examples
vs alternatives: Reduces deployment time from 30+ minutes (manual Docker configuration) to 2-3 minutes (docker-compose up); eliminates common Docker configuration errors that cause silent failures or crashes
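The compose template described above can be sketched as follows. The image tag, service name, and container-side paths are assumptions; port 3001, shm_size: 2g, the .env reference, and the workspace mount mirror the text:

```shell
# Write an illustrative docker-compose.yaml -- image name and paths are
# assumptions; consult the project's shipped template for the real values.
cat > docker-compose.yaml <<'EOF'
services:
  holyclaude:
    image: holyclaude:latest        # assumed image tag
    ports:
      - "3001:3001"                 # CloudCLI web UI
    env_file:
      - .env                        # credentials and PUID/PGID
    shm_size: 2g                    # prevents headless Chromium crashes
    volumes:
      - ./workspace:/workspace      # code the AI agents can see
      - claude-config:/config       # persistent CLI settings (local volume)

volumes:
  claude-config:
EOF
```

With the file in place, `docker-compose up -d` brings the stack up; the named volume keeps CLI settings on local storage rather than the bind-mounted workspace.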
container bootstrap system with automatic service initialization
Implements a multi-stage bootstrap system that runs at container startup to initialize services, validate configuration, set up user identity (UID/GID), and prepare the environment for AI agent execution. The bootstrap process uses shell scripts executed before s6-overlay starts supervised services, performing tasks like creating workspace directories, validating API keys, initializing Claude Code settings, and installing on-demand packages (Slim variant). This ensures the container reaches a ready state without manual post-startup configuration, enabling immediate use after docker-compose up.
Unique: Implements a multi-stage bootstrap system with automatic service initialization, configuration validation, and on-demand package installation — most containerized tools require manual post-startup configuration or provide minimal initialization logic
vs alternatives: Eliminates manual post-startup configuration steps; enables fully-automated deployments in CI/CD pipelines without human intervention
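The stages above can be sketched as a startup script. Paths and checks are illustrative, not the image's actual scripts, which run before s6-overlay launches its supervised services:

```shell
# Hedged sketch of the multi-stage bootstrap: directory setup, credential
# validation, and a ready marker. Variable names are assumptions.
set -eu

WORKSPACE="${WORKSPACE:-./workspace}"   # the real image likely uses /workspace

# Stage 1: prepare the workspace directory tree
mkdir -p "$WORKSPACE" "$WORKSPACE/.claude"

# Stage 2: validate that some credential is present before starting agents
if [ -z "${ANTHROPIC_API_KEY:-}" ] && [ -z "${CLAUDE_CODE_OAUTH_TOKEN:-}" ]; then
  echo "bootstrap: no credentials set; Claude Code will need interactive login" >&2
fi

# Stage 3: mark the environment ready so supervised services can start
touch "$WORKSPACE/.bootstrap-done"
echo "bootstrap complete"
```

In the real container, s6-overlay's init stages give this kind of script a natural home: one-shot setup tasks run to completion before long-running services are supervised.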
codebase-aware code generation with workspace context injection
Enables AI agents (Claude Code and alternative CLIs) to access the full workspace directory and inject codebase context into prompts, allowing models to generate code that is aware of existing project structure, dependencies, and coding patterns. The workspace is mounted as a Docker volume and accessible to all AI CLIs via a shared directory path. AI agents can read project files, analyze imports and dependencies, and generate code that integrates seamlessly with the existing codebase. This differs from stateless code generation by providing architectural context and reducing the need for manual context specification.
Unique: Provides seamless workspace mounting and context injection for AI agents without requiring explicit file selection or context management — most AI coding tools require manual file uploads or context specification
vs alternatives: Enables architecture-aware code generation that respects project structure and dependencies; reduces context specification overhead compared to stateless AI tools that require manual file inclusion
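Because the workspace is a bind mount, whatever exists under the host directory is exactly what every AI CLI sees at the shared path inside the container. A quick host-side check of the context the agents will receive:

```shell
# Count the files the agents can read for context. The ./workspace path
# matches the compose sketch; adjust if your mount differs.
WORKSPACE="${WORKSPACE:-./workspace}"
mkdir -p "$WORKSPACE"

FILE_COUNT=$(find "$WORKSPACE" -type f | wc -l | tr -d ' ')
echo "agents will see $FILE_COUNT file(s) under $WORKSPACE"
```

From there, an agent run inside the container (for example via `docker-compose exec` against the running service) operates directly on those files, so generated code lands back on the host without a copy step.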
multi-provider ai cli orchestration with 7 integrated agents
Bundles 7 distinct AI CLI tools (Claude Code, Gemini CLI, OpenAI Codex, Cursor, TaskMaster, Junie, OpenCode) into a single container with unified environment variable configuration and shared tool dependencies. Each CLI is pre-installed with its runtime dependencies and configured to use a common workspace directory. The container's bootstrap system detects which CLIs are enabled via environment variables and initializes only the necessary services, reducing startup time and memory overhead for users who only need a subset of providers.
Unique: Pre-installs 7 AI CLIs with unified workspace and environment variable configuration, using s6-overlay to selectively enable only configured providers at startup — most alternatives require separate installations and manual environment setup for each provider
vs alternatives: Reduces setup time from hours (installing 7 separate tools) to minutes (single docker-compose up), and enables side-by-side provider comparison without environment conflicts
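The selective-enable logic can be illustrated in a few lines of shell. The ENABLE_* variable names and service names are assumptions, not the container's actual configuration keys:

```shell
# Illustrative provider toggles: only CLIs flagged "true" get a supervised
# service at startup, which is what keeps memory and startup time down.
ENABLE_CLAUDE="${ENABLE_CLAUDE:-true}"
ENABLE_GEMINI="${ENABLE_GEMINI:-false}"
ENABLE_CODEX="${ENABLE_CODEX:-false}"

ENABLED=""
if [ "$ENABLE_CLAUDE" = "true" ]; then ENABLED="$ENABLED claude-code"; fi
if [ "$ENABLE_GEMINI" = "true" ]; then ENABLED="$ENABLED gemini-cli"; fi
if [ "$ENABLE_CODEX" = "true" ]; then ENABLED="$ENABLED codex"; fi

echo "starting supervised services:$ENABLED"
```

Since all providers share one workspace and one .env file, switching the comparison set is a matter of flipping these flags and restarting the container.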
headless browser automation stack with chromium, xvfb, and playwright
Provides a pre-configured headless browser environment combining Chromium, Xvfb (X11 virtual framebuffer), and Playwright for automated web interaction, screenshot capture, and testing. The container allocates shared memory (shm_size: 2g) to prevent Chromium crashes during concurrent browser operations, and Playwright is pre-installed with its Node.js bindings. The browser stack is managed by s6-overlay as a supervised service, enabling AI agents to programmatically navigate websites, extract data, and generate visual artifacts without requiring a display server.
Unique: Solves shared memory exhaustion for headless browsers by pre-allocating shm_size: 2g and using Xvfb for display virtualization, with s6-overlay service supervision for automatic browser restart — most containerized browser setups require manual shm tuning and lack automatic recovery
vs alternatives: Eliminates Chromium crash debugging and shared memory troubleshooting that typically consumes hours in containerized browser deployments; pre-configured Playwright bindings enable immediate browser automation without dependency installation
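The 2 GiB allocation matters because Chromium uses /dev/shm for inter-process buffers, and Docker's default of 64 MiB is a well-known cause of tab crashes. A quick check of the shared memory actually visible inside a container:

```shell
# Report the /dev/shm size and warn below 1 GiB. The 1 GiB threshold is an
# illustrative rule of thumb, not a documented Chromium requirement.
if [ -d /dev/shm ]; then
  SHM_KB=$(df -k /dev/shm | awk 'NR==2 {print $2}')
  echo "/dev/shm size: ${SHM_KB} KiB"
  if [ "$SHM_KB" -lt 1048576 ]; then
    echo "warning: below 1 GiB -- headless Chromium may crash under load"
  fi
else
  echo "/dev/shm not present on this system"
fi
```

Running this inside a container started without shm_size set will show the 64 MiB default, which explains the crashes the text refers to.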
persistent configuration and memory state management across container restarts
Implements a volume-based persistence strategy using Docker named volumes and bind mounts to preserve Claude Code settings, AI CLI configurations, workspace files, and memory state across container lifecycle events. Configuration files (e.g., Claude settings, .env credentials) are mounted at container startup, and the bootstrap system initializes user identity (UID/GID) to match the host to prevent permission mismatches. SQLite databases used by AI CLIs are stored on local volumes rather than network-attached storage (NAS) to avoid locking issues, and a dedicated workspace directory persists generated code artifacts.
Unique: Solves UID/GID permission mismatches and SQLite locking issues specific to containerized AI workstations by implementing automatic identity mapping and enforcing local volume storage — most Docker setups ignore these issues, causing silent permission failures and database corruption
vs alternatives: Eliminates hours of debugging permission errors and SQLite locking issues that plague naive containerized AI tool deployments; automatic UID/GID mapping ensures host-container file synchronization works out-of-the-box
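The identity-mapping step can be sketched as follows. The account name and the usermod/groupmod approach are assumptions (a common pattern in PUID/PGID-style images), and the privileged calls are guarded since they require root inside the container:

```shell
# Hedged sketch of PUID/PGID identity mapping: remap the container user to
# the host IDs so bind-mounted files stay writable on both sides.
PUID="${PUID:-1000}"
PGID="${PGID:-1000}"
APP_USER="appuser"   # illustrative account name

if [ "$(id -u)" = "0" ]; then
  groupmod -o -g "$PGID" "$APP_USER" 2>/dev/null || true
  usermod  -o -u "$PUID" "$APP_USER" 2>/dev/null || true
  chown -R "$PUID:$PGID" /config /workspace 2>/dev/null || true
  echo "remapped $APP_USER to ${PUID}:${PGID}"
else
  echo "not root: would remap $APP_USER to ${PUID}:${PGID}"
fi
```

Keeping SQLite-backed state on a named local volume (rather than a NAS bind mount) complements this: file locking works reliably on local filesystems, which is what prevents the database corruption the text mentions.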
+5 more capabilities