HolyClaude
AI coding workstation: Claude Code + web UI + 7 AI CLIs + headless browser + 50+ tools
Capabilities (13 decomposed)
Containerized Claude Code CLI execution with OAuth and API key authentication
Medium confidence: Runs the official Anthropic Claude Code CLI inside a Docker container, with pre-configured OAuth flow support for Claude Max/Pro plans and direct API key authentication. The container bootstraps the Claude Code environment during startup via s6-overlay service supervision, handling credential injection through environment variables and persistent configuration files mounted at runtime. This eliminates manual CLI setup, dependency resolution, and authentication friction while maintaining full feature parity with the native CLI.
Bundles the official Claude Code CLI with pre-configured s6-overlay process supervision and OAuth bootstrap logic, handling credential injection and persistent state management automatically — most alternatives require manual CLI installation and authentication setup
Eliminates 30+ minutes of manual Claude Code setup, dependency installation, and authentication configuration compared to running the CLI natively or in a bare Docker image
Web-based UI gateway for AI agent interaction on port 3001
Medium confidence: Exposes a CloudCLI web interface on port 3001 that provides HTTP/WebSocket access to the containerized AI agents (Claude Code and the alternative CLIs). The web server is managed by s6-overlay as a supervised service with automatic restart on failure, and traffic is routed through the container's network stack. This enables browser-based interaction with AI agents without direct CLI access, supporting real-time streaming responses and concurrent multi-user sessions.
Integrates CloudCLI web UI with s6-overlay service supervision, providing automatic restart and graceful shutdown semantics for the web server — most containerized AI tools require manual service management or systemd integration
Provides browser-based access to Claude Code without requiring SSH tunneling or CLI expertise, reducing friction for non-technical team members compared to CLI-only alternatives
Docker Compose orchestration with pre-configured volume and network setup
Medium confidence: Provides a production-ready docker-compose.yaml template that orchestrates the HolyClaude container with pre-configured volume mounts (workspace, configuration), network exposure (port 3001 for the web UI), shared memory allocation (shm_size: 2g for the headless browser), and resource limits. The compose file references a .env file for credentials and identity mapping (PUID/PGID), so users can deploy HolyClaude with a single docker-compose up command and no manual configuration. The template handles common Docker pitfalls (shared memory exhaustion, permission mismatches, port conflicts) automatically.
Provides a pre-configured docker-compose.yaml that solves common Docker pitfalls (shared memory exhaustion, UID/GID mismatches, port conflicts) automatically — most containerized tools require users to manually tune these settings or provide incomplete examples
Reduces deployment time from 30+ minutes (manual Docker configuration) to 2-3 minutes (docker-compose up); eliminates common Docker configuration errors that cause silent failures or crashes
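The compose template described above can be sketched as a minimal file; the image tag, volume paths, and volume names here are illustrative assumptions rather than the project's exact template:

```yaml
services:
  holyclaude:
    image: holyclaude:latest      # hypothetical tag; a slim tag would select the Slim variant
    ports:
      - "3001:3001"               # CloudCLI web UI
    shm_size: "2g"                # prevents Chromium shared-memory crashes
    env_file: .env                # ANTHROPIC_API_KEY, PUID/PGID, APPRISE_URLS, ...
    volumes:
      - ./workspace:/workspace    # persistent code artifacts
      - holyclaude-config:/config # local volume, not NAS, to avoid SQLite locking
volumes:
  holyclaude-config:
```

With a file along these lines in place, `docker compose up -d` is the entire deployment step.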
Container bootstrap system with automatic service initialization
Medium confidence: Implements a multi-stage bootstrap system that runs at container startup to initialize services, validate configuration, set up user identity (UID/GID), and prepare the environment for AI agent execution. The bootstrap process uses shell scripts executed before s6-overlay starts the supervised services, performing tasks such as creating workspace directories, validating API keys, initializing Claude Code settings, and installing on-demand packages (Slim variant). This ensures the container reaches a ready state without manual post-startup configuration, enabling immediate use after docker-compose up.
Implements a multi-stage bootstrap system with automatic service initialization, configuration validation, and on-demand package installation — most containerized tools require manual post-startup configuration or provide minimal initialization logic
Eliminates manual post-startup configuration steps; enables fully-automated deployments in CI/CD pipelines without human intervention
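The ordered, fail-fast bootstrap idea can be modeled as a short sketch. The stage names and the `CLAUDE_OAUTH_TOKEN` variable are hypothetical; the real image runs shell scripts, not Python, and its exact stages may differ:

```python
import os


def make_workspace(env: dict, root: str) -> str:
    """Stage 1: ensure the workspace directory exists."""
    path = os.path.join(root, "workspace")
    os.makedirs(path, exist_ok=True)
    return path


def validate_credentials(env: dict, root: str) -> str:
    """Stage 2: fail fast if no authentication method is configured."""
    if not env.get("ANTHROPIC_API_KEY") and not env.get("CLAUDE_OAUTH_TOKEN"):
        raise RuntimeError("no API key or OAuth token configured")
    return "credentials ok"


def bootstrap(env: dict, root: str) -> list:
    """Run stages in order; any failure aborts before services start."""
    results = []
    for stage in (make_workspace, validate_credentials):
        results.append(stage(env, root))
    return results
```

Running validation before s6-overlay launches services is what makes unattended CI/CD deployments possible: a misconfigured container exits immediately instead of starting half-broken.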
Codebase-aware code generation with workspace context injection
Medium confidence: Gives the AI agents (Claude Code and the alternative CLIs) access to the full workspace directory and injects codebase context into prompts, so models can generate code that is aware of existing project structure, dependencies, and coding patterns. The workspace is mounted as a Docker volume and accessible to all AI CLIs via a shared directory path. Agents can read project files, analyze imports and dependencies, and generate code that integrates with the existing codebase. This differs from stateless code generation by providing architectural context and reducing the need for manual context specification.
Provides seamless workspace mounting and context injection for AI agents without requiring explicit file selection or context management — most AI coding tools require manual file uploads or context specification
Enables architecture-aware code generation that respects project structure and dependencies; reduces context specification overhead compared to stateless AI tools that require manual file inclusion
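A toy model of the context-injection pattern: walk the mounted workspace and concatenate file contents into a prompt preamble under a size budget. This is an illustration of the general technique, not the mechanism any particular CLI actually uses:

```python
import os


def collect_context(workspace: str, max_bytes: int = 8000) -> str:
    """Walk the mounted workspace and build a single prompt preamble,
    stopping once the size budget is reached."""
    parts, used = [], 0
    for dirpath, _dirs, files in sorted(os.walk(workspace)):
        for name in sorted(files):
            path = os.path.join(dirpath, name)
            with open(path, encoding="utf-8", errors="replace") as fh:
                text = fh.read()
            chunk = f"--- {os.path.relpath(path, workspace)} ---\n{text}\n"
            if used + len(chunk) > max_bytes:
                return "".join(parts)
            parts.append(chunk)
            used += len(chunk)
    return "".join(parts)
```

Because every CLI sees the same mounted path, whatever context strategy a given agent uses operates over an identical view of the project.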
Multi-provider AI CLI orchestration with 7 integrated agents
Medium confidence: Bundles 7 distinct AI CLI tools (Claude Code, Gemini CLI, OpenAI Codex, Cursor, TaskMaster, Junie, OpenCode) into a single container with unified environment variable configuration and shared tool dependencies. Each CLI is pre-installed with its runtime dependencies and configured to use a common workspace directory. The bootstrap system detects which CLIs are enabled via environment variables and initializes only the necessary services, reducing startup time and memory overhead for users who need only a subset of providers.
Pre-installs 7 AI CLIs with unified workspace and environment variable configuration, using s6-overlay to selectively enable only configured providers at startup — most alternatives require separate installations and manual environment setup for each provider
Reduces setup time from hours (installing 7 separate tools) to minutes (single docker-compose up), and enables side-by-side provider comparison without environment conflicts
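The selective-enablement logic can be sketched as flag lookups at startup. The `ENABLE_*` variable names are assumptions for illustration; the image's real flag names may differ:

```python
# Hypothetical flag names; the real image's variables may differ.
PROVIDERS = {
    "claude": "ENABLE_CLAUDE",
    "gemini": "ENABLE_GEMINI",
    "codex": "ENABLE_CODEX",
    "cursor": "ENABLE_CURSOR",
    "taskmaster": "ENABLE_TASKMASTER",
    "junie": "ENABLE_JUNIE",
    "opencode": "ENABLE_OPENCODE",
}


def enabled_providers(env: dict) -> list:
    """Return the CLIs whose flag is truthy; Claude Code defaults to on."""
    truthy = {"1", "true", "yes", "on"}
    out = []
    for name, flag in PROVIDERS.items():
        default = "true" if name == "claude" else "false"
        if env.get(flag, default).lower() in truthy:
            out.append(name)
    return out
```

Only services on the resulting list get an s6 service definition, which is how disabled providers avoid costing startup time or memory.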
Headless browser automation stack with Chromium, Xvfb, and Playwright
Medium confidence: Provides a pre-configured headless browser environment combining Chromium, Xvfb (X11 virtual framebuffer), and Playwright for automated web interaction, screenshot capture, and testing. The container allocates shared memory (shm_size: 2g) to prevent Chromium crashes during concurrent browser operations, and Playwright is pre-installed with its Node.js bindings. The browser stack is managed by s6-overlay as a supervised service, enabling AI agents to programmatically navigate websites, extract data, and generate visual artifacts without a physical display server.
Solves shared memory exhaustion for headless browsers by pre-allocating shm_size: 2g and using Xvfb for display virtualization, with s6-overlay service supervision for automatic browser restart — most containerized browser setups require manual shm tuning and lack automatic recovery
Eliminates Chromium crash debugging and shared memory troubleshooting that typically consumes hours in containerized browser deployments; pre-configured Playwright bindings enable immediate browser automation without dependency installation
Persistent configuration and memory state management across container restarts
Medium confidence: Implements a volume-based persistence strategy using Docker named volumes and bind mounts to preserve Claude Code settings, AI CLI configurations, workspace files, and memory state across container lifecycle events. Configuration files (e.g., Claude settings, .env credentials) are mounted at container startup, and the bootstrap system maps the container user's identity (UID/GID) to the host's to prevent permission mismatches. SQLite databases used by the AI CLIs are stored on local volumes rather than network-attached storage (NAS) to avoid locking issues, and a dedicated workspace directory persists generated code artifacts.
Solves UID/GID permission mismatches and SQLite locking issues specific to containerized AI workstations by implementing automatic identity mapping and enforcing local volume storage — most Docker setups ignore these issues, causing silent permission failures and database corruption
Eliminates hours of debugging permission errors and SQLite locking issues that plague naive containerized AI tool deployments; automatic UID/GID mapping ensures host-container file synchronization works out-of-the-box
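The PUID/PGID mapping amounts to comparing the requested host identity with the container user's current one and remapping when they differ. A sketch of the decision logic, with the username and paths as illustrative assumptions (the real bootstrap does this in shell):

```python
def identity_commands(puid: int, pgid: int,
                      current_uid: int, current_gid: int,
                      user: str = "appuser",
                      paths: tuple = ("/workspace", "/config")) -> list:
    """Return the shell commands a bootstrap script would run to align
    the container user with the host identity (names are illustrative)."""
    cmds = []
    if pgid != current_gid:
        cmds.append(f"groupmod -o -g {pgid} {user}")
    if puid != current_uid:
        cmds.append(f"usermod -o -u {puid} {user}")
    if cmds:
        # Re-own persisted paths so files written by either side stay accessible.
        cmds += [f"chown -R {puid}:{pgid} {p}" for p in paths]
    return cmds
```

When PUID/PGID already match, nothing runs, so restarts with a stable .env are effectively free.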
Process supervision and graceful shutdown with s6-overlay
Medium confidence: Uses s6-overlay as the container's PID 1 process manager to supervise multiple services (Claude Code CLI, web UI, headless browser, notification daemon) with automatic restart on failure and coordinated graceful shutdown. Each service is defined as an s6 service directory with run scripts and optional finish scripts for cleanup. When the container receives SIGTERM, s6 shuts the supervised services down in dependency order, preventing orphaned processes and ensuring data consistency. This replaces the typical single-process Docker pattern with a robust multi-service architecture.
Replaces Docker's default single-process model with s6-overlay's multi-service supervision, enabling automatic restart, coordinated shutdown, and service dependency management — most containerized AI tools run as single processes without recovery mechanisms
Provides production-grade service reliability (automatic restart, graceful shutdown) without requiring external orchestration tools like Kubernetes; reduces downtime from transient failures by 90%+ compared to single-process containers
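"Shutdown in dependency order" means stopping dependents before the services they rely on, i.e., the reverse of start order. A minimal model of that ordering (s6 computes this from its service definitions; the service names below are examples):

```python
def shutdown_order(deps: dict) -> list:
    """Given service -> list of services it depends on, return a stop
    order where every service stops before its dependencies do."""
    seen, start_order = set(), []

    def visit(svc):
        if svc in seen:
            return
        seen.add(svc)
        for dep in deps.get(svc, []):
            visit(dep)          # start order: dependencies first
        start_order.append(svc)

    for svc in deps:
        visit(svc)
    return list(reversed(start_order))  # stop order: dependents first
```

For example, a web UI that depends on the Claude Code service must stop first, so no request arrives at a half-torn-down backend.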
Notification system with Apprise supporting 100+ delivery channels
Medium confidence: Integrates Apprise (a unified notification library) to send alerts and status updates to 100+ services, including Discord, Telegram, Slack, email, webhooks, and custom endpoints. The notify.py script is invoked by AI agents or container services to dispatch notifications with configurable templates and delivery channels. Apprise abstracts provider-specific API differences, so a single notification call can fan out to multiple channels simultaneously. Configuration is managed via environment variables (APPRISE_URLS) or configuration files, enabling flexible notification routing without code changes.
Abstracts 100+ notification providers behind a unified Apprise interface with environment variable configuration, enabling multi-channel fan-out without code changes — most AI tools require provider-specific integrations or lack notification support entirely
Reduces notification integration effort from hours (implementing Discord, Slack, email separately) to minutes (configuring APPRISE_URLS); supports 100+ providers vs. typical 3-5 in custom implementations
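The fan-out pattern is simple: parse a comma-separated list of service URLs, then dispatch one message to every entry. This is a simplified model of the pattern, not the real `apprise` library's API (which has its own `Apprise` class and `notify` method):

```python
def parse_apprise_urls(value: str) -> list:
    """APPRISE_URLS holds comma-separated service URLs
    (e.g. discord://..., mailto://...)."""
    return [u.strip() for u in value.split(",") if u.strip()]


def notify_all(urls: list, title: str, body: str, send=None) -> int:
    """Fan one message out to every configured channel; `send` is
    injectable so this stub backend can be swapped for a real library."""
    send = send or (lambda url, title, body: True)  # stub: pretend delivery succeeded
    return sum(1 for url in urls if send(url, title, body))
```

The key property is that adding a channel is a config change (another URL in APPRISE_URLS), not a code change.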
Ollama integration for local and cloud-hosted language models
Medium confidence: Integrates with Ollama (a local LLM runtime) to run open-source models (Llama, Mistral, etc.) locally or connect to remote Ollama instances. The container can use Ollama as a fallback provider when the Anthropic/OpenAI APIs are unavailable or rate-limited, or as the primary provider for privacy-sensitive workloads. The endpoint is set via the OLLAMA_BASE_URL environment variable, and the AI CLIs are configured to route requests to it. This enables cost-effective, privacy-preserving AI coding without external API dependencies.
Provides seamless Ollama integration via environment variable configuration, enabling fallback to local models without code changes — most AI tools require separate Ollama client libraries or custom provider implementations
Eliminates API costs and external dependencies for privacy-sensitive workloads; local model execution reduces latency from 500-2000ms (cloud APIs) to 100-500ms (local GPU) at the cost of lower code quality
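The fallback routing described above reduces to a small decision function. A sketch under stated assumptions: the default Ollama port (11434) is standard, but the exact routing rules here are illustrative, not HolyClaude's implementation:

```python
def choose_endpoint(env: dict, cloud_available: bool) -> str:
    """Route to the cloud API when it is reachable and a key is set;
    otherwise fall back to the Ollama endpoint from OLLAMA_BASE_URL."""
    ollama = env.get("OLLAMA_BASE_URL", "http://localhost:11434")
    if cloud_available and env.get("ANTHROPIC_API_KEY"):
        return "https://api.anthropic.com"
    return ollama
```

Flipping the priority (always return `ollama` when OLLAMA_BASE_URL is set) would express the privacy-first mode where local models are primary rather than fallback.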
Image variant selection (Full vs. Slim) with on-demand package installation
Medium confidence: Ships HolyClaude in two Docker image variants: Full (~3 GB, all 7 AI CLIs and 50+ dev tools pre-installed) and Slim (~1.5 GB, core tools only, with on-demand installation). Users select the variant via the docker-compose image tag (latest for Full, slim for Slim). The Slim variant installs additional tools at container startup via the appropriate package manager (apt, npm, pip) when requested through environment variables, trading a smaller initial image and faster download against a longer first startup. Users can thus optimize for startup speed (Full) or image size (Slim) depending on their deployment constraints.
Provides two image variants with different optimization targets (size vs. startup time) and on-demand installation for Slim, enabling users to choose based on deployment constraints — most containerized tools offer only a single image or require manual tool installation
Slim variant reduces image size by 50% and download time from 10 minutes to 3-5 minutes; Full variant eliminates startup delays from on-demand installation, providing flexibility for different deployment scenarios
Environment variable-driven configuration with .env file support
Medium confidence: Manages configuration via environment variables and .env files, letting users customize Claude Code settings, AI provider credentials, notification endpoints, and container behavior without modifying the Dockerfile or docker-compose.yaml. The bootstrap system reads .env files at container startup and injects the variables into the supervised services. Common variables include ANTHROPIC_API_KEY, OLLAMA_BASE_URL, APPRISE_URLS, PUID/PGID for identity mapping, and feature flags for enabling or disabling specific services. This enables reproducible, version-controlled configuration across deployments.
Provides .env file support with bootstrap-time variable injection into s6-overlay services, enabling configuration without Dockerfile modification — most containerized tools require environment variables to be passed at docker run time or hardcoded in Dockerfile
Enables version-controlled, reproducible configuration across environments; reduces configuration drift compared to manual environment variable passing or Dockerfile modifications
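A minimal model of the .env parsing step: KEY=VALUE lines, `#` comments skipped, surrounding quotes stripped. Real parsers (and Compose's own env_file handling) cover more edge cases; this shows only the core behavior:

```python
def parse_env_file(text: str) -> dict:
    """Minimal .env parser: KEY=VALUE lines, '#' comments, quotes stripped.
    Sketches what a bootstrap does before injecting variables into services."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip().strip('"').strip("'")
    return env
```

Because the .env file is plain text, it can be templated per environment and diffed in version control (with secrets excluded via .gitignore), which is what makes the configuration reproducible.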
Capabilities are decomposed by AI analysis. Each maps to specific user intents and improves with match feedback.
Related Artifacts (sharing capabilities)
Artifacts that share capabilities with HolyClaude, ranked by overlap. Discovered automatically through the match graph.
claude-code-guide
Claude Code Guide: setup, commands, workflows, agents, skills, and tips and tricks to go from beginner to power user
commander
Commander, your AI coding command centre for all your AI coding CLI agents
Claude Code
Anthropic's agentic coding tool that lives in your terminal and helps you turn ideas into code.
docker-mcp
A docker MCP Server (modelcontextprotocol)
pinme
Deploy Your Frontend in a Single Command. Claude Code Skills supported.
shennian
Shennian — AI Agent Mobile Console CLI
Best For
- ✓DevOps engineers deploying AI coding agents in containerized infrastructure
- ✓Teams running Claude Code in CI/CD pipelines or headless environments
- ✓Developers who want reproducible Claude Code environments across machines
- ✓Teams deploying HolyClaude in shared environments or on remote servers
- ✓Non-technical stakeholders who need browser-based access to AI coding agents
- ✓DevOps teams exposing AI workstations through reverse proxies or load balancers
- ✓Teams new to Docker who want a working HolyClaude deployment without Docker expertise
- ✓DevOps engineers using docker-compose for local development or small-scale deployments
Known Limitations
- ⚠Requires Docker 20.10.0+ and 2GB minimum RAM allocation
- ⚠OAuth flow requires interactive browser access during first authentication
- ⚠API key must be injected via environment variables or mounted .env files — no interactive prompts in headless mode
- ⚠Container architecture limited to amd64 and arm64; no 32-bit support
- ⚠Port 3001 must be exposed and accessible from client machines
- ⚠No built-in authentication or authorization layer — relies on network isolation or reverse proxy auth
Repository Details
Last commit: Apr 10, 2026