open-terminal
A computer you can curl ⚡
Capabilities (12 decomposed)
background-command-execution-with-streaming-output
[Medium confidence] Executes shell commands asynchronously via the POST /execute endpoint and streams output to JSONL log files, tracking process state in an in-memory registry. Uses FastAPI background tasks to decouple command submission from execution, so agents can poll status or stream results without blocking. Each BackgroundProcess instance maintains the PID, the original command, a ProcessRunner reference, and an async log task that captures stdout/stderr separately or merged.
Decouples command submission from execution using FastAPI background tasks with separate stdout/stderr capture to JSONL files, enabling agents to submit fire-and-forget commands while retaining full output auditability without blocking the HTTP response.
Lighter-weight than container-per-command approaches (Docker Exec) and more flexible than a plain subprocess.run() because it provides non-blocking execution, streaming output, and process state tracking via HTTP polling.
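The pattern above (submit, then stream from an append-only JSONL log) can be sketched with the stdlib alone. The field names (`stream`, `data`) and helper names are illustrative assumptions, not Open Terminal's actual log schema:

```python
import json
import tempfile
from pathlib import Path

def append_event(log: Path, stream: str, data: str) -> None:
    """Append one output event as a JSON line (one self-contained record per line)."""
    with log.open("a", encoding="utf-8") as f:
        f.write(json.dumps({"stream": stream, "data": data}) + "\n")

def read_events(log: Path) -> list[dict]:
    """Replay the log: each line parses independently, so a reader can
    tail the file while the writer is still appending."""
    return [json.loads(line) for line in log.read_text(encoding="utf-8").splitlines()]

log = Path(tempfile.mkdtemp()) / "process-1234.jsonl"
append_event(log, "stdout", "build started\n")
append_event(log, "stderr", "warning: deprecated flag\n")
events = read_events(log)
print([e["stream"] for e in events])  # stdout and stderr kept separate
```

Because every record is a complete JSON document on its own line, a poller can read only the bytes appended since its last offset, which is what makes streaming over HTTP cheap.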
interactive-pty-terminal-sessions-over-websocket
[Medium confidence] Creates and manages interactive pseudo-terminal (PTY) sessions via WebSocket at the /api/terminals/* endpoints, enabling real-time bidirectional communication between agents and shell environments. Each terminal session maintains its own process state, environment variables, and working directory. WebSocket handlers forward stdin/stdout/stderr in real time, supporting interactive tools like editors, REPLs, and shell prompts that require immediate feedback.
Implements full PTY emulation over WebSocket with separate stdin/stdout/stderr channels, enabling agents to drive interactive shell tools that require immediate feedback and terminal control sequences, rather than just fire-and-forget command execution.
More interactive than REST-based polling (background-command-execution) and lighter-weight than SSH tunneling because it uses native WebSocket for bidirectional communication without requiring SSH keys or port forwarding.
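What a PTY gives the child process can be shown with Python's stdlib `pty` module (Unix only). This sketch omits the WebSocket layer entirely and just demonstrates the terminal pair a session like this is built on; it is not Open Terminal's implementation:

```python
import os
import pty
import subprocess

# Allocate a pseudo-terminal pair: the child sees a real terminal device,
# so interactive tools behave as they would in a live shell.
master_fd, slave_fd = pty.openpty()

proc = subprocess.Popen(
    ["echo", "hello from a pty"],
    stdin=slave_fd, stdout=slave_fd, stderr=slave_fd,
    close_fds=True,
)
proc.wait()
os.close(slave_fd)

# Read what the child wrote to its terminal; a server would forward
# these bytes over the WebSocket to the agent in real time.
output = os.read(master_fd, 1024).decode()
os.close(master_fd)
print(output)
```

Note the output arrives with `\r\n` line endings: the terminal driver translates newlines, which is exactly the kind of control-sequence behavior plain pipes cannot reproduce.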
multi-user-mode-with-user-isolation
[Medium confidence] Supports multi-user deployments via an X-User-Id header that scopes all operations (file access, process execution, terminal sessions) to individual users. Each user gets an isolated filesystem view, a separate background process registry, and independent terminal sessions. User isolation is enforced at the FastAPI dependency layer (the get_filesystem() dependency) and propagated through all subsystems (ProcessRunner, TerminalSession, NotebookSession).
Implements comprehensive user isolation at the application layer via FastAPI dependency injection, scoping all operations (files, processes, terminals, notebooks) to individual users based on the X-User-Id header without requiring OS-level containerization.
Simpler to deploy than per-user containers because it uses logical isolation, but weaker than OS-level isolation and requires careful implementation to prevent isolation escapes.
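Header-scoped logical isolation for one subsystem can be sketched as a registry keyed by the caller's X-User-Id. The class and method names here are hypothetical, not Open Terminal's:

```python
from collections import defaultdict

class UserScopedRegistry:
    """Per-user process registry: every lookup is keyed by the caller's
    X-User-Id, so one user can never list or signal another user's processes."""

    def __init__(self):
        self._by_user = defaultdict(dict)  # user_id -> {pid: info}

    def register(self, user_id: str, pid: int, info: dict) -> None:
        self._by_user[user_id][pid] = info

    def list_for(self, user_id: str) -> dict:
        # Return a copy so callers cannot mutate another user's view.
        return dict(self._by_user[user_id])

reg = UserScopedRegistry()
reg.register("alice", 101, {"cmd": "sleep 60"})
reg.register("bob", 202, {"cmd": "make build"})
print(reg.list_for("alice"))  # only alice's processes are visible
```

The weakness the comparison sentence mentions is visible here: nothing at the OS level stops code that bypasses the registry, which is why this model depends on every code path honoring the user scope.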
health-check-and-service-readiness-probing
[Medium confidence] Exposes a GET /health endpoint that returns service health status and readiness information, enabling load balancers and orchestration systems to detect when Open Terminal is ready to accept requests. The health check is lightweight and does not require authentication, making it suitable for frequent polling by infrastructure monitoring systems.
Provides a lightweight, unauthenticated /health endpoint suitable for frequent polling by load balancers and orchestration systems, enabling infrastructure-level health monitoring without requiring API keys.
Simpler than full observability solutions because it provides a single endpoint, but less detailed than Prometheus metrics because it only returns binary health status.
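A toy version of such a probe target, built on the stdlib `http.server`, shows why an unauthenticated health check is cheap to poll. The handler and response payload are illustrative assumptions, not Open Terminal's actual response shape:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/health":
            # No auth check: load balancers can poll this freely.
            body = json.dumps({"status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), HealthHandler)  # port 0: OS picks a free port
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

with urllib.request.urlopen(f"http://127.0.0.1:{port}/health") as resp:
    status = json.loads(resp.read())["status"]
server.shutdown()
print(status)  # ok
```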
user-isolated-filesystem-abstraction-with-userfs
[Medium confidence] Provides multi-user filesystem isolation via a UserFS abstraction layer that scopes all file operations to a user-specific directory based on the X-User-Id header. Implemented as a FastAPI dependency (the get_filesystem() dependency), it intercepts all file reads/writes and enforces path normalization to prevent directory traversal attacks. Each user sees a sandboxed view of the filesystem rooted at their user directory.
Implements filesystem isolation via FastAPI dependency injection with a UserFS abstraction that normalizes and scopes all file paths to user directories, preventing directory traversal without requiring OS-level containerization or separate processes.
Simpler to deploy than per-user containers or chroot jails because it uses logical isolation at the application layer, but weaker than OS-level isolation and requires careful path validation to prevent escapes.
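The path-normalization check described above can be sketched as follows. `resolve_user_path` is a hypothetical helper, not UserFS's actual API; the key idea is to normalize first, then verify the result is still inside the user's root:

```python
from pathlib import Path

def resolve_user_path(user_root: str, requested: str) -> Path:
    """Map a user-supplied path onto the user's sandbox root, rejecting
    any path that escapes it after normalization (e.g. via '..')."""
    root = Path(user_root).resolve()
    candidate = (root / requested.lstrip("/")).resolve()
    if candidate != root and root not in candidate.parents:
        raise PermissionError(f"path escapes user root: {requested}")
    return candidate

print(resolve_user_path("/srv/users/alice", "notes/todo.txt"))
```

Checking containment *after* `resolve()` is what defeats traversal: `../bob/secret` normalizes to a path outside `root` and is rejected, whereas a naive string-prefix check before normalization would let it through.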
file-system-operations-with-archive-support
[Medium confidence] Exposes comprehensive file operations via /files/* REST endpoints, including read, write, list, delete, and archive (tar/zip) operations. Implements atomic writes with temporary files to prevent corruption, supports streaming large file downloads, and provides directory listings with metadata (size, modification time, permissions). Archive operations support both creation and extraction with configurable compression formats.
Combines atomic file writes (using temporary files), streaming downloads, and archive operations (tar/zip) in a single REST API with UserFS isolation, enabling agents to safely manipulate files without direct filesystem access while supporting bulk operations.
More comprehensive than simple file read/write APIs because it includes archive support and atomic writes, but slower than direct filesystem access because all operations go through HTTP and path normalization.
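The atomic-write technique (write to a temporary file, then rename into place) can be sketched with the stdlib. `atomic_write` is an illustrative helper, not Open Terminal's implementation:

```python
import os
import tempfile
from pathlib import Path

def atomic_write(path: Path, data: bytes) -> None:
    """Write to a temp file in the same directory, then rename into place.
    os.replace() is atomic on POSIX, so readers never observe a half-written file."""
    fd, tmp = tempfile.mkstemp(dir=path.parent)
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())  # ensure bytes hit disk before the rename
        os.replace(tmp, path)
    except BaseException:
        os.unlink(tmp)  # clean up the orphaned temp file on failure
        raise

target = Path(tempfile.mkdtemp()) / "config.json"
atomic_write(target, b'{"ok": true}')
print(target.read_bytes())
```

The temp file must live in the same directory as the target: `os.replace` is only atomic within a single filesystem, and `/tmp` is often a different mount.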
jupyter-notebook-execution-with-cell-isolation
[Medium confidence] Executes Jupyter notebooks via /notebooks/* endpoints with per-cell execution tracking and output capture. Maintains notebook session state across multiple cell executions, enabling agents to run data analysis workflows. Each cell execution is tracked separately with input/output/error metadata, and the kernel state persists across requests, allowing subsequent cells to reference variables from previous cells.
Provides stateful Jupyter kernel execution via REST API with per-cell tracking and output capture, enabling agents to run multi-step data analysis workflows where later cells can reference variables from earlier cells, all without requiring direct Jupyter server access.
More stateful than subprocess-based Python execution because it maintains kernel state across requests, but less flexible than full Jupyter Lab because it lacks an interactive UI and notebook editing capabilities.
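The key property, kernel state persisting across cell executions, can be imitated with a shared `exec()` namespace. This `NotebookSession` class is a simplified stand-in for illustration, not the real Jupyter-backed session:

```python
class NotebookSession:
    """Minimal stand-in for a persistent kernel: a shared namespace lets
    later 'cells' reference names defined by earlier ones."""

    def __init__(self):
        self.namespace = {}  # survives across run_cell() calls
        self.cells = []      # per-cell execution records

    def run_cell(self, source: str) -> dict:
        record = {"source": source, "error": None}
        try:
            exec(source, self.namespace)
        except Exception as exc:
            record["error"] = repr(exc)  # capture, don't crash the session
        self.cells.append(record)
        return record

session = NotebookSession()
session.run_cell("x = 21")
session.run_cell("y = x * 2")   # sees x from the previous cell
print(session.namespace["y"])   # 42
```

A fresh subprocess per request would lose `x` between calls; keeping one namespace (or, in the real system, one kernel) per session is what makes multi-step workflows possible.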
port-detection-and-http-proxying
[Medium confidence] Detects open ports on the host via the /ports endpoint and provides HTTP proxying via /proxy/* to forward requests to services running on those ports. Enables agents to discover and interact with services (web servers, APIs, databases) running locally without direct network access. Proxying handles request/response forwarding with header manipulation and connection pooling.
Combines port detection (via netstat/ss) with HTTP proxying to enable agents to discover and interact with local services without direct network access, handling request/response forwarding with connection pooling and header manipulation.
More discoverable than hardcoded port configurations because it dynamically detects open ports, but less secure than explicit service registration because any open port is accessible to agents.
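Port detection can also be done with a plain TCP connect probe, as this stdlib sketch shows. The description above mentions netstat/ss; this is an alternative illustration of the same idea, not the project's method:

```python
import socket

def port_open(host: str, port: int, timeout: float = 0.25) -> bool:
    """Probe a TCP port by attempting a connection (no payload is sent)."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

# Stand up a throwaway listener so the probe has something to find.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

is_open = port_open("127.0.0.1", port)
server.close()
is_closed_now = not port_open("127.0.0.1", port)
print(is_open, is_closed_now)
```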
api-key-authentication-with-constant-time-comparison
[Medium confidence] Implements Bearer token authentication via a verify_api_key() FastAPI dependency that validates API keys using constant-time comparison (hmac.compare_digest()) to prevent timing attacks. All endpoints except /health and /system require a valid API key in the Authorization header. Authentication is enforced at the dependency injection layer, making it transparent to endpoint handlers.
Uses constant-time comparison (hmac.compare_digest()) for API key validation to prevent timing attacks, implemented as a FastAPI dependency that transparently enforces authentication across all protected endpoints.
Simpler than OAuth2/JWT but less flexible because it uses a single shared key; more secure than naive string comparison because constant-time comparison prevents attackers from inferring key characters via timing analysis.
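The constant-time check is easy to reproduce. The key value and helper signature below are illustrative, though `hmac.compare_digest()` is the call the project is described as using:

```python
import hmac

EXPECTED_KEY = "sk-example-key"   # hypothetical; a real key comes from config/env

def verify_api_key(authorization: str) -> bool:
    """Compare in constant time so response latency leaks nothing about
    how many leading characters of a guessed key were correct."""
    scheme, _, token = authorization.partition(" ")
    if scheme != "Bearer":
        return False
    return hmac.compare_digest(token, EXPECTED_KEY)

print(verify_api_key("Bearer sk-example-key"))   # True
print(verify_api_key("Bearer sk-wrong"))         # False
```

A naive `token == EXPECTED_KEY` short-circuits at the first mismatched character, so an attacker measuring many requests can recover the key one character at a time; `compare_digest` takes time independent of where the mismatch occurs.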
feature-discovery-via-config-endpoint
[Medium confidence] Exposes a GET /api/config endpoint that returns feature flags and capability metadata, enabling clients to discover which features are enabled (e.g., notebook execution, multi-user mode, MCP server). Returns JSON with boolean flags for each feature, allowing agents to conditionally use capabilities based on server configuration without hardcoding assumptions.
Provides a dedicated /api/config endpoint that returns feature flags and capability metadata, enabling clients to discover enabled features without trial-and-error or hardcoded assumptions about server configuration.
More explicit than inferring capabilities from error responses because it provides upfront feature discovery, but less detailed than OpenAPI/GraphQL introspection because it only returns boolean flags.
llm-system-prompt-generation
[Medium confidence] Exposes a GET /system endpoint that returns a system prompt describing Open Terminal's capabilities, API endpoints, and usage patterns. Designed for LLM context injection, the prompt includes endpoint descriptions, authentication requirements, and examples of common workflows. Enables LLMs to understand how to use Open Terminal without requiring external documentation.
Generates a machine-readable system prompt describing Open Terminal's API and capabilities, enabling LLMs to understand how to use the service without external documentation or manual prompt engineering.
More convenient than external documentation because the prompt is served dynamically, but less detailed than full OpenAPI specs because it is designed for LLM readability rather than machine parsing.
mcp-server-integration-for-tool-calling
[Medium confidence] Implements a Model Context Protocol (MCP) server at open_terminal/mcp_server.py that exposes Open Terminal capabilities as MCP tools, enabling Claude and other MCP-compatible LLMs to call Open Terminal functions directly. The MCP server wraps REST endpoints as tool definitions with JSON schemas, allowing LLMs to invoke commands, file operations, and terminal sessions through the MCP protocol without manual HTTP calls.
Implements a full MCP server that wraps Open Terminal REST endpoints as MCP tools with JSON schemas, enabling Claude and other MCP-compatible LLMs to invoke shell commands, file operations, and terminal sessions through the standardized MCP protocol.
More standardized than custom HTTP integration because it uses the MCP protocol, enabling compatibility with multiple LLM providers; more seamless than manual prompt engineering because tools are automatically available to the LLM.
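An MCP tool definition wrapping one endpoint might look like the following. The field layout (name, description, inputSchema) follows the MCP tool-definition shape, but the specific tool name and schema here are assumptions for illustration, not Open Terminal's actual definitions:

```python
import json

# Hypothetical MCP tool definition wrapping the POST /execute endpoint.
execute_tool = {
    "name": "execute_command",
    "description": "Run a shell command in the sandbox and return its output.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "command": {
                "type": "string",
                "description": "Shell command to run",
            },
            "timeout": {
                "type": "number",
                "description": "Seconds before the command is killed",
            },
        },
        "required": ["command"],
    },
}

# The server advertises such definitions to the client; the LLM then emits
# structured tool calls validated against inputSchema instead of raw HTTP.
print(json.dumps(execute_tool, indent=2))
```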
Capabilities are decomposed by AI analysis. Each maps to specific user intents and improves with match feedback.
Related Artifacts (sharing capabilities)
Artifacts that share capabilities with open-terminal, ranked by overlap. Discovered automatically through the match graph.
PiloTY
AI pilot for PTY operations that enables agents to control interactive terminals with stateful sessions, SSH connections, and background process management
DesktopCommanderMCP
An MCP server for Claude that gives it terminal control, file system search, and diff-based file editing capabilities
gemini-cli
An open-source AI agent that brings the power of Gemini directly into your terminal.
E2B
Cloud sandboxes for AI agents — secure code execution, file system access, custom environments.
BondAI
Code interpreter with CLI & RESTful/WebSocket API
E2B
Open-source, secure environment with real-world tools for enterprise-grade agents.
Best For
- ✓ AI agents orchestrating multi-step workflows with shell commands
- ✓ automation platforms requiring non-blocking command execution
- ✓ teams building agentic systems that need real-time process monitoring
- ✓ agents requiring interactive shell workflows with immediate feedback
- ✓ tools integrating with REPL environments (Python, Node.js, Ruby)
- ✓ systems needing stateful shell sessions with environment persistence
- ✓ SaaS platforms hosting AI agents for multiple customers
- ✓ shared infrastructure requiring strong user isolation
Known Limitations
- ⚠ In-memory process registry is not persisted — processes are lost on service restart
- ⚠ No built-in process timeout or resource limits — runaway commands can consume unbounded CPU/memory
- ⚠ JSONL log files are stored locally — requires external log aggregation for distributed deployments
- ⚠ No native support for process groups or signal forwarding beyond SIGTERM
- ⚠ WebSocket connections are stateful and not horizontally scalable without session affinity
- ⚠ No built-in terminal multiplexing (tmux/screen) — each session is isolated
Requirements
Input / Output
UnfragileRank
UnfragileRank is computed from adoption signals, documentation quality, ecosystem connectivity, match graph feedback, and freshness. No artifact can pay for a higher rank.
Repository Details
Last commit: Apr 17, 2026
About
A computer you can curl ⚡
Categories
Alternatives to open-terminal