stitch-skills
MCP Server · Free

A library of Agent Skills designed to work with the Stitch MCP server. Each skill follows the Agent Skills open standard for compatibility with coding agents such as Antigravity, Gemini CLI, Claude Code, and Cursor.
Capabilities (12 decomposed)
agent-agnostic skill installation and discovery
Medium confidence. Automatically detects active AI coding agents (Antigravity, Gemini CLI, Claude Code, Cursor) on the developer's system and installs standardized skills into agent-specific directories without manual configuration. Uses a skills CLI that scans the filesystem for agent installation paths and deploys skills following the Agent Skills open standard directory structure, enabling write-once, run-anywhere skill distribution across heterogeneous agent platforms.
Implements agent-agnostic skill distribution via automatic filesystem detection and standardized directory structure, eliminating the need for agent-specific skill versions or manual configuration per agent. The skills CLI acts as a universal installer that maps the Agent Skills open standard structure to each agent's expected skill location.
Unlike agent-specific skill marketplaces (e.g., Copilot Extensions for VS Code only), Stitch Skills works across Cursor, Claude Code, Gemini CLI, and Antigravity with a single installation, reducing maintenance burden for skill developers and enabling seamless agent switching for users.
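The detection step can be pictured as a lookup table plus a filesystem check. A minimal sketch, using illustrative directory names that are assumptions, not the paths the real skills CLI uses:

```python
from pathlib import Path

# Hypothetical mapping from agent name to the directory (relative to the
# user's home) where that agent discovers installed skills. These paths
# are illustrative assumptions, not the real locations.
AGENT_SKILL_DIRS = {
    "claude-code": ".claude/skills",
    "cursor": ".cursor/skills",
    "gemini-cli": ".gemini/skills",
    "antigravity": ".antigravity/skills",
}

def skill_install_dir(agent: str, home: str = "~") -> Path:
    """Resolve where a given agent would expect a skill to be installed."""
    if agent not in AGENT_SKILL_DIRS:
        raise ValueError(f"unknown agent: {agent}")
    return Path(home).expanduser() / AGENT_SKILL_DIRS[agent]

def detect_agents(home: str = "~") -> list[str]:
    """Agents whose skill directory already exists on this machine."""
    return [a for a in AGENT_SKILL_DIRS if skill_install_dir(a, home).is_dir()]
```

Installing a skill then reduces to copying its directory into each detected agent's path, which is what makes the distribution write-once, run-anywhere.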
standardized skill instruction and execution framework
Medium confidence. Provides a structured directory convention (SKILL.md, scripts/, resources/, examples/) that enables AI agents to consistently discover task instructions, validate outputs, and learn from reference implementations. Each skill follows the Agent Skills open standard, allowing agents to parse SKILL.md for mission/workflow/success criteria, execute validation scripts for quality enforcement, and reference example outputs for in-context learning without agent-specific adaptation.
Encodes skill semantics in a standardized directory structure (SKILL.md + scripts + resources + examples) that agents can parse and execute without custom integration, treating skills as self-contained, agent-agnostic modules. This contrasts with function-calling APIs that require schema definitions per provider.
More portable than OpenAI/Anthropic function-calling schemas (which are provider-specific) and more discoverable than unstructured GitHub repositories because the standard structure enables agents to automatically locate instructions, validation logic, and examples without documentation parsing.
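Under that convention, discovering a skill reduces to checking a fixed layout. A minimal sketch, using only the directory names named above:

```python
from pathlib import Path

# Optional components of the standard skill layout; only SKILL.md is
# treated as required in this illustrative check.
OPTIONAL_PARTS = ("scripts", "resources", "examples")

def inspect_skill(skill_dir: str) -> dict:
    """Report which parts of the standard skill layout are present."""
    root = Path(skill_dir)
    return {
        "has_manifest": (root / "SKILL.md").is_file(),
        "parts": [p for p in OPTIONAL_PARTS if (root / p).is_dir()],
    }
```

Because the layout is fixed, any agent can run a check like this without skill-specific or agent-specific integration code.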
reference implementation learning and in-context examples
Medium confidence. Provides syntactically valid reference implementations in the examples/ directory of each skill, enabling agents to learn expected output formats, coding patterns, and best practices through concrete examples. Agents can reference these examples during code generation to understand the desired output structure, style, and quality level, improving generation accuracy through in-context learning without requiring explicit instruction in SKILL.md.
Treats reference implementations as a first-class skill component (examples/ directory) that agents can reference during generation, enabling in-context learning without explicit instruction. This approach leverages agents' ability to learn from examples rather than relying solely on textual instructions.
More effective than textual instructions alone because agents can learn patterns from concrete code, and more maintainable than hardcoded generation logic because examples can be updated independently of skill logic.
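Collecting those reference implementations for the generation context can be as simple as reading examples/ into memory. A sketch; the file extensions below are assumptions:

```python
from pathlib import Path

def load_examples(skill_dir: str, exts=(".tsx", ".ts", ".jsx", ".js")) -> dict[str, str]:
    """Read reference implementations from a skill's examples/ directory.

    Returns {filename: source} for files an agent might include in its
    generation context as in-context examples.
    """
    examples: dict[str, str] = {}
    ex_dir = Path(skill_dir) / "examples"
    if not ex_dir.is_dir():
        return examples
    for f in sorted(ex_dir.rglob("*")):
        if f.is_file() and f.suffix in exts:
            examples[f.name] = f.read_text()
    return examples
```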
design system resource documentation and guidelines
Medium confidence. Provides structured reference materials, checklists, style guides, and API documentation in the resources/ directory of each skill, enabling agents to access design system guidelines, component specifications, and best practices during code generation. Resources serve as a knowledge base that agents can query to understand design system constraints, component APIs, styling conventions, and accessibility requirements, improving generation accuracy and consistency.
Organizes design system knowledge in a structured resources/ directory that agents can reference during code generation, treating design system documentation as a queryable knowledge base rather than static documentation. This approach enables agents to make informed decisions about component selection, styling, and accessibility without explicit instruction.
More accessible than external design system documentation because resources are co-located with skill logic, and more actionable than unstructured documentation because resources are organized by type (checklists, style guides, API docs).
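One way an agent might make resources/ queryable is to index files by document type. A sketch assuming hypothetical filename conventions (the real skills may organize resources differently):

```python
from pathlib import Path

# Assumed keyword-to-kind conventions for classifying resource files.
RESOURCE_KINDS = {
    "checklist": "checklist",
    "style": "style guide",
    "api": "API reference",
}

def index_resources(skill_dir: str) -> dict[str, list[str]]:
    """Group markdown files in resources/ by kind, keyed on a filename keyword."""
    index: dict[str, list[str]] = {kind: [] for kind in RESOURCE_KINDS.values()}
    res = Path(skill_dir) / "resources"
    if not res.is_dir():
        return index
    for f in sorted(res.glob("*.md")):
        for keyword, kind in RESOURCE_KINDS.items():
            if keyword in f.stem.lower():
                index[kind].append(f.name)
    return index
```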
design-to-react component code generation with prompt optimization
Medium confidence. Transforms UI design data from the Stitch MCP Server into production-ready React components by first optimizing design prompts via the enhance-prompt skill, then generating component code via the react-components skill. The pipeline extracts design semantics (layout, styling, interactivity) from design files and synthesizes React/TypeScript code with proper component structure, prop interfaces, and styling integration, guided by optimized prompts that clarify design intent for the code generation model.
Chains the enhance-prompt skill (which optimizes design descriptions for code generation) with the react-components skill (which synthesizes React code), creating a two-stage pipeline that improves code quality by clarifying design intent before generation. This contrasts with single-stage design-to-code tools that generate code directly from design metadata without semantic optimization.
More semantically aware than regex-based design-to-code converters because it uses LLM-based prompt optimization to extract and clarify design intent, and more flexible than template-based generators because it synthesizes code rather than filling templates.
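The two-stage shape of the pipeline can be sketched abstractly, with placeholder callables standing in for the two skills:

```python
from typing import Callable

def design_to_react(design_prompt: str,
                    enhance: Callable[[str], str],
                    generate: Callable[[str], str]) -> str:
    """Two-stage pipeline: clarify design intent, then generate code.

    `enhance` stands in for the enhance-prompt skill and `generate` for
    the react-components skill; both are placeholders, not the real
    skill implementations.
    """
    optimized = enhance(design_prompt)   # stage 1: prompt optimization
    return generate(optimized)           # stage 2: code synthesis
```

The point of the separation is that each stage can be improved or swapped independently of the other.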
multi-page website generation from design specifications
Medium confidence. Generates complete multi-page websites (HTML, CSS, JavaScript) from design specifications via the stitch-loop skill, which orchestrates iterative design-to-code transformation across multiple pages. The skill manages page-level decomposition, component reuse across pages, styling consistency, and navigation structure, producing a cohesive website codebase with shared component libraries and unified design system application.
Implements iterative design-to-code transformation via the stitch-loop skill, which decomposes multi-page websites into page-level tasks, manages component reuse across pages, and enforces styling consistency through a unified design system application. This orchestration approach enables scaling from single-page to multi-page generation without exponential complexity.
More sophisticated than single-page design-to-code tools because it manages cross-page consistency and component reuse, and more maintainable than manually-coded websites because styling and components are generated from a single design source.
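The orchestration loop described above, with a stand-in generate_page function and a shared component registry carried across pages, might look like:

```python
def generate_site(pages: dict[str, str], generate_page) -> dict:
    """Iterate over page specs, reusing components discovered so far.

    `generate_page(spec, shared)` is a placeholder for the per-page
    design-to-code step; it returns (page_html, new_components).
    """
    shared: dict[str, str] = {}   # components reused across pages
    site: dict[str, str] = {}
    for name, spec in pages.items():
        html, components = generate_page(spec, shared)
        shared.update(components)   # later pages see earlier components
        site[name] = html
    return {"pages": site, "components": shared}
```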
shadcn/ui component library integration and guidance
Medium confidence. Provides structured guidance for integrating shadcn/ui components into generated code via the shadcn-ui skill, which includes a component catalog, customization patterns, migration guides, and best practices. The skill enables agents to select appropriate shadcn/ui components for design specifications, apply customization patterns (theming, variant composition), and generate code that leverages the shadcn/ui library instead of building components from scratch, reducing code generation complexity and improving consistency with a widely used component library.
Encodes shadcn/ui component semantics, customization patterns, and best practices in a structured skill that agents can reference during code generation, enabling intelligent component selection and customization without requiring agents to parse shadcn/ui documentation. The skill includes a component catalog, customization guide, and migration guide as structured resources.
More integrated than generic component library documentation because it's specifically designed for agent-driven code generation and includes customization patterns and migration guides, and more maintainable than hardcoding component logic because customization is externalized to the skill resources.
design system documentation generation from specifications
Medium confidence. Generates comprehensive design system documentation (design-md skill) from design specifications in the Stitch MCP Server, producing markdown files that document design tokens, component definitions, usage patterns, and accessibility guidelines. The skill extracts semantic design information (colors, typography, spacing, components) from design metadata and synthesizes human-readable documentation that serves as a reference for developers and designers, enabling design-to-documentation transformation alongside design-to-code.
Transforms design metadata from Stitch MCP Server into structured markdown documentation via the design-md skill, enabling design-to-documentation generation alongside design-to-code. This approach treats documentation as a first-class output of the design system, not an afterthought, and keeps documentation synchronized with design specifications.
More maintainable than manually-written design system documentation because it's generated from a single source of truth (design specifications), and more comprehensive than design tool exports because it synthesizes semantic documentation rather than exporting raw design data.
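The token-documentation step can be illustrated by rendering a markdown table from structured token data. This is a simplified stand-in, not the design-md skill's actual output format:

```python
def tokens_to_markdown(tokens: dict[str, str]) -> str:
    """Render design tokens as a markdown table (token name -> value)."""
    lines = ["| Token | Value |", "| --- | --- |"]
    lines += [f"| {name} | {value} |" for name, value in tokens.items()]
    return "\n".join(lines)
```

Because the table is generated from the same design data that drives code generation, documentation stays synchronized with the design source of truth.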
video walkthrough generation for component usage and design patterns
Medium confidence. Generates video walkthroughs of components and design patterns via the remotion skill, which synthesizes video content that demonstrates component usage, design system patterns, and interaction flows. The skill uses Remotion (a React-based video generation framework) to programmatically create videos from design specifications and component code, producing shareable video documentation that complements static documentation and code examples.
Leverages Remotion (React-based video generation framework) to programmatically synthesize video walkthroughs from component code and design specifications, enabling automated video documentation generation without manual video production. This approach treats video as a generated artifact, not a manually-created asset.
More scalable than manually-recorded video tutorials because videos are generated programmatically from code, and more maintainable than static video files because video content can be regenerated when components or design specifications change.
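Remotion compositions are parameterized by a frame rate (fps) and a duration in frames; a tiny helper mirroring that arithmetic when planning a walkthrough's length:

```python
def duration_in_frames(seconds: float, fps: int = 30) -> int:
    """Convert a target video length in seconds to a Remotion-style frame count."""
    if seconds < 0 or fps <= 0:
        raise ValueError("seconds must be >= 0 and fps > 0")
    return round(seconds * fps)
```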
prompt enhancement for improved code generation quality
Medium confidence. Optimizes design prompts and specifications via the enhance-prompt skill to improve downstream code generation quality. The skill analyzes design descriptions, clarifies ambiguous specifications, adds missing context, and structures prompts to maximize code generation model comprehension. This preprocessing step transforms vague or incomplete design specifications into precise, well-structured prompts that guide code generation models toward higher-quality outputs, reducing the need for manual refinement.
Implements prompt optimization as a discrete, reusable skill that preprocesses design specifications before code generation, treating prompt quality as a first-class concern. This approach separates prompt engineering from code generation, enabling independent optimization and reuse across multiple code generation tasks.
More systematic than ad-hoc prompt engineering because it's a structured skill with defined inputs/outputs, and more effective than single-stage code generation because it optimizes prompts before code generation, improving downstream model comprehension.
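One simple, illustrative form of such preprocessing is appending the requirements a raw design description typically omits. The checklist items below are assumptions, not the skill's actual rules:

```python
# Hypothetical clarifications to append to an underspecified prompt.
DEFAULT_CLARIFICATIONS = (
    "Specify responsive behavior (mobile-first breakpoints).",
    "State accessibility requirements (ARIA roles, focus order).",
    "Name the styling approach and design tokens to use.",
)

def enhance_prompt(raw: str, clarifications=DEFAULT_CLARIFICATIONS) -> str:
    """Append structured requirements to an underspecified design prompt."""
    bullets = "\n".join(f"- {c}" for c in clarifications)
    return f"{raw.strip()}\n\nAdditional requirements:\n{bullets}"
```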
external system integration and workflow orchestration
Medium confidence. Enables skills to integrate with external systems (APIs, databases, design tools) and orchestrate complex workflows via the standardized scripts/ directory in each skill. Skills can define executable programs (bash, Node.js, Python) that perform network operations, API calls, data transformations, and system integrations, allowing skills to interact with external tools like Figma, GitHub, deployment platforms, and custom backends. The skill framework provides a standard interface for agents to invoke these integration scripts without knowledge of implementation details.
Provides a standardized scripts/ directory interface for skills to integrate with external systems without requiring agent-specific integration code. Skills define executable programs that agents invoke as black boxes, enabling flexible integration with any external system that has an API or CLI interface.
More flexible than hardcoded integrations because scripts can be any executable program, and more portable than agent-specific plugins because scripts follow a standard interface that agents can invoke uniformly.
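Invoking such a script as a black box relies only on the scripts/ location and the process exit-code convention. A minimal sketch, with the argument-passing shape as an assumption:

```python
import subprocess
from pathlib import Path

def run_skill_script(skill_dir: str, script: str, *args: str):
    """Invoke a skill's integration script as a black box.

    The caller needs no knowledge of the script's implementation or
    language; it relies only on the scripts/ location, argv, and the
    returned exit code / captured output.
    """
    path = Path(skill_dir) / "scripts" / script
    return subprocess.run([str(path), *args], capture_output=True, text=True)
```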
quality validation and automated output checking
Medium confidence. Enforces output quality through executable validation scripts in the scripts/ directory of each skill, enabling agents to automatically verify generated code, documentation, and other artifacts against success criteria defined in SKILL.md. Validation scripts perform syntax checking, semantic validation, style enforcement, and correctness verification, providing agents with automated feedback on output quality and enabling iterative refinement without manual review.
Embeds validation logic in executable scripts within each skill, enabling agents to automatically verify outputs against success criteria without external review. This approach treats validation as a first-class skill capability, not an afterthought, and enables iterative refinement loops where agents can improve outputs based on validation feedback.
More integrated than external linting tools because validation is part of the skill definition, and more actionable than static analysis because agents can use validation feedback to iteratively improve outputs.
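The validate-and-refine loop this enables can be sketched with placeholder callables standing in for the validation script and the agent's regeneration step:

```python
def refine_until_valid(output, validate, refine, max_rounds: int = 3):
    """Iterative refinement driven by validation feedback.

    `validate(output)` returns (ok, feedback); `refine(output, feedback)`
    produces an improved output. Both are placeholders for a skill's
    validation script and the agent's regeneration step.
    """
    for _ in range(max_rounds):
        ok, feedback = validate(output)
        if ok:
            return output, True
        output = refine(output, feedback)
    # Out of rounds: report whether the final attempt passes.
    return output, validate(output)[0]
```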
Capabilities are decomposed by AI analysis. Each maps to specific user intents and improves with match feedback.
Related Artifacts (sharing capabilities)
Artifacts that share capabilities with stitch-skills, ranked by overlap. Discovered automatically through the match graph.
Agent Skills
Open format and reference SDK for packaging reusable capabilities and expertise for AI agents. [#opensource](https://github.com/agentskills/agentskills)
openclaw-superpowers
44 plug-and-play skills for OpenClaw — self-modifying AI agent with cron scheduling, security guardrails, persistent memory, knowledge graphs, and MCP health monitoring. Your agent teaches itself new behaviors during conversation.
OpenMontage
World's first open-source, agentic video production system. 12 pipelines, 52 tools, 500+ agent skills. Turn your AI coding assistant into a full video production studio.
CrewAI
Multi-agent orchestration — role-playing agents with tasks, processes, tools, memory, and delegation.
everything-claude-code
The agent harness performance optimization system. Skills, instincts, memory, security, and research-first development for Claude Code, Codex, Opencode, Cursor and beyond.
babysitter
Babysitter enforces obedience on agentic workforces and enables them to manage extremely complex tasks and workflows through deterministic, hallucination-free self-orchestration
Best For
- ✓teams using multiple AI coding agents (Cursor, Claude Code, Gemini CLI, Antigravity)
- ✓skill developers building for cross-agent compatibility across multiple agent platforms
- ✓enterprises and organizations standardizing on open, agent-agnostic skill ecosystems
- ✓teams establishing consistent skill quality standards
- ✓skill developers wanting to guide agent behavior through examples
- ✓teams establishing coding standards and patterns for generated code
Known Limitations
- ⚠requires agents to be installed and discoverable via standard filesystem paths
- ⚠no support for agents with custom installation directories or containerized deployments
- ⚠agent detection is filesystem-based, not API-based, and may fail with non-standard setups
- ⚠agents must implement parsing logic for SKILL.md format; no guarantee of consistent interpretation across agent implementations
- ⚠validation scripts are agent-agnostic but may require agent-specific wrappers for execution
- ⚠no built-in versioning or backward compatibility mechanism for skill format evolution
UnfragileRank
UnfragileRank is computed from adoption signals, documentation quality, ecosystem connectivity, match graph feedback, and freshness. No artifact can pay for a higher rank.
Repository Details
Last commit: Mar 27, 2026