drawio-mcp-server vs IntelliCode
Side-by-side comparison to help you choose.
| Feature | drawio-mcp-server | IntelliCode |
|---|---|---|
| Type | MCP Server | Extension |
| UnfragileRank | 33/100 | 40/100 |
| Adoption | 0 | 1 |
| Quality | 0 | 0 |
| Ecosystem | 0 | 0 |
| Match Graph | 0 | 0 |
| Pricing | Free | Free |
| Capabilities | 12 decomposed | 6 decomposed |
| Times Matched | 0 | 0 |
Implements the Model Context Protocol (MCP) specification to expose Draw.io as a callable tool interface for LLM clients like Claude Desktop and oterm. The server receives structured tool calls from MCP clients, translates them into Draw.io operations via a WebSocket-connected browser extension, and returns structured responses back through the MCP protocol. Uses the @modelcontextprotocol/sdk (v1.10.1) for protocol implementation and event-driven message routing through Node.js EventEmitter.
Unique: Uses event-driven architecture with decoupled message bus (bus_request_stream and bus_reply_stream) to separate MCP protocol handling from WebSocket communication, enabling bidirectional LLM-to-Draw.io integration without direct API access
vs alternatives: First MCP server for Draw.io, enabling native integration with Claude and other MCP clients without requiring custom API wrappers or REST middleware
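The decoupled bus described above can be sketched with Node's built-in EventEmitter. This is a minimal illustration, not the server's actual implementation: the `bus_request_stream` / `bus_reply_stream` names come from the project, but the correlation-id scheme and helper names here are assumptions.

```typescript
// Sketch of a decoupled request/reply bus. Names other than the two
// stream names are illustrative, not the server's real API.
import { EventEmitter } from "node:events";
import { randomUUID } from "node:crypto";

const bus = new EventEmitter();

// Dispatch a tool call onto the request stream and resolve when a reply
// with the matching correlation id appears on the reply stream.
function dispatch(tool: string, params: object): Promise<unknown> {
  const id = randomUUID();
  return new Promise((resolve) => {
    const onReply = (reply: { id: string; result: unknown }) => {
      if (reply.id !== id) return;          // someone else's reply
      bus.off("bus_reply_stream", onReply); // clean up our listener
      resolve(reply.result);
    };
    bus.on("bus_reply_stream", onReply);
    bus.emit("bus_request_stream", { id, tool, params });
  });
}

// Stand-in for the WebSocket side: acknowledge each request asynchronously.
bus.on("bus_request_stream", (req: { id: string; tool: string }) => {
  setImmediate(() =>
    bus.emit("bus_reply_stream", { id: req.id, result: `${req.tool}: ok` })
  );
});

dispatch("add-rectangle", { x: 10, y: 20 }).then((r) => console.log(r));
// prints "add-rectangle: ok"
```

Because MCP handling only ever touches the bus, the WebSocket transport can be swapped or reconnected without changing the protocol layer.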
Operates a uWebSockets.js server on port 3000 that maintains persistent WebSocket connections with the Draw.io MCP Browser Extension, enabling real-time bidirectional message exchange. Commands from MCP clients are queued and sent to the extension, which executes them in the Draw.io DOM context and returns results asynchronously. The event bus (Node.js EventEmitter) decouples incoming MCP requests from outgoing WebSocket messages, allowing multiple concurrent diagram operations.
Unique: Uses uWebSockets.js (high-performance C++ WebSocket library) with event-driven message bus decoupling to handle concurrent MCP requests without blocking browser extension communication, enabling non-blocking async operation queuing
vs alternatives: Faster and more responsive than polling-based approaches; event-driven architecture prevents head-of-line blocking when multiple diagram operations are queued simultaneously
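One way the queuing behavior above can work is a buffer that holds commands while the extension socket is down and flushes them in order once it connects. A minimal sketch under that assumption — the class and method names are illustrative, and the `sent` array stands in for actual `ws.send` calls on a uWebSockets.js socket:

```typescript
// Sketch of command queuing across extension connect/disconnect.
// All names here are illustrative, not the server's real API.
type Command = { tool: string; params: object };

class ExtensionLink {
  private connected = false;
  private pending: Command[] = [];
  public sent: Command[] = []; // stands in for ws.send(JSON.stringify(cmd))

  send(cmd: Command): void {
    if (this.connected) {
      this.sent.push(cmd);    // socket is up: transmit immediately
    } else {
      this.pending.push(cmd); // socket is down: buffer until connect
    }
  }

  onConnect(): void {
    this.connected = true;
    // Flush in FIFO order so operations apply in the order requested.
    for (const cmd of this.pending.splice(0)) this.sent.push(cmd);
  }
}

const link = new ExtensionLink();
link.send({ tool: "add-rectangle", params: { x: 0, y: 0 } });
link.send({ tool: "add-text", params: { label: "Hi" } });
link.onConnect();
console.log(link.sent.length); // both buffered commands were flushed
```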
Manages WebSocket connection lifecycle with the Draw.io MCP Browser Extension, including initial handshake, connection validation, and graceful disconnection handling. When the extension connects, the server validates the connection, registers event listeners for incoming messages, and begins routing MCP requests to the extension. On disconnection, the server cleans up event listeners and queues pending operations for retry or failure notification to MCP clients.
Unique: Implements explicit handshake validation with the browser extension to ensure protocol compatibility before routing MCP requests, preventing invalid operations on incompatible extension versions
vs alternatives: Handshake validation catches version mismatches early; cleaner than silent failures when extension protocol changes
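The handshake check might look like the sketch below. The message shape and the protocol-version field are assumptions; the source only states that the server validates the connection before routing MCP requests.

```typescript
// Sketch of first-message handshake validation. The "handshake" message
// shape and protocol field are assumed, not taken from the real protocol.
const SUPPORTED_PROTOCOL = 1;

type Handshake = { type?: string; protocol?: number };

function validateHandshake(raw: string): { ok: boolean; reason?: string } {
  let msg: Handshake;
  try {
    msg = JSON.parse(raw);
  } catch {
    return { ok: false, reason: "malformed JSON" };
  }
  if (msg.type !== "handshake") {
    return { ok: false, reason: "expected handshake as first message" };
  }
  if (msg.protocol !== SUPPORTED_PROTOCOL) {
    // Fail fast on version mismatch instead of routing requests that the
    // extension may silently misinterpret.
    return { ok: false, reason: `unsupported protocol ${msg.protocol}` };
  }
  return { ok: true };
}

console.log(validateHandshake('{"type":"handshake","protocol":1}').ok); // true
```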
Maintains a registry of available tools (add-rectangle, update-cell-properties, delete-cell, etc.) with their schemas, descriptions, and input/output specifications. When an MCP client connects, the server exposes this tool registry through the MCP protocol, allowing clients to discover available operations and their parameters. Tools are dynamically loaded from the tool system and registered with their zod schemas, enabling MCP clients to understand tool capabilities without hardcoding.
Unique: Exposes tool registry through MCP protocol with full schema information, enabling LLM clients to understand tool capabilities and constraints without external documentation
vs alternatives: Dynamic tool discovery is more flexible than hardcoded tool lists; schema exposure enables LLM agents to generate valid tool calls without trial-and-error
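A registry of this shape can be sketched as below. The real server registers zod schemas through @modelcontextprotocol/sdk; a plain descriptor type stands in here so the example has no dependencies, and the field layout is illustrative.

```typescript
// Sketch of a discoverable tool registry. The descriptor type stands in
// for zod schemas; field names are illustrative.
type ToolSpec = {
  name: string;
  description: string;
  inputs: Record<string, string>; // param name -> type name
};

const registry = new Map<string, ToolSpec>();

function registerTool(spec: ToolSpec): void {
  registry.set(spec.name, spec);
}

registerTool({
  name: "add-rectangle",
  description: "Create a rectangle at the given position",
  inputs: { x: "number", y: "number", width: "number", height: "number" },
});
registerTool({
  name: "delete-cell",
  description: "Remove an element from the diagram",
  inputs: { cellId: "string" },
});

// What an MCP client would receive on tool listing: every tool with its
// full input specification, so valid calls can be generated without docs.
console.log([...registry.values()].map((t) => t.name));
// [ 'add-rectangle', 'delete-cell' ]
```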
Provides tools to query the current state of a Draw.io diagram without modifying it: get-selected-cell retrieves properties of the currently selected element, get-shape-categories lists available shape libraries, get-shapes-in-category enumerates shapes within a category, and get-shape-by-name finds specific shapes by name. These tools execute read-only queries through the WebSocket connection to the browser extension, which accesses the Draw.io DOM to extract metadata and return structured JSON responses.
Unique: Implements read-only query tools that execute in the Draw.io DOM context through the browser extension, providing direct access to diagram metadata without requiring diagram export or serialization
vs alternatives: Faster than exporting and parsing diagram XML; provides real-time access to current diagram state without round-tripping through file I/O
Provides tools to create diagram elements (rectangles, circles, diamonds, text, connectors) with validated properties using zod schema validation. Tools like add-rectangle, add-circle, add-diamond, add-text, and add-connector accept structured input parameters (position, size, style, label, connections) that are validated against predefined schemas before being sent to the Draw.io extension. The extension executes the creation in the Draw.io DOM and returns the created element's ID and properties.
Unique: Uses zod schema validation to enforce input correctness before WebSocket transmission, preventing invalid diagram operations from reaching the browser extension and reducing round-trip error handling
vs alternatives: Schema validation at the server layer catches errors early and provides clear error messages to LLM clients; faster than trial-and-error approaches where invalid operations are sent to Draw.io and rejected
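The validate-before-send flow can be illustrated with a hand-written checker standing in for the real zod schema (zod would express the same constraints declaratively). The field names mirror the rectangle parameters described above but the exact constraints are assumptions.

```typescript
// Sketch of server-side input validation before WebSocket transmission.
// A manual checker stands in for zod; constraints are illustrative.
type RectInput = { x: number; y: number; width: number; height: number };

type Validation =
  | { ok: true; value: RectInput }
  | { ok: false; error: string };

function validateRect(input: Partial<RectInput>): Validation {
  for (const field of ["x", "y", "width", "height"] as const) {
    if (typeof input[field] !== "number") {
      return { ok: false, error: `${field} must be a number` };
    }
  }
  if (input.width! <= 0 || input.height! <= 0) {
    return { ok: false, error: "width and height must be positive" };
  }
  return { ok: true, value: input as RectInput };
}

// Invalid input is rejected here, with a clear message for the LLM client,
// before any round trip to the browser extension.
console.log(validateRect({ x: 0, y: 0, width: -5, height: 10 }));
```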
Provides tools to modify existing diagram elements after creation: update-cell-properties changes properties of a selected or specified element (label, style, position, size), delete-cell removes elements from the diagram, and style-cell applies predefined or custom styling. Modifications are sent through the WebSocket connection to the browser extension, which updates the Draw.io DOM and returns confirmation with updated element state. Uses event-driven message routing to queue modifications and handle asynchronous responses.
Unique: Separates element creation from modification into distinct tools, allowing LLM agents to create a diagram structure first, then refine properties in a second pass without re-creating elements
vs alternatives: Enables iterative diagram refinement without full diagram regeneration; more efficient than recreating elements when only properties change
Provides the add-connector tool to create connections between diagram elements with validated source and target element IDs. The tool accepts source element ID, target element ID, and optional label/style properties, validates the IDs exist, and sends the connector creation request through WebSocket to the Draw.io extension. The extension creates the connector in the DOM and returns the connector's ID and properties, enabling programmatic relationship mapping in diagrams.
Unique: Validates element IDs before sending connector creation request, preventing orphaned connectors and ensuring diagram structural integrity at the server layer
vs alternatives: Server-side validation prevents invalid connectors from being created in Draw.io; reduces error handling complexity in LLM agents by failing fast with clear error messages
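The ID check might be as simple as the sketch below. The set of known cell IDs stands in for state the server would accumulate from earlier create-tool replies; all names are illustrative.

```typescript
// Sketch of endpoint-id validation for add-connector. The id set stands
// in for state tracked from earlier element-creation replies.
const knownCellIds = new Set<string>(["cell-1", "cell-2"]);

// Returns null when the connector is valid (forward to the extension),
// or an error message to fail fast back to the MCP client.
function validateConnector(source: string, target: string): string | null {
  if (!knownCellIds.has(source)) return `unknown source id: ${source}`;
  if (!knownCellIds.has(target)) return `unknown target id: ${target}`;
  return null;
}

console.log(validateConnector("cell-1", "cell-2")); // null (valid)
console.log(validateConnector("cell-1", "cell-9")); // error message
```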
+4 more capabilities
Provides AI-ranked code completion suggestions with star ratings based on statistical patterns mined from thousands of open-source repositories. Uses machine learning models trained on public code to predict the most contextually relevant completions and surfaces them first in the IntelliSense dropdown, reducing cognitive load by filtering low-probability suggestions.
Unique: Uses statistical ranking trained on thousands of public repositories to surface the most contextually probable completions first, rather than relying on syntax-only or recency-based ordering. The star-rating visualization explicitly communicates confidence derived from aggregate community usage patterns.
vs alternatives: Ranks completions by real-world usage frequency across open-source projects rather than by a general-purpose language model, making suggestions more aligned with idiomatic patterns than generic code-LLM completions.
Extends IntelliSense completion across Python, TypeScript, JavaScript, and Java by analyzing the semantic context of the current file (variable types, function signatures, imported modules) and using language-specific AST parsing to understand scope and type information. Completions are contextualized to the current scope and type constraints, not just string-matching.
Unique: Combines language-specific semantic analysis (via language servers) with ML-based ranking to provide completions that are both type-correct and statistically likely based on open-source patterns. The architecture bridges static type checking with probabilistic ranking.
vs alternatives: More accurate than generic LLM completions for typed languages because it enforces type constraints before ranking, and more discoverable than bare language servers because it surfaces the most idiomatic suggestions first.
IntelliCode scores higher at 40/100 vs drawio-mcp-server at 33/100, with the gap driven by adoption (1 vs 0); the two are tied at 0 on quality, ecosystem, and match-graph metrics.
© 2026 Unfragile. Stronger through disorder.
Trains machine learning models on a curated corpus of thousands of open-source repositories to learn statistical patterns about code structure, naming conventions, and API usage. These patterns are encoded into the ranking model that powers starred recommendations, allowing the system to suggest code that aligns with community best practices without requiring explicit rule definition.
Unique: Leverages a proprietary corpus of thousands of open-source repositories to train ranking models that capture statistical patterns in code structure and API usage. The approach is corpus-driven rather than rule-based, allowing patterns to emerge from data rather than being hand-coded.
vs alternatives: More aligned with real-world usage than rule-based linters or generic language models because it learns from actual open-source code at scale, but less customizable than local pattern definitions.
Executes machine learning model inference on Microsoft's cloud infrastructure to rank completion suggestions in real-time. The architecture sends code context (current file, surrounding lines, cursor position) to a remote inference service, which applies pre-trained ranking models and returns scored suggestions. This cloud-based approach enables complex model computation without requiring local GPU resources.
Unique: Centralizes ML inference on Microsoft's cloud infrastructure rather than running models locally, enabling use of large, complex models without local GPU requirements. The architecture trades latency for model sophistication and automatic updates.
vs alternatives: Enables more sophisticated ranking than local models without requiring developer hardware investment, but introduces network latency and privacy concerns compared to fully local alternatives like Copilot's local fallback.
Displays star ratings (1-5 stars) next to each completion suggestion in the IntelliSense dropdown to communicate the confidence level derived from the ML ranking model. Stars are a visual encoding of the statistical likelihood that a suggestion is idiomatic and correct based on open-source patterns, making the ranking decision transparent to the developer.
Unique: Uses a simple, intuitive star-rating visualization to communicate ML confidence levels directly in the editor UI, making the ranking decision visible without requiring developers to understand the underlying model.
vs alternatives: More transparent than hidden ranking (like generic Copilot suggestions) but less informative than detailed explanations of why a suggestion was ranked.
Integrates with VS Code's native IntelliSense API to inject ranked suggestions into the standard completion dropdown. The extension hooks into the completion provider interface, intercepts suggestions from language servers, re-ranks them using the ML model, and returns the sorted list to VS Code's UI. This architecture preserves the native IntelliSense UX while augmenting the ranking logic.
Unique: Integrates as a completion provider in VS Code's IntelliSense pipeline, intercepting and re-ranking suggestions from language servers rather than replacing them entirely. This architecture preserves compatibility with existing language extensions and UX.
vs alternatives: More seamless integration with VS Code than standalone tools, but less powerful than language-server-level modifications because it can only re-rank existing suggestions, not generate new ones.
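The intercept-and-re-rank step can be sketched independently of the VS Code API: each suggestion keeps its text, but gets a new `sortText` so the dropdown (which VS Code sorts lexicographically by `sortText`) shows model-preferred items first. The scoring function here is a stand-in for IntelliCode's remote model, and the names are illustrative.

```typescript
// Sketch of re-ranking language-server suggestions via sortText.
// The score function stands in for the remote ML ranking model.
type Suggestion = { label: string; sortText?: string };

function rerank(
  items: Suggestion[],
  score: (label: string) => number
): Suggestion[] {
  return [...items]
    .sort((a, b) => score(b.label) - score(a.label))
    .map((item, rank) => ({
      ...item,
      // VS Code sorts lexicographically by sortText, so zero-pad the rank.
      sortText: String(rank).padStart(4, "0"),
    }));
}

// Illustrative scores; the real values come from IntelliCode's service.
const scores: Record<string, number> = { append: 0.9, add: 0.4, assign: 0.1 };
const ranked = rerank(
  [{ label: "add" }, { label: "append" }, { label: "assign" }],
  (l) => scores[l] ?? 0
);
console.log(ranked.map((s) => s.label)); // [ 'append', 'add', 'assign' ]
```

Note the design constraint the source mentions: a provider at this layer can only reorder what the language server already produced, never synthesize new suggestions.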