mcp protocol-compliant issue creation with team-scoped context
Creates new Linear issues through MCP tool invocation by translating LLM natural language requests into Linear API mutations. The server validates required parameters (title, teamId) and optional fields (description, priority, status), then queues the request through a rate-limited client that enforces Linear's 1400 requests/hour limit. Returns structured issue metadata including ID, URL, and status for LLM context.
Unique: Implements MCP tool schema with Linear-specific parameter validation and rate-limit-aware queueing, ensuring LLM requests respect API quotas without blocking the client. Uses LinearMCPClient abstraction to decouple protocol handling from API integration.
vs alternatives: Simpler than building custom Linear integrations because it handles MCP protocol translation and rate limiting automatically, while remaining more flexible than Linear's native Slack/GitHub integrations by supporting any MCP-compatible LLM client.
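The required/optional split above can be sketched as a small validation step that runs before the request is handed to the rate-limited client. The helper name (validateCreateIssueArgs) and type are illustrative, not the server's actual identifiers:

```typescript
// Hypothetical argument check for the create-issue tool: title and teamId
// are required, everything else is optional and passed through untouched.
type CreateIssueArgs = {
  title: string;
  teamId: string;
  description?: string;
  priority?: number;
  status?: string;
};

function validateCreateIssueArgs(args: Record<string, unknown>): CreateIssueArgs {
  if (typeof args.title !== "string" || args.title.length === 0) {
    throw new Error("title is required and must be a non-empty string");
  }
  if (typeof args.teamId !== "string" || args.teamId.length === 0) {
    throw new Error("teamId is required and must be a non-empty string");
  }
  // Validated args are then queued through the rate-limited client.
  return args as CreateIssueArgs;
}
```

Failing fast here means malformed LLM requests are rejected with a descriptive message instead of consuming a slot in the rate-limited queue.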
multi-filter issue search with query expansion and team scoping
Searches Linear issues using a query string combined with optional filters (teamId, status, assigneeId, labels, priority) by translating them into Linear GraphQL queries. The server constructs parameterized queries that filter across multiple dimensions simultaneously, returning paginated results with issue metadata. Supports both full-text search on title/description and structured filtering on issue properties.
Unique: Combines full-text search with structured filtering through a single MCP tool, allowing LLMs to express complex queries naturally ('find open bugs assigned to me') without requiring users to learn Linear's filter syntax. Rate limiter ensures search requests don't exhaust API quota.
vs alternatives: More flexible than Linear's built-in saved views because it accepts dynamic filter parameters from LLM context, and simpler than building custom GraphQL clients because the MCP server handles query construction and pagination.
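A minimal sketch of how the optional parameters might be folded into a Linear GraphQL `IssueFilter` object. The nested comparator shape (`eq`, `in`) follows Linear's public schema, but the helper itself (buildIssueFilter) is an assumption for illustration:

```typescript
// Build a Linear-style IssueFilter from optional search parameters,
// including only the dimensions the caller actually supplied.
type SearchParams = {
  teamId?: string;
  status?: string;
  assigneeId?: string;
  labels?: string[];
  priority?: number;
};

function buildIssueFilter(p: SearchParams): Record<string, unknown> {
  const filter: Record<string, unknown> = {};
  if (p.teamId) filter.team = { id: { eq: p.teamId } };
  if (p.status) filter.state = { name: { eq: p.status } };
  if (p.assigneeId) filter.assignee = { id: { eq: p.assigneeId } };
  if (p.labels?.length) filter.labels = { name: { in: p.labels } };
  if (p.priority !== undefined) filter.priority = { eq: p.priority };
  return filter;
}
```

The full-text query string would be passed separately to Linear's search field, with this filter object narrowing the result set.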
mcp protocol translation with stdio transport and tool schema exposure
Implements the Model Context Protocol (MCP) server specification by handling MCP requests (list resources, read resource, list tools, call tool) from LLM clients via stdio transport. The server translates MCP tool invocations into LinearMCPClient method calls and formats responses back to the protocol format. Exposes tool schemas that describe available operations and their parameters to the LLM client.
Unique: Implements full MCP server specification with stdio transport, enabling seamless integration with Claude Desktop and other MCP-compatible clients. Tool schemas are statically defined but cover all major Linear operations.
vs alternatives: Simpler than building custom REST APIs because MCP handles protocol translation automatically, and more flexible than Linear's native integrations because it works with any MCP-compatible LLM client.
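One entry from such a statically defined tool list might look like the following, shaped the way an MCP server answers `tools/list`. The tool name and description strings are illustrative, not the server's exact values:

```typescript
// A statically defined MCP tool schema: the JSON Schema in inputSchema is
// what tells the LLM client which parameters exist and which are required.
const createIssueTool = {
  name: "linear_create_issue",
  description: "Create a new Linear issue",
  inputSchema: {
    type: "object",
    properties: {
      title: { type: "string", description: "Issue title" },
      teamId: { type: "string", description: "Team to create the issue in" },
      description: { type: "string", description: "Issue body in Markdown" },
      priority: { type: "number", description: "0 (none) to 4 (low)" },
      status: { type: "string", description: "Workflow state name" },
    },
    required: ["title", "teamId"],
  },
};
```

Because the schema travels over the protocol, the LLM client can construct well-formed calls without any Linear-specific knowledge baked in.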
structured error handling and response formatting for llm consumption
Handles errors from Linear API calls and formats them as MCP-compliant error responses that LLMs can interpret. The server catches API errors (authentication failures, invalid parameters, rate limit errors) and serializes them with descriptive messages and error codes. Ensures that LLM clients receive actionable error information rather than raw API responses.
Unique: Translates Linear API errors into MCP-compliant error responses with descriptive messages, enabling LLM clients to understand failures without exposing raw API details. Error handling is transparent to MCP tools.
vs alternatives: More user-friendly than raw API errors because it provides MCP-formatted messages, and simpler than building custom error recovery because it delegates retry logic to the LLM client.
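The translation step can be sketched as a single formatting function that wraps any caught failure in an MCP-style tool result (`content` plus `isError`). The helper name (formatError) is an assumption:

```typescript
// Convert a caught Linear API failure into an MCP-style error result the
// LLM client can read, instead of surfacing the raw API response.
type McpToolResult = {
  content: { type: "text"; text: string }[];
  isError: boolean;
};

function formatError(err: unknown): McpToolResult {
  const message =
    err instanceof Error ? err.message : "Unknown error calling Linear API";
  return {
    content: [{ type: "text", text: `Linear API error: ${message}` }],
    isError: true,
  };
}
```

Setting `isError` rather than crashing the transport lets the LLM client decide whether to retry, rephrase, or report the failure to the user.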
resource template definition for issue context
Defines MCP resource templates that allow clients to request issue data using URI patterns (e.g., 'linear://issue/{issueId}'), enabling LLMs to reference issues as persistent resources rather than one-off API calls. The server implements resource reading that fetches issue details when a client requests a resource URI, integrating issue context into the LLM's knowledge base.
Unique: Implements MCP resource templates for issues, allowing LLMs to treat Linear issues as first-class resources in the conversation context rather than requiring explicit tool calls.
vs alternatives: More seamless than tool-based issue fetching because users can paste issue URIs directly; simpler than building a separate context manager because it leverages MCP's native resource protocol.
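Resolving the URI pattern above reduces to a small parser that the read-resource handler can call before fetching issue details. A minimal sketch, assuming the 'linear://issue/{issueId}' template from the description:

```typescript
// Match a resource URI against the 'linear://issue/{issueId}' template and
// extract the issue id, or return null if the URI is not an issue resource.
function parseIssueUri(uri: string): { issueId: string } | null {
  const match = /^linear:\/\/issue\/([^/]+)$/.exec(uri);
  return match ? { issueId: match[1] } : null;
}
```

A read-resource handler would pass the extracted id to the Linear client and return the issue details as the resource contents.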
issue update with selective field mutation and conflict avoidance
Updates existing Linear issues by accepting an issue ID and a set of fields to modify (title, description, priority, status, assignee). The server constructs targeted GraphQL mutations that update only the specified fields, so unspecified fields are never overwritten and partial updates cannot clobber each other. Returns the updated issue state to confirm changes to the LLM client.
Unique: Implements selective field updates through GraphQL mutations rather than full-object replacement, reducing API payload size and avoiding unnecessary field overwrites. Rate limiter queues mutations to respect Linear's request limits.
vs alternatives: More granular than Linear's REST API because it updates only specified fields, and safer than direct GraphQL access because the MCP server validates field names and types before submission.
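The selective-field behavior can be sketched as assembling the mutation input from only the defined fields. Field names mirror Linear's `IssueUpdateInput`; the helper (buildUpdateInput) is illustrative:

```typescript
// Build an issueUpdate input containing only the fields the caller supplied,
// so fields left undefined are never sent and never overwritten.
type IssueUpdateFields = {
  title?: string;
  description?: string;
  priority?: number;
  stateId?: string;
  assigneeId?: string;
};

function buildUpdateInput(fields: IssueUpdateFields): Record<string, unknown> {
  const input: Record<string, unknown> = {};
  for (const [key, value] of Object.entries(fields)) {
    if (value !== undefined) input[key] = value; // drop unspecified fields
  }
  return input;
}
```

Because GraphQL treats omitted input fields as "leave unchanged", this also keeps the mutation payload minimal.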
user-scoped issue retrieval with archive filtering and pagination
Retrieves all issues assigned to a specific user by querying the Linear API with userId and optional filters (includeArchived, limit). The server constructs a GraphQL query that fetches the user's issue list with metadata, supporting pagination through limit parameters. Returns issues in a format suitable for LLM processing (title, status, priority, team, URL).
Unique: Provides a dedicated user-scoped query path that's more efficient than generic search for the common case of 'show me my issues', with built-in archive filtering to distinguish active from historical work. Integrates with rate limiter to queue requests.
vs alternatives: Simpler than building custom GraphQL queries because it abstracts away Linear's schema, and more efficient than searching by assigneeId because it's optimized for the single-user case.
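A sketch of what the user-scoped query and its variables might look like. The query shape follows Linear's public GraphQL schema (`user` → `assignedIssues` with `includeArchived`), but the exact fields the server selects are an assumption:

```typescript
// Hypothetical GraphQL document for the "show me my issues" path, selecting
// the metadata the description lists (title, status, priority, team, URL).
const USER_ISSUES_QUERY = `
  query UserIssues($userId: String!, $first: Int!, $includeArchived: Boolean!) {
    user(id: $userId) {
      assignedIssues(first: $first, includeArchived: $includeArchived) {
        nodes { id title url priority state { name } team { key } }
      }
    }
  }
`;

// Assemble variables with the defaults implied by the optional filters.
function userIssuesVariables(userId: string, limit = 50, includeArchived = false) {
  return { userId, first: limit, includeArchived };
}
```

Defaulting `includeArchived` to false matches the common case of asking about active work only.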
issue comment addition with optional user attribution and icon customization
Adds comments to Linear issues by accepting an issueId, comment body, and optional parameters for user attribution (createAsUser) and display customization (displayIconUrl). The server constructs a GraphQL mutation that appends the comment to the issue's activity stream. Supports both direct comments and comments attributed to specific users or bots with custom icons.
Unique: Supports optional user attribution and custom icon URLs, enabling LLM agents to post comments that appear to come from specific users or branded bots. Rate limiter queues comment mutations to avoid API quota exhaustion.
vs alternatives: More flexible than Linear's native integrations because it allows custom user attribution and icon customization, and simpler than building custom GraphQL clients because the MCP server handles mutation construction.
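The optional-attribution behavior can be sketched as an input builder that includes `createAsUser` and `displayIconUrl` (both named in the description above) only when provided. The helper (buildCommentInput) is illustrative:

```typescript
// Build a commentCreate input: issueId and body are always present,
// attribution and icon fields are spread in only when supplied.
function buildCommentInput(
  issueId: string,
  body: string,
  opts: { createAsUser?: string; displayIconUrl?: string } = {}
): Record<string, unknown> {
  return {
    issueId,
    body,
    ...(opts.createAsUser ? { createAsUser: opts.createAsUser } : {}),
    ...(opts.displayIconUrl ? { displayIconUrl: opts.displayIconUrl } : {}),
  };
}
```

Omitting the attribution fields entirely, rather than sending them as null, keeps the default behavior (a plain comment from the authenticated integration) unchanged.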
+5 more capabilities