code-graph-llm
Compact, language-agnostic codebase mapper for LLM token efficiency.
Capabilities (9 decomposed)
language-agnostic codebase graph construction
Medium confidence: Builds a compact, AST-like graph representation of codebases across multiple programming languages without language-specific parsers. Uses a unified graph schema to represent code structure (functions, classes, imports, dependencies) as nodes and edges, enabling consistent analysis regardless of source language. The graph is serialized into a compact format optimized for LLM token consumption.
Implements a unified graph schema that abstracts away language-specific syntax differences, allowing a single traversal and serialization pipeline to work across Python, JavaScript, Go, Java, and other languages without maintaining separate parsers for each
More token-efficient than sending raw source code or language-specific ASTs to LLMs because it strips syntax noise and represents only structural relationships, reducing context window usage by 60-80% compared to full-file inclusion
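The page doesn't publish the tool's actual schema, so the following is a minimal sketch, assuming hypothetical `Node`, `Edge`, and `CodeGraph` types, of how a single unified schema can represent code from any language with the same node and edge kinds:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Node:
    id: str      # stable identifier, e.g. "app.py::main"
    kind: str    # "file" | "class" | "function" | "import"
    name: str

@dataclass(frozen=True)
class Edge:
    src: str     # source node id
    dst: str     # destination node id
    kind: str    # "contains" | "imports" | "calls"

@dataclass
class CodeGraph:
    nodes: dict = field(default_factory=dict)
    edges: list = field(default_factory=list)

    def add_node(self, node: Node) -> None:
        self.nodes[node.id] = node

    def add_edge(self, src: str, dst: str, kind: str) -> None:
        self.edges.append(Edge(src, dst, kind))

# The same schema works for a Python module, a Go package, or a Java class:
g = CodeGraph()
g.add_node(Node("app.py", "file", "app.py"))
g.add_node(Node("app.py::main", "function", "main"))
g.add_edge("app.py", "app.py::main", "contains")
```

Because the node and edge kinds carry no language-specific syntax, one traversal and serialization pipeline can consume graphs built from any source language.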
token-efficient codebase context serialization
Medium confidence: Converts the constructed code graph into a compact, LLM-friendly text representation that minimizes token count while preserving semantic relationships. Uses techniques like symbol deduplication, hierarchical summarization, and selective edge inclusion to create a serialized format that fits within LLM context windows. The output is optimized for both readability and token efficiency, enabling larger codebases to fit in a single prompt.
Implements a hierarchical summarization strategy that preserves call chains and dependency paths while aggressively deduplicating symbols and removing redundant structural information, achieving 70-90% token reduction compared to raw source code while maintaining LLM reasoning capability
More effective than naive token counting or simple truncation because it understands code structure and prioritizes semantically important relationships (imports, function signatures, class hierarchies) over syntactic details, preserving reasoning quality even at high compression ratios
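The serializer itself isn't shown on this page; as a hedged illustration, a deduplicating, signature-only serializer (with a hypothetical `modules` input shape) could look like:

```python
def serialize(modules: dict) -> str:
    """Render a code graph as compact, LLM-friendly text: one line per
    module, deduplicated imports, and signatures instead of bodies."""
    lines = []
    for path in sorted(modules):
        info = modules[path]
        lines.append(path)
        for imp in sorted(set(info.get("imports", []))):  # dedupe symbols
            lines.append(f"  <- {imp}")
        for sig in info.get("functions", []):             # signatures only
            lines.append(f"  def {sig}")
    return "\n".join(lines)

compact = serialize({
    "app.py": {"imports": ["os", "os", "json"],
               "functions": ["main()", "load(path: str) -> dict"]},
})
```

Dropping function bodies and duplicate import entries is where most of the token savings come from; the structural relationships the LLM needs survive intact.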
dependency and import graph extraction
Medium confidence: Automatically identifies and maps all import statements, module dependencies, and inter-file references within a codebase, building a directed graph of dependencies. Handles multiple import syntaxes (ES6 imports, CommonJS require, Python imports, Go imports, etc.) through pattern matching and heuristic analysis. Produces a queryable dependency graph that reveals code coupling, circular dependencies, and module boundaries without executing code.
Uses multi-pattern regex matching and heuristic fallback strategies to handle import syntax variations across languages, combined with optional path resolution configuration, enabling accurate dependency mapping even in polyglot codebases without language-specific tooling
Faster and more portable than language-specific tools (like npm audit or Python import analysis) because it avoids installing language runtimes and dependencies, while remaining accurate enough for architectural analysis and refactoring planning
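Multi-pattern import matching of this kind can be sketched as follows (the pattern set and `extract_imports` helper are illustrative, not the tool's actual code):

```python
import re

# Order matters: the ES6 pattern must run before the bare-import pattern,
# otherwise "import React from 'react'" would record "React", not "react".
IMPORT_PATTERNS = [
    re.compile(r'import\s+.*?\bfrom\s+[\'"]([^\'"]+)[\'"]'),  # ES6 modules
    re.compile(r'require\(\s*[\'"]([^\'"]+)[\'"]\s*\)'),      # CommonJS
    re.compile(r'^\s*from\s+([\w.]+)\s+import\b'),            # Python from-import
    re.compile(r'^\s*import\s+"([^"]+)"'),                    # Go
    re.compile(r'^\s*import\s+([\w.]+)'),                     # Python / Java
]

def extract_imports(source: str) -> set:
    found = set()
    for line in source.splitlines():
        for pattern in IMPORT_PATTERNS:
            match = pattern.search(line)
            if match:
                found.add(match.group(1))
                break  # first matching pattern wins for this line
    return found
```

A single pass over a polyglot file then yields one normalized dependency set regardless of which language's import syntax each line uses.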
function and class signature extraction
Medium confidence: Parses and extracts function/method signatures, class definitions, and their metadata (parameters, return types, visibility modifiers, decorators) from source code across multiple languages. Uses regex-based pattern matching and lightweight AST-like analysis to identify callable entities and their interfaces without full semantic parsing. Stores signatures in a queryable format that enables LLMs to understand the public API surface of code modules.
Combines regex-based pattern matching with lightweight context-aware parsing to extract signatures while preserving parameter names, types, and decorators in a structured format that LLMs can directly use for code generation and analysis without additional parsing
More efficient than running full language-specific compilers or type checkers because it extracts only the interface layer needed for LLM reasoning, reducing overhead while maintaining sufficient detail for code generation and documentation tasks
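A minimal sketch of regex-based signature extraction, assuming one lightweight pattern per language family (these patterns are examples, not the tool's own):

```python
import re

SIGNATURE_PATTERNS = {
    "python":     re.compile(r'^\s*def\s+(\w+)\s*\(([^)]*)\)'),
    "javascript": re.compile(r'^\s*function\s+(\w+)\s*\(([^)]*)\)'),
    # Optional receiver group handles Go methods like "func (s *Server) Start(...)"
    "go":         re.compile(r'^\s*func\s+(?:\([^)]*\)\s*)?(\w+)\s*\(([^)]*)\)'),
}

def extract_signatures(source: str, language: str) -> list:
    pattern = SIGNATURE_PATTERNS[language]
    sigs = []
    for line in source.splitlines():
        match = pattern.match(line)
        if match:
            sigs.append({"name": match.group(1),
                         "params": match.group(2).strip()})
    return sigs

py_sigs = extract_signatures("def load(path: str) -> dict:\n    ...", "python")
```

The trade-off is deliberate: no type checking or name resolution, just the interface layer an LLM needs to reason about a module's API.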
codebase indexing and querying
Medium confidence: Creates an in-memory or persistent index of the code graph that enables fast queries for specific symbols, functions, files, or relationships. Supports queries like 'find all callers of function X', 'list all files importing module Y', or 'get the dependency chain from A to B'. Uses hash maps, adjacency lists, or similar data structures for O(1) or O(log n) lookup performance. Enables LLM agents to dynamically retrieve relevant code context based on user queries.
Implements multi-index strategy with hash maps for symbol lookup, adjacency lists for traversal, and optional reverse indices for caller/dependency queries, enabling constant-time lookups while supporting complex graph traversal operations needed for impact analysis
Faster than re-parsing or re-analyzing code on each query because the index is built once and reused, and more flexible than static analysis tools because it supports arbitrary graph queries without requiring language-specific tooling
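The multi-index idea can be sketched with a hypothetical `CodeIndex` class: a hash map for symbols, an adjacency list for forward traversal, and a reverse index for caller queries, with BFS covering the "dependency chain from A to B" case:

```python
from collections import defaultdict, deque

class CodeIndex:
    def __init__(self):
        self.symbols = {}                    # name -> metadata, O(1) lookup
        self.calls = defaultdict(set)        # adjacency list: caller -> callees
        self.called_by = defaultdict(set)    # reverse index: callee -> callers

    def add_call(self, caller: str, callee: str) -> None:
        self.calls[caller].add(callee)
        self.called_by[callee].add(caller)

    def callers_of(self, name: str) -> list:
        # "find all callers of function X" in constant time per lookup
        return sorted(self.called_by[name])

    def chain(self, start: str, end: str):
        # "get the dependency chain from A to B" via breadth-first search
        queue, seen = deque([[start]]), {start}
        while queue:
            path = queue.popleft()
            if path[-1] == end:
                return path
            for nxt in self.calls[path[-1]] - seen:
                seen.add(nxt)
                queue.append(path + [nxt])
        return None  # no chain exists

idx = CodeIndex()
idx.add_call("main", "load_config")
idx.add_call("load_config", "read_file")
```

Building the forward and reverse indices together at insertion time is what makes both directions of query cheap later.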
codebase summarization and documentation generation
Medium confidence: Generates human-readable summaries and documentation from the code graph by combining function signatures, dependency information, and structural metadata. Creates markdown or HTML documentation that describes module purposes, public APIs, and inter-module relationships. Uses the graph structure to automatically organize documentation by module hierarchy and dependency chains, reducing manual documentation effort.
Leverages the code graph structure to automatically organize documentation by module hierarchy and dependency relationships, creating hierarchical documentation that reflects actual code organization rather than requiring manual structure definition
More maintainable than manually written documentation because it's generated from the code graph and can be regenerated when code changes, and more comprehensive than docstring-based tools because it includes dependency and architecture information
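A sketch of graph-driven markdown generation, assuming a hypothetical `modules` mapping with `deps` and `api` fields per module:

```python
def render_docs(modules: dict) -> str:
    """Emit markdown organized by module, with the API surface and
    dependency list pulled straight from the code graph."""
    sections = []
    for path in sorted(modules):
        mod = modules[path]
        lines = [f"## {path}"]
        if mod.get("deps"):
            lines.append("Depends on: " + ", ".join(sorted(mod["deps"])))
        for sig in mod.get("api", []):
            lines.append(f"- `{sig}`")
        sections.append("\n".join(lines))
    return "\n\n".join(sections)

docs = render_docs({
    "app.py":    {"deps": ["config.py"], "api": ["main()"]},
    "config.py": {"deps": [], "api": ["load(path: str) -> dict"]},
})
```

Because the output is a pure function of the graph, regenerating it after a code change is a one-line operation rather than a manual editing pass.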
multi-language code pattern recognition
Medium confidence: Identifies common code patterns and idioms across multiple programming languages by analyzing the code graph for recurring structural motifs (e.g., factory patterns, dependency injection, middleware chains). Uses heuristic matching on function signatures, class hierarchies, and call patterns to detect design patterns without language-specific semantic analysis. Enables LLMs to understand architectural patterns and suggest refactorings based on pattern recognition.
Uses heuristic matching on structural graph properties (function signatures, call chains, class hierarchies) rather than semantic analysis, enabling pattern detection across languages while remaining computationally lightweight and not requiring language-specific tooling
More portable than language-specific linters or static analysis tools because it works across polyglot codebases, and more practical than manual code review because it automates pattern detection at scale
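As a toy illustration of structural heuristics (the naming and parameter rules below are invented for the example, not taken from the tool):

```python
def detect_patterns(functions: list) -> list:
    """Flag likely design patterns from structural signals alone;
    naming and parameter-shape heuristics stand in for semantic analysis."""
    hits = []
    for fn in functions:
        name = fn["name"].lower()
        params = fn.get("params", [])
        if name.startswith(("create_", "make_", "new_")):
            hits.append((fn["name"], "factory"))
        if "next" in params:  # handler(request, next) shape suggests middleware
            hits.append((fn["name"], "middleware"))
    return hits

hits = detect_patterns([
    {"name": "create_session", "params": ["user_id"]},
    {"name": "auth", "params": ["request", "next"]},
])
```

Heuristics like these trade precision for portability: they run identically over any language's extracted signatures, at the cost of occasional false positives.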
incremental codebase change tracking
Medium confidence: Tracks changes to the codebase between versions by comparing code graphs and identifying added, modified, or removed functions, classes, imports, and dependencies. Produces a delta representation showing what changed in the code structure without requiring full re-analysis. Enables LLM agents to understand code evolution and generate change summaries or migration guides.
Compares code graphs structurally rather than performing text-based diffing, enabling accurate detection of structural changes (function additions, signature modifications, dependency changes) even when code is reformatted or reorganized
More accurate than git diff for understanding code structure changes because it identifies semantic changes (function signature modifications, import changes) rather than just line-level differences, and more useful for API versioning than text-based diffs
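Structural graph diffing can be sketched as set operations over symbol-to-signature snapshots (the snapshot shape is an assumption for the example):

```python
def diff_graphs(old: dict, new: dict) -> dict:
    """Structural delta between two snapshots mapping symbol id -> signature.
    Reformatting or reordering leaves signatures unchanged, so it
    produces an empty delta -- unlike a line-based text diff."""
    common = old.keys() & new.keys()
    return {
        "added":    sorted(new.keys() - old.keys()),
        "removed":  sorted(old.keys() - new.keys()),
        "modified": sorted(k for k in common if old[k] != new[k]),
    }

delta = diff_graphs(
    {"app::main": "main()",     "app::load": "load(path)"},
    {"app::main": "main(argv)", "app::save": "save(path)"},
)
```

The delta reads directly as an API change summary: one signature modified, one symbol removed, one added.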
llm-aware context window optimization
Medium confidence: Dynamically selects and prioritizes code context based on LLM token budget and query relevance, using the code graph to identify the most important symbols and relationships for a given task. Implements strategies like relevance ranking, hierarchical summarization, and selective edge inclusion to fit the most informative context within token limits. Adapts context selection based on LLM model capabilities and token limits (e.g., GPT-4 vs Claude vs open-source models).
Combines graph-based relevance ranking (identifying code most likely to be needed for a query) with token-aware compression (fitting selected context within budget), adapting to specific LLM models and their token limits rather than using generic compression
More intelligent than naive token counting or truncation because it understands code relationships and prioritizes semantically important context, and more flexible than fixed context windows because it adapts to different LLM models and token budgets
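Budget-aware selection can be sketched as a greedy pass over relevance-ranked snippets (the scoring inputs and the 4-characters-per-token estimate are assumptions, not the tool's tokenizer):

```python
def select_context(candidates: list, budget_tokens: int) -> list:
    """Greedy budget-aware selection: take the most relevant snippets
    first until the model's token budget is exhausted."""
    chosen, used = [], 0
    for score, snippet in sorted(candidates, key=lambda c: -c[0]):
        cost = max(1, len(snippet) // 4)   # crude ~4 chars/token estimate
        if used + cost <= budget_tokens:
            chosen.append(snippet)
            used += cost
    return chosen

picked = select_context(
    [(0.9, "def main(): ..."), (0.2, "x" * 400), (0.7, "class Config: ...")],
    budget_tokens=20,
)
```

Swapping `budget_tokens` per target model is what adapts the same selection logic to different context window sizes.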
Capabilities are decomposed by AI analysis. Each maps to specific user intents and improves with match feedback.
Related Artifacts sharing capabilities
Artifacts that share capabilities with code-graph-llm, ranked by overlap. Discovered automatically through the match graph.
CodeGraphContext
An MCP server plus a CLI tool that indexes local code into a graph database to provide context to AI assistants.
code-review-graph
Local knowledge graph for Claude Code. Builds a persistent map of your codebase so Claude reads only what matters — 6.8× fewer tokens on reviews and up to 49× on daily coding tasks.
FileScopeMCP
Analyzes your codebase, identifying important files based on dependency relationships. Generates diagrams and per-file importance scores, helping AI assistants understand the codebase. Automatically parses popular programming languages: Python, Lua, C, C++, Rust, Zig.
OpenDevin
OpenDevin: Code Less, Make More
llm-code-highlighter
Condense source code for LLM analysis by extracting essential highlights, utilizing a simplified version of Paul Gauthier's repomap technique from Aider Chat.
Friday
AI developer assistant for Node.js
Best For
- ✓ developers building LLM agents that need to understand multi-language codebases
- ✓ teams using LLMs for code analysis and refactoring across heterogeneous tech stacks
- ✓ builders creating AI-powered code documentation and knowledge extraction tools
- ✓ developers using token-limited LLM APIs (OpenAI, Anthropic, etc.)
- ✓ teams optimizing LLM API costs for large codebase analysis
- ✓ builders creating LLM-powered code assistants with strict context budgets
- ✓ developers analyzing legacy codebases for refactoring opportunities
- ✓ teams assessing code modularity and architecture quality
Known Limitations
- ⚠ Language-agnostic approach may miss language-specific semantics (e.g., Python decorators, Rust lifetimes, Go goroutines)
- ⚠ Accuracy depends on consistent naming conventions and code structure across languages
- ⚠ Does not perform semantic analysis or type inference; purely structural mapping
- ⚠ May require post-processing for languages with non-standard syntax or DSLs
- ⚠ Serialization may lose fine-grained details (e.g., exact line numbers, comments, docstrings) depending on compression level
- ⚠ Trade-off between compression and information density: higher compression may reduce LLM reasoning quality