codeburn
Repository · Free
See where your AI coding tokens go. Interactive TUI dashboard for Claude Code, Codex, and Cursor cost observability.
Capabilities (12 decomposed)
multi-provider session log discovery and parsing
Medium confidence
Automatically locates and parses session logs from Claude Code, Cursor, GitHub Copilot, Codex, and other AI coding tools by scanning platform-specific directories (~/.claude, ~/.config, etc.). Implements a provider plugin system with standardized parsers that convert heterogeneous log formats into a unified ParsedTurn and Session object model, enabling downstream analysis across multiple tools without manual configuration.
Implements a provider plugin architecture that decouples provider-specific parsing logic from the core analysis engine, allowing new providers to be added via standardized interfaces (discoverAllSessions, parseSessionFile) without modifying core code. Uses LiteLLM's pricing database as the canonical source for model cost data across 100+ models.
Supports 5+ AI coding tools natively with a pluggable architecture, whereas most token trackers are single-tool specific or require API proxies that add latency and privacy concerns.
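The unified object model described above can be sketched as a small TypeScript contract. Only `discoverAllSessions` and `parseSessionFile` are named in the text; the field names on `ParsedTurn` and `Session`, and the stub plugin, are illustrative assumptions, not CodeBurn's actual API.

```typescript
interface ParsedTurn {
  timestamp: string;
  model: string;
  inputTokens: number;
  outputTokens: number;
}

interface Session {
  id: string;
  provider: string;
  turns: ParsedTurn[];
}

interface ProviderPlugin {
  name: string;
  // Scan platform-specific directories (e.g. ~/.claude) for session log paths.
  discoverAllSessions(): string[];
  // Convert one provider-specific log file into the unified object model.
  parseSessionFile(path: string): Session;
}

// A stub plugin showing the shape of an implementation.
const stubPlugin: ProviderPlugin = {
  name: "stub",
  discoverAllSessions: () => ["/tmp/stub-session.jsonl"],
  parseSessionFile: (path: string) => ({
    id: path,
    provider: "stub",
    turns: [
      { timestamp: "2026-01-01T00:00:00Z", model: "stub-model", inputTokens: 10, outputTokens: 5 },
    ],
  }),
};
```

Because every provider returns the same `Session` shape, the analysis engine never needs provider-specific branches.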
turn classification and task categorization engine
Medium confidence
Analyzes parsed session turns and classifies them into TaskCategory buckets (coding, testing, terminal usage, debugging, etc.) using heuristic rules based on turn content, tool invocations, and file types. Implements a classifyTurn function that examines API calls, file modifications, and context patterns to assign semantic meaning to raw token consumption, enabling cost breakdown by activity type rather than just by model.
Uses multi-signal heuristic classification (file types, tool invocations, context patterns) rather than simple keyword matching, enabling semantic understanding of turn purpose. Tracks one-shot success rate per task category to identify which activity types benefit most from AI assistance.
Provides task-level cost visibility that generic token counters cannot offer, allowing developers to optimize by activity type rather than just by model or project.
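A multi-signal classifier of this kind might look like the sketch below. The category names follow the text, but the specific rules, signal ordering, and `TurnSignals` shape are assumptions for illustration, not CodeBurn's actual heuristics.

```typescript
type TaskCategory = "coding" | "testing" | "terminal" | "debugging" | "other";

interface TurnSignals {
  toolsUsed: string[];    // e.g. ["Edit", "Bash"]
  filesTouched: string[]; // file paths read or modified in the turn
  text: string;           // turn text content
}

function classifyTurn(turn: TurnSignals): TaskCategory {
  const files = turn.filesTouched.join(" ");
  // Signal 1: file types. Test files dominate the turn's purpose.
  if (/\.(test|spec)\.[jt]sx?/.test(files)) return "testing";
  // Signal 2: tool invocations. Shell-only turns count as terminal usage.
  if (turn.toolsUsed.length > 0 && turn.toolsUsed.every(t => t === "Bash")) return "terminal";
  // Signal 3: context patterns. Error traces suggest debugging.
  if (/stack trace|exception|error:/i.test(turn.text)) return "debugging";
  // Default: edits to source files count as coding.
  if (turn.toolsUsed.includes("Edit")) return "coding";
  return "other";
}
```

Combining file type, tool, and text signals in a fixed priority order is what lets a turn that edits `foo.test.ts` land in "testing" even though a keyword match on its text would say "coding".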
status reporting and session metadata inspection
Medium confidence
Provides CLI commands (codeburn status, codeburn report) that generate detailed reports on session discovery status, parsing errors, and data quality metrics. Implements metadata inspection capabilities that allow developers to examine individual session files, view parsing errors, and understand data completeness. Generates status summaries showing how many sessions were discovered, parsed successfully, and skipped due to errors.
Provides transparent visibility into the data ingestion pipeline, showing exactly which sessions were discovered, parsed, and skipped with detailed error messages. Enables developers to audit data quality before relying on cost calculations.
Offers detailed status and error reporting that helps developers understand data completeness, whereas black-box tools that silently skip sessions make it difficult to detect data quality issues.
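The discovered/parsed/skipped summary described above reduces to a small fold over per-file parse results. This is an illustrative sketch; the `ParseResult` and `StatusSummary` shapes are assumptions, not the tool's actual internals.

```typescript
interface ParseResult {
  path: string;
  ok: boolean;
  error?: string;
}

interface StatusSummary {
  discovered: number;
  parsed: number;
  skipped: number;
  errors: string[];
}

function summarize(results: ParseResult[]): StatusSummary {
  const skippedResults = results.filter(r => !r.ok);
  return {
    discovered: results.length,
    parsed: results.length - skippedResults.length,
    skipped: skippedResults.length,
    // Surface each failure with its file path so every skip is auditable.
    errors: skippedResults.map(r => `${r.path}: ${r.error ?? "unknown error"}`),
  };
}
```

Keeping the per-file error strings alongside the counts is what makes the pipeline auditable rather than a black box that silently drops sessions.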
provider plugin system with extensible architecture
Medium confidence
Implements a plugin-based architecture that allows new AI coding providers to be added without modifying core CodeBurn code. Each provider plugin implements standardized interfaces (discoverAllSessions, parseSessionFile) that return normalized ParsedTurn and Session objects. Plugins are loaded dynamically at runtime and can be distributed as npm packages, enabling community contributions and custom provider support.
Defines a minimal, standardized plugin interface (discoverAllSessions, parseSessionFile) that decouples provider-specific logic from the core analysis engine, enabling community contributions without core code changes. Plugins are loaded dynamically at runtime.
Enables extensibility without forking or modifying core code, whereas monolithic tools that hardcode provider support require core maintainers to add each new provider.
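One common way to realize this decoupling is a provider registry: core code exposes a single registration hook and iterates whatever has been registered. This sketch approximates the dynamic-loading behavior described above without an actual npm `import()`; the registry API itself is an assumption.

```typescript
interface ProviderEntry {
  name: string;
  discoverAllSessions(): string[];
}

const registry = new Map<string, ProviderEntry>();

// A plugin (e.g. shipped as an npm package) calls this at load time
// instead of patching core code.
function registerProvider(plugin: ProviderEntry): void {
  if (registry.has(plugin.name)) throw new Error(`duplicate provider: ${plugin.name}`);
  registry.set(plugin.name, plugin);
}

// Core treats all registered providers uniformly.
function discoverEverything(): string[] {
  return [...registry.values()].flatMap(p => p.discoverAllSessions());
}

registerProvider({
  name: "example",
  discoverAllSessions: () => ["/tmp/example-session.jsonl"],
});
```

Adding a sixth provider is then a new package that calls `registerProvider`, with zero diff to the analysis engine.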
accurate cost calculation with litellm pricing integration
Medium confidence
Calculates USD costs for each turn by multiplying token counts (input + output) by model-specific pricing rates sourced from LiteLLM's pricing database, which covers 100+ models across OpenAI, Anthropic, and other providers. Implements a calculateCost function that handles variable pricing tiers, currency conversion, and subscription plan adjustments (e.g., Claude Pro discounts), ensuring accurate financial visibility without requiring API calls to pricing services.
Integrates LiteLLM's comprehensive pricing database as a built-in data source rather than requiring external API calls, enabling offline cost calculation and eliminating latency. Handles subscription plan adjustments (Claude Pro discounts) and multi-currency support natively.
Provides accurate, offline cost calculation across 100+ models without API dependencies, whereas most token trackers either hardcode pricing or require cloud lookups that add latency and privacy exposure.
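The core arithmetic is simple: tokens times per-token USD rates, with any plan discount applied at calculation time. LiteLLM's pricing data does use per-token USD rates, but the specific table entry, the `calculateCost` signature, and the discount handling below are illustrative assumptions.

```typescript
interface Pricing {
  inputCostPerToken: number;  // USD per input token
  outputCostPerToken: number; // USD per output token
}

// Tiny excerpt standing in for a bundled LiteLLM-style pricing table.
const pricingTable: Record<string, Pricing> = {
  "claude-sonnet-4": { inputCostPerToken: 3e-6, outputCostPerToken: 15e-6 },
};

function calculateCost(
  model: string,
  inputTokens: number,
  outputTokens: number,
  discount = 0, // e.g. 0.2 for an assumed 20% plan discount
): number {
  const p = pricingTable[model];
  if (!p) throw new Error(`no pricing for ${model}`);
  const listPrice = inputTokens * p.inputCostPerToken + outputTokens * p.outputCostPerToken;
  return listPrice * (1 - discount);
}
```

Bundling the table makes the lookup a local map access, which is why the calculation works offline with no pricing-service round trip.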
interactive terminal ui dashboard with real-time metrics
Medium confidence
Renders a terminal-based interactive dashboard (TUI) using a framework like Ink or Blessed that displays aggregated token usage, costs, and efficiency metrics across multiple time periods (Today, 7 Days, 30 Days, All Time). Implements keyboard-driven navigation, filtering by project/model/task category, and drill-down capabilities that allow developers to explore cost patterns without leaving the terminal. Updates metrics in real-time as new session data is discovered.
Implements a keyboard-driven TUI dashboard that runs entirely in the terminal without external dependencies, enabling cost monitoring in headless environments and SSH sessions. Provides drill-down navigation from aggregate metrics to individual turns without context switching.
Offers a native terminal experience for developers who live in the CLI, whereas web-based dashboards require browser context switching and are inaccessible in SSH/headless environments.
daily aggregation and time-period bucketing with caching
Medium confidence
Aggregates parsed session turns into daily buckets and higher-level time periods (7 Days, 30 Days, All Time) using an aggregateProjectsIntoDays function that groups by date, project, and model. Implements a caching layer that stores aggregated results to avoid recomputing statistics on every dashboard load, with cache invalidation triggered by new session data discovery. Supports efficient querying of cost trends across arbitrary time windows.
Implements a two-level aggregation strategy (daily buckets + period summaries) with intelligent cache invalidation that rebuilds only affected time periods when new sessions are discovered, avoiding full recomputation. Uses immutable daily aggregates as the foundation for all higher-level queries.
Provides fast metric queries even with large datasets by pre-aggregating and caching, whereas naive approaches that recalculate from raw turns on every query become slow with 1000+ turns.
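A minimal version of the two-level strategy rebuilds only the dates touched by newly discovered turns and overwrites those cache entries. `aggregateProjectsIntoDays` is named in the text; the bucket shape and invalidate-by-overwrite cache are assumptions for illustration.

```typescript
interface Turn {
  date: string; // "YYYY-MM-DD"
  project: string;
  model: string;
  costUsd: number;
}

interface DayBucket {
  date: string;
  totalCostUsd: number;
  byProject: Record<string, number>;
}

const dayCache = new Map<string, DayBucket>();

function aggregateProjectsIntoDays(newTurns: Turn[]): DayBucket[] {
  // Only rebuild the days that actually received new turns.
  const dirtyDates = new Set(newTurns.map(t => t.date));
  for (const date of dirtyDates) {
    const bucket: DayBucket = { date, totalCostUsd: 0, byProject: {} };
    for (const t of newTurns.filter(t => t.date === date)) {
      bucket.totalCostUsd += t.costUsd;
      bucket.byProject[t.project] = (bucket.byProject[t.project] ?? 0) + t.costUsd;
    }
    dayCache.set(date, bucket); // invalidate by overwriting the stale bucket
  }
  return [...dayCache.values()].sort((a, b) => a.date.localeCompare(b.date));
}
```

Period summaries (7 Days, 30 Days) then read from the immutable daily buckets instead of rescanning raw turns, which is the source of the fast queries.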
token burn pattern detection and optimization recommendations
Medium confidence
Scans session history to identify inefficient token usage patterns such as redundant file reads, bloated context windows, unused MCP tool invocations, and low one-shot success rates. Implements an optimization engine (codeburn optimize) that analyzes turn sequences, detects repeated operations on the same files, and generates actionable recommendations to reduce token waste. Uses heuristic rules and statistical analysis to flag anomalies in token consumption.
Analyzes turn sequences and file access patterns to detect structural inefficiencies (e.g., reading the same file 5 times in a single session) rather than just flagging high token counts. Tracks one-shot success rate as a proxy for efficiency and correlates it with context size and tool usage.
Provides actionable optimization recommendations based on actual usage patterns, whereas generic cost-cutting advice (e.g., 'use smaller models') ignores the specific inefficiencies in a developer's workflow.
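One burn pattern from the text, repeated reads of the same file within a session, can be detected with a simple counter over file events. The event shape, the threshold of 3, and the recommendation wording are assumptions; this is a sketch of the technique, not CodeBurn's optimizer.

```typescript
interface FileEvent {
  turnIndex: number;
  op: "read" | "write";
  path: string;
}

interface Finding {
  path: string;
  reads: number;
  recommendation: string;
}

function detectRedundantReads(events: FileEvent[], threshold = 3): Finding[] {
  // Count reads per path across the whole session.
  const readCounts = new Map<string, number>();
  for (const e of events) {
    if (e.op === "read") readCounts.set(e.path, (readCounts.get(e.path) ?? 0) + 1);
  }
  // Flag only structural repetition, not high counts in general.
  return [...readCounts.entries()]
    .filter(([, n]) => n >= threshold)
    .map(([path, reads]) => ({
      path,
      reads,
      recommendation: `'${path}' was read ${reads} times in one session; keep it pinned in context instead of re-reading it`,
    }));
}
```

Because the detector keys on the repeated operation rather than on raw token totals, it produces a specific fix ("stop re-reading this file") instead of generic cost advice.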
model comparison and cost-effectiveness analysis
Medium confidence
Implements a model comparison engine (codeburn compare) that analyzes token usage and costs across different AI models used in the same session history, calculating metrics like cost-per-successful-turn, average context size, and one-shot success rate per model. Generates comparison matrices and visualizations that help developers understand which models are most cost-effective for their specific workflow, accounting for both raw cost and task completion efficiency.
Correlates cost with task completion efficiency (one-shot success rate) rather than just comparing raw token costs, enabling developers to make informed model choices based on actual productivity impact. Supports task-category-specific comparisons to account for model strengths in different domains.
Provides cost-effectiveness analysis that accounts for task completion quality, whereas simple cost comparisons ignore that a cheaper model may require more retries and ultimately cost more.
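The cost-per-successful-turn metric can be sketched as a group-by over turns. The `oneShotSuccess` flag and stat shapes are assumptions standing in for however the tool actually labels retries.

```typescript
interface ModelTurn {
  model: string;
  costUsd: number;
  oneShotSuccess: boolean; // the turn resolved the request without a retry
}

interface ModelStats {
  model: string;
  totalCostUsd: number;
  successRate: number;
  costPerSuccess: number; // lower is better; Infinity if nothing succeeded
}

function compareModels(turns: ModelTurn[]): ModelStats[] {
  const byModel = new Map<string, ModelTurn[]>();
  for (const t of turns) {
    const list = byModel.get(t.model) ?? [];
    list.push(t);
    byModel.set(t.model, list);
  }
  return [...byModel.entries()].map(([model, ts]) => {
    const total = ts.reduce((sum, t) => sum + t.costUsd, 0);
    const successes = ts.filter(t => t.oneShotSuccess).length;
    return {
      model,
      totalCostUsd: total,
      successRate: successes / ts.length,
      // Dividing by successes (not turns) is what charges retries to the model.
      costPerSuccess: successes > 0 ? total / successes : Infinity,
    };
  });
}
```

On this metric a "cheap" model that needs two attempts per success can come out more expensive than a pricier model that succeeds in one, which is exactly the comparison raw token costs miss.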
csv and structured data export with custom filtering
Medium confidence
Exports aggregated cost data and detailed turn records to CSV and JSON formats with support for custom filtering by project, model, task category, date range, and other dimensions. Implements an exportCsv function that generates spreadsheet-compatible output suitable for further analysis in Excel, Google Sheets, or data analysis tools. Supports both summary-level exports (daily aggregates) and detailed exports (individual turns with full metadata). 
Supports multi-dimensional filtering at export time, allowing users to generate custom reports without modifying the underlying data. Provides both summary and detailed export modes to support different use cases (executive summaries vs detailed analysis).
Enables seamless integration with spreadsheet and BI tools by providing standard export formats, whereas tools that only offer web dashboards force users to manually copy data or use screen scraping.
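Filter-at-export can be sketched as a predicate applied before serialization. `exportCsv` is named in the text; the column set and `ExportFilter` shape are assumptions, and this sketch skips CSV quoting on the assumption that fields contain no commas.

```typescript
interface TurnRow {
  date: string; // "YYYY-MM-DD"
  project: string;
  model: string;
  costUsd: number;
}

interface ExportFilter {
  project?: string;
  model?: string;
  from?: string; // inclusive date bound
  to?: string;   // inclusive date bound
}

function exportCsv(rows: TurnRow[], filter: ExportFilter = {}): string {
  // Apply every requested dimension; omitted dimensions match everything.
  const kept = rows.filter(r =>
    (filter.project === undefined || r.project === filter.project) &&
    (filter.model === undefined || r.model === filter.model) &&
    (filter.from === undefined || r.date >= filter.from) &&
    (filter.to === undefined || r.date <= filter.to),
  );
  const header = "date,project,model,cost_usd";
  // NOTE: no RFC 4180 quoting here; real fields with commas would need escaping.
  const lines = kept.map(r => `${r.date},${r.project},${r.model},${r.costUsd.toFixed(4)}`);
  return [header, ...lines].join("\n");
}
```

Because filtering happens at serialization time, the same underlying data can feed an executive summary and a per-turn audit export without being duplicated.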
macos menubar application with persistent monitoring
Medium confidence
Provides a native macOS menubar application that displays real-time token usage and cost metrics in the system menu bar, with a dropdown menu showing today's spend, weekly trends, and quick access to the full dashboard. Implements app lifecycle management, state persistence, and background monitoring that updates metrics as new session data is discovered. Uses native macOS UI frameworks for seamless integration with the operating system.
Integrates directly with macOS system UI as a menubar application, providing always-visible cost monitoring without requiring a separate window or terminal. Implements persistent state management and background updates to keep metrics current.
Offers native macOS integration that feels like a first-class system application, whereas web dashboards or terminal UIs feel like external tools and require explicit context switching.
subscription plan and currency configuration management
Medium confidence
Manages subscription plan settings (free, Claude Pro, enterprise) and currency preferences (USD, EUR, GBP, etc.) that affect cost calculations and display. Implements a configuration system that stores user preferences persistently and applies subscription-specific discounts (e.g., Claude Pro 20% reduction) and currency conversion to all cost metrics. Supports multi-currency display for international teams.
Integrates subscription plan discounts directly into cost calculations rather than treating them as post-hoc adjustments, ensuring all metrics reflect actual out-of-pocket costs. Supports multi-currency display for international teams.
Provides accurate cost tracking for paid subscribers by applying discounts at calculation time, whereas tools that only show list prices overstate costs for Claude Pro users by ~20%.
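Applying the discount before currency conversion, as the text describes, keeps every displayed figure an out-of-pocket number. The config shape, the 20% discount value, and the exchange rates below are illustrative assumptions.

```typescript
interface CostConfig {
  planDiscount: number;             // e.g. 0.2 for an assumed 20% plan discount
  currency: string;                 // display currency code
  usdRates: Record<string, number>; // units of display currency per 1 USD
}

function displayCost(listUsd: number, cfg: CostConfig): { amount: number; currency: string } {
  const rate = cfg.usdRates[cfg.currency];
  if (rate === undefined) throw new Error(`no rate for ${cfg.currency}`);
  // Discount first, then convert: metrics reflect actual spend, not list price.
  const netUsd = listUsd * (1 - cfg.planDiscount);
  return { amount: netUsd * rate, currency: cfg.currency };
}
```

Since both operations are multiplicative the order does not change the number, but doing the discount inside the calculation (rather than as a post-hoc note in the UI) guarantees aggregates and exports stay consistent with it.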
Capabilities are decomposed by AI analysis. Each maps to specific user intents and improves with match feedback.
Related Artifacts (sharing capabilities)
Artifacts that share capabilities with codeburn, ranked by overlap. Discovered automatically through the match graph.
@metorial/mcp-session
MCP session management for Metorial. Provides session handling and tool lifecycle management for Model Context Protocol.
coze-studio
An AI agent development platform with all-in-one visual tools, simplifying agent creation, debugging, and deployment like never before. Coze your way to AI Agent creation.
cc-switch
A cross-platform desktop All-in-One assistant tool for Claude Code, Codex, OpenCode, openclaw & Gemini CLI.
langfuse
🪢 Open source LLM engineering platform: LLM Observability, metrics, evals, prompt management, playground, datasets. Integrates with OpenTelemetry, Langchain, OpenAI SDK, LiteLLM, and more. 🍊YC W23
octocode-mcp
MCP server for semantic code research and context generation on real-time using LLM patterns | Search naturally across public & private repos based on your permissions | Transform any accessible codebase/s into AI-optimized knowledge on simple and complex flows | Find real implementations and live d…
mcptools
An R SDK for creating R-based MCP servers and retrieving functionality from third-party MCP servers as R functions.
Best For
- ✓ developers using multiple AI coding assistants (Claude Code + Cursor + Copilot)
- ✓ teams auditing AI tool usage across different IDEs
- ✓ tool builders extending CodeBurn with custom provider support
- ✓ developers optimizing their AI coding workflow by task type
- ✓ engineering managers analyzing team AI tool usage patterns
- ✓ researchers studying how developers interact with AI coding assistants
- ✓ developers troubleshooting CodeBurn setup or data discovery issues
- ✓ teams auditing data quality before relying on CodeBurn for cost reporting
Known Limitations
- ⚠ Requires local access to session log directories — cannot parse logs from cloud-only tools without local caching
- ⚠ Parser accuracy depends on provider log format stability — breaking changes in IDE session formats require parser updates
- ⚠ No real-time streaming of logs — only processes completed session files on disk
- ⚠ Classification is heuristic-based and may misclassify edge cases (e.g., debugging-heavy coding tasks)
- ⚠ Requires sufficient turn context to classify accurately — minimal turns with sparse metadata may default to generic categories
- ⚠ No machine learning model — classification rules are hand-crafted and may not capture domain-specific task patterns
UnfragileRank
UnfragileRank is computed from adoption signals, documentation quality, ecosystem connectivity, match graph feedback, and freshness. No artifact can pay for a higher rank.
Repository Details
Last commit: Apr 21, 2026