Differentiates via “LLM-as-a-judge evaluation with job scheduling and result aggregation”.
Open-source LLM observability — tracing, prompt management, evaluation, cost tracking, self-hosted.
Unique: Evaluation jobs are decoupled from trace ingestion by a queue, so traces are written without waiting on evaluation. Failed jobs are retried automatically with exponential backoff, and results are stored in PostgreSQL with foreign keys to traces, so evaluation scores can be correlated with trace characteristics (latency, cost, model, etc.).
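A minimal sketch of the queue-decoupled worker with exponential-backoff retries. All names (`process_job`, `drain`, the job shape, the retry constants) are illustrative assumptions, not the tool's actual API:

```python
import queue
import time

MAX_RETRIES = 3
BASE_DELAY = 0.01  # seconds; kept tiny for the demo

def process_job(job, evaluate, results):
    """Run one evaluation job, retrying transient failures with exponential backoff."""
    for attempt in range(MAX_RETRIES + 1):
        try:
            score = evaluate(job)  # e.g. an LLM-as-a-judge call
            results.append({"trace_id": job["trace_id"], "score": score})
            return True
        except RuntimeError:
            if attempt == MAX_RETRIES:
                return False  # give up; a real system might dead-letter the job
            time.sleep(BASE_DELAY * (2 ** attempt))  # 0.01s, 0.02s, 0.04s, ...

def drain(jobs, evaluate):
    """Enqueue jobs and process them; ingestion only enqueues, it never blocks on eval."""
    results = []
    q = queue.Queue()
    for j in jobs:
        q.put(j)
    while not q.empty():
        process_job(q.get(), evaluate, results)
    return results
```

In a real deployment the queue would be a durable broker shared by multiple worker processes; the in-memory `queue.Queue` here only shows the decoupling.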
vs others: Scales beyond manual annotation by batching evaluation requests and distributing them across worker processes, and writes evaluation results directly into the trace database for instant correlation with other metrics; external evaluation tools require exporting data and re-importing results.
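A sketch of the shared-database layout this enables: scores keyed to traces by a foreign key, so correlating evaluation quality with latency or cost is a single join rather than an export/re-import round trip. Table names, column names, and the sample rows are assumptions for illustration (sqlite3 stands in for PostgreSQL):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")
# Traces and evaluation scores live in the same database.
con.execute("""CREATE TABLE traces (
    id TEXT PRIMARY KEY, model TEXT, latency_ms REAL, cost_usd REAL)""")
con.execute("""CREATE TABLE eval_scores (
    trace_id TEXT REFERENCES traces(id), name TEXT, score REAL)""")

con.executemany("INSERT INTO traces VALUES (?,?,?,?)", [
    ("t1", "gpt-4o", 820.0, 0.012),       # illustrative data
    ("t2", "gpt-4o-mini", 310.0, 0.001),
])
con.executemany("INSERT INTO eval_scores VALUES (?,?,?)", [
    ("t1", "helpfulness", 0.91),
    ("t2", "helpfulness", 0.74),
])

# Correlate evaluation scores with trace characteristics in one query.
rows = con.execute("""
    SELECT t.model, AVG(e.score), AVG(t.latency_ms), AVG(t.cost_usd)
    FROM eval_scores e JOIN traces t ON t.id = e.trace_id
    GROUP BY t.model
""").fetchall()
```

With an external evaluation tool, producing the same table would mean exporting traces, scoring them elsewhere, and re-importing the scores before any join is possible.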