Evidently AI — Framework — 44/100
via “LLM output evaluation with semantic and statistical metrics”
ML/LLM monitoring — data drift, model quality, 100+ metrics, dashboards, test suites.
Unique: Uses a descriptor-based architecture where text features are extracted as row-level transformations (Descriptor subclasses) that generate new columns, which are then aggregated into batch metrics. This separates feature extraction from aggregation, enabling reuse of descriptors across different metrics and composition of complex evaluation pipelines without duplicating NLP logic.
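The descriptor pattern above can be sketched in plain Python. This is a hypothetical illustration of the architecture described (row-level transformations producing new columns, then aggregated into batch metrics), not Evidently's actual API; all class and function names here are invented for the example.

```python
from statistics import mean

class Descriptor:
    """Row-level transformation: one text in, one scalar value out."""
    name = "descriptor"

    def compute(self, text: str) -> float:
        raise NotImplementedError

class TextLength(Descriptor):
    """Example descriptor: character count of each text."""
    name = "text_length"
    def compute(self, text: str) -> float:
        return float(len(text))

class WordCount(Descriptor):
    """Example descriptor: whitespace-delimited word count."""
    name = "word_count"
    def compute(self, text: str) -> float:
        return float(len(text.split()))

def run_descriptors(texts, descriptors):
    """Feature extraction step: one new column per descriptor."""
    return {d.name: [d.compute(t) for t in texts] for d in descriptors}

def aggregate(columns, agg=mean):
    """Aggregation step: collapse each row-level column into a batch metric."""
    return {name: agg(values) for name, values in columns.items()}

texts = ["short answer", "a much longer generated answer"]
cols = run_descriptors(texts, [TextLength(), WordCount()])
metrics = aggregate(cols)
# metrics["text_length"] → 21.0, metrics["word_count"] → 3.5
```

Because extraction and aggregation are separate steps, the same descriptor columns can feed many different aggregations (mean, quantiles, share above a threshold) without recomputing the per-row features.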
vs others: More flexible than prompt-based evaluation (e.g., LLM-as-judge) because descriptors can combine multiple signals (embeddings, heuristics, external models) without repeated API calls; more comprehensive than single-metric tools because the descriptor system enables composition of semantic, statistical, and reference-based signals.