via “LLM and GenAI evaluation with custom metrics and judges”
The open source AI engineering platform for agents, LLMs, and ML models. MLflow enables teams of all sizes to debug, evaluate, monitor, and optimize production-quality AI applications while controlling costs and managing access to models and data.
Unique: Combines reference-based metrics (ROUGE, BLEU) with LLM-as-judge evaluation in a unified framework, supporting multi-turn conversations and structured outputs. A metric plugin architecture (mlflow/metrics/genai_metrics.py) allows defining custom metrics without modifying core code. Evaluation results are logged as run artifacts, enabling version comparison and historical tracking.
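A minimal sketch of what that unified setup can look like: a reference-based ROUGE-L metric and a custom LLM-as-judge metric evaluated side by side with `mlflow.evaluate`. The dataset, column names, metric name (`conciseness`), grading prompt, and judge model URI are illustrative assumptions, not MLflow defaults.

```python
import mlflow
import pandas as pd
from mlflow.metrics.genai import EvaluationExample, make_genai_metric

# Hypothetical evaluation dataset: inputs, reference answers, and the
# model's own outputs (column names are illustrative).
eval_df = pd.DataFrame(
    {
        "inputs": ["What is MLflow?"],
        "ground_truth": ["MLflow is an open source platform for the ML lifecycle."],
        "predictions": ["MLflow is an open source MLOps and LLMOps platform."],
    }
)

# Custom LLM-as-judge metric built via the genai metric factory; the
# definition, grading prompt, and example are made up for this sketch.
conciseness = make_genai_metric(
    name="conciseness",
    definition="Measures whether the answer is brief while staying complete.",
    grading_prompt="Score 1-5, where 5 is a maximally concise yet complete answer.",
    examples=[
        EvaluationExample(
            input="What is MLflow?",
            output="MLflow is an open source platform for managing the ML lifecycle.",
            score=5,
            justification="One sentence, no filler, covers the key point.",
        )
    ],
    model="openai:/gpt-4o",  # judge model URI; assumes OpenAI credentials are configured
    greater_is_better=True,
)

with mlflow.start_run():
    # Reference-based (ROUGE-L) and judge-based metrics run in one pass;
    # aggregate scores and the per-row results table are logged to the run.
    results = mlflow.evaluate(
        data=eval_df,
        targets="ground_truth",
        predictions="predictions",
        model_type="question-answering",
        extra_metrics=[mlflow.metrics.rougeL(), conciseness],
    )
    print(results.metrics)
```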
vs others: More tightly integrated with experiment tracking than standalone evaluation tools (DeepEval, Ragas), and supports both traditional NLP metrics and LLM-based evaluation, unlike single-approach solutions.
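One consequence of that tracking integration, sketched below under assumptions: per-row evaluation tables logged by `mlflow.evaluate` can be loaded back across runs for comparison. The experiment name is a placeholder, and `eval_results_table.json` is the table artifact mlflow.evaluate typically writes.

```python
import mlflow

# Pull the two most recent runs from a hypothetical experiment.
runs = mlflow.search_runs(experiment_names=["qa-evaluation"], max_results=2)

# Load the per-row evaluation tables from both runs into one DataFrame,
# keeping the originating run_id so rows can be compared across versions.
comparison = mlflow.load_table(
    "eval_results_table.json",
    run_ids=runs["run_id"].tolist(),
    extra_columns=["run_id"],
)
print(comparison.head())
```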