* 06/2022: [Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models (BIG-bench)](https://arxiv.org/abs/2206.04615)
* ⭐ 06/2022: [Solving Quantitative Reasoning Problems with Language Models (Minerva)](https://arxiv.org/abs/2206.14858)
Unique: BIG-bench enables comparison across models with vastly different architectures (decoder-only, encoder-decoder, multimodal) and training approaches (supervised, RLHF, instruction-tuned) because tasks are defined at the semantic level, as input-output pairs, rather than assuming specific model APIs or architectures.
vs others: More comprehensive than single-benchmark comparisons (e.g., MMLU leaderboards) because it reveals capability trade-offs: a model might excel at reasoning but underperform on knowledge tasks, a pattern that is invisible in single-benchmark rankings.
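The architecture-agnostic evaluation described above can be sketched as a minimal task spec plus scorer. This is an illustrative sketch, not actual BIG-bench code: the task content is hypothetical, and the only assumption is that a task is a set of input-output pairs scored against any model exposed as a plain text-to-text callable.

```python
# Sketch of a semantic-level task: pure input-output pairs, no
# assumptions about the model's architecture or API.
# Task content is hypothetical, for illustration only.
task = {
    "name": "two_digit_addition",
    "description": "Add two two-digit numbers.",
    "examples": [
        {"input": "What is 12 + 34?", "target": "46"},
        {"input": "What is 50 + 25?", "target": "75"},
    ],
}

def evaluate(model, task):
    """Score any callable `model: str -> str` by exact-match accuracy.

    Because the task is just input-output pairs, the same function
    works for decoder-only, encoder-decoder, or multimodal models,
    as long as each is wrapped as a text-to-text callable.
    """
    correct = sum(
        model(ex["input"]).strip() == ex["target"]
        for ex in task["examples"]
    )
    return correct / len(task["examples"])

# Trivial stand-in "model" that parses the arithmetic itself,
# standing in for any real model behind the same interface.
def toy_model(prompt):
    a, b = [int(tok) for tok in prompt.replace("?", "").split() if tok.isdigit()]
    return str(a + b)

print(evaluate(toy_model, task))  # → 1.0
```

Swapping `toy_model` for wrappers around different real models is what makes cross-model comparison possible: the task definition never changes.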