HellaSwag
Dataset · Free
70K commonsense reasoning questions with adversarial distractors.
Capabilities (5 decomposed)
adversarial-filtered multiple-choice evaluation
Medium confidence
Evaluates language models on 70,000 multiple-choice questions whose incorrect options were generated by language models and adversarially selected to fool machines while remaining obviously wrong to humans. The filtering process uses a two-stage approach: model-generated distractors are ranked by their ability to confuse models (measured via model accuracy on that specific question), and human annotators then validate that the options hard for models remain easy for humans. The result is a dataset where the gap between model and human performance (95.6% human accuracy) directly measures commonsense reasoning deficits rather than dataset artifacts.
Uses adversarial filtering where distractors are selected based on measured model confusion rather than human-judged plausibility, producing a dataset that specifically targets machine weaknesses while remaining interpretable to humans. The two-stage pipeline (LLM generation plus human validation) scales better than purely human-written distractors and yields higher-quality negatives than random sampling.
Harder than SWAG (predecessor) because distractors are adversarially selected for model confusion, and more human-aligned than synthetic reasoning datasets because human accuracy (95.6%) validates that hard-for-models questions remain easy for humans.
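The first stage of the filtering idea can be sketched in a few lines. This is a toy illustration, not the original HellaSwag pipeline: `model_score` is a word-overlap heuristic standing in for a real discriminator model, and the candidate endings are invented.

```python
def model_score(context, ending):
    # Stand-in for a discriminator model's plausibility score.
    # A trivial word-overlap heuristic so the sketch runs without a real model.
    return len(set(context.split()) & set(ending.split()))

def adversarial_filter(context, gold_ending, candidates, k=3):
    """Keep the k distractors the 'model' finds most plausible,
    i.e. the ones most likely to confuse it."""
    ranked = sorted(candidates, key=lambda e: model_score(context, e), reverse=True)
    return ranked[:k]

context = "She cracks the eggs into the bowl and"
gold = "whisks them with a fork"
candidates = [
    "whisks the bowl into the eggs",        # confusing: high word overlap
    "drives the car to the store",          # easy negative
    "pours the eggs back into the shells",  # confusing: high word overlap
    "sings the national anthem",            # easy negative
]

distractors = adversarial_filter(context, gold, candidates, k=2)
print(distractors)  # the two candidates with the highest overlap scores
```

In the real pipeline the retained distractors would then go to the second stage, human validation, to confirm they remain obviously wrong to people.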
physical commonsense continuation prediction
Medium confidence
Tests models' ability to predict the next action or outcome in video-like scenarios involving physical activities (cooking, sports, repairs, etc.). Each question presents a sequence of events and asks which of four options most plausibly continues the sequence. The dataset uses real-world video captions and activities, grounding commonsense in concrete physical interactions rather than abstract reasoning. Models must understand object physics, tool usage, body mechanics, and temporal causality to select correct continuations.
Grounds commonsense reasoning in real video captions and activities rather than synthetic scenarios, ensuring that correct answers reflect actual physical outcomes humans observe. The adversarial filtering specifically targets models that fail at physical reasoning while humans succeed, creating a diagnostic tool for embodied understanding gaps.
More grounded in real-world physics than abstract reasoning benchmarks like MMLU, and more challenging than simple video QA because distractors are adversarially selected to confuse models specifically about physical causality.
social and temporal reasoning evaluation
Medium confidence
Assesses models' understanding of social dynamics, conversational context, and temporal sequences in everyday scenarios. Questions test whether models can reason about social norms (what's appropriate to say or do), emotional reactions, and cause-effect relationships across time. The dataset includes scenarios involving interpersonal interactions, social etiquette, and temporal ordering of events. Adversarial distractors specifically target models that misunderstand social context or temporal logic while remaining obviously wrong to humans.
Combines social understanding with temporal reasoning in a single benchmark, testing whether models understand not just what happens next but why it happens and how humans would react. Adversarial filtering specifically targets models that fail at social reasoning while humans succeed.
More comprehensive than social bias benchmarks because it tests positive social understanding (what's appropriate) rather than just detecting bias, and more grounded than abstract reasoning datasets.
machine-vs-human performance gap analysis
Medium confidence
Provides a calibrated benchmark where human accuracy (95.6%) is known and adversarial filtering ensures that questions hard for machines remain easy for humans. This enables precise measurement of the performance gap between models and humans on commonsense reasoning. Researchers can use this gap to quantify progress toward human-level understanding and identify which types of commonsense reasoning (physical, social, temporal) show the largest model-human gaps.
Provides a human-calibrated baseline (95.6% accuracy) with adversarial filtering that ensures the gap is meaningful — questions hard for machines are easy for humans, so the gap reflects genuine commonsense reasoning deficits rather than dataset ambiguity. This enables precise measurement of progress toward human-level understanding.
More interpretable than benchmarks without human baselines because the gap directly measures commonsense reasoning deficit, and more reliable than benchmarks where hard questions are hard for both humans and machines.
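Given per-category model accuracies, the gap analysis reduces to subtraction against the 95.6% human baseline. The per-category numbers below are hypothetical, for illustration only:

```python
HUMAN_ACCURACY = 0.956  # reported human accuracy on HellaSwag

# Hypothetical per-category model accuracies, for illustration only.
model_accuracy = {
    "physical": 0.88,
    "social": 0.91,
    "temporal": 0.85,
}

# Gap = human baseline minus model accuracy, per reasoning category.
gaps = {cat: round(HUMAN_ACCURACY - acc, 3) for cat, acc in model_accuracy.items()}
worst = max(gaps, key=gaps.get)

print(gaps)   # {'physical': 0.076, 'social': 0.046, 'temporal': 0.106}
print(worst)  # 'temporal' shows the largest gap in this made-up example
```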
dataset versioning and reproducibility
Medium confidence
Provides a fixed, versioned dataset of 70,000 examples with consistent train/validation/test splits, enabling reproducible evaluation across models and time. The dataset is hosted on Hugging Face with version control, allowing researchers to cite specific versions and ensuring that benchmark results are comparable across papers. The fixed nature of the dataset (no dynamic generation or augmentation) means that model improvements reflect genuine capability gains rather than dataset variance.
Provides a fixed, versioned dataset on Hugging Face with explicit train/validation/test splits, enabling reproducible evaluation and fair comparison across models. The fixed nature ensures that improvements reflect genuine capability gains rather than dataset variance or adversarial augmentation at test time.
More reproducible than dynamically-generated benchmarks because the dataset is fixed and versioned, and more comparable than benchmarks with multiple variants because all researchers use the same evaluation set.
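One way to make the fixed-dataset guarantee checkable is to fingerprint the evaluation split and compare against a pinned hash before each run. A standard-library sketch; the two items are dummy placeholders shaped like HellaSwag examples (`ctx`, `endings`, `label`), not real data:

```python
import hashlib
import json

def split_fingerprint(examples):
    """Stable SHA-256 over a canonical JSON serialization of the split."""
    canonical = json.dumps(examples, sort_keys=True, ensure_ascii=True)
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Dummy stand-ins for validation examples.
val_split = [
    {"ctx": "A man sits down at a piano.", "endings": ["a", "b", "c", "d"], "label": 0},
    {"ctx": "A woman ties her shoes.", "endings": ["a", "b", "c", "d"], "label": 2},
]

fp = split_fingerprint(val_split)

# Pin the fingerprint in your eval config; a mismatch means the data changed.
EXPECTED = fp  # in real use, a hard-coded hex string recorded on the first run
assert split_fingerprint(val_split) == EXPECTED
```

Because `sort_keys=True` canonicalizes each example's key order, the fingerprint is stable across serializations of the same split but changes if any example, label, or ordering changes.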
Capabilities are decomposed by AI analysis. Each maps to specific user intents and improves with match feedback.
Related Artifacts (sharing capabilities)
Artifacts that share capabilities with HellaSwag, ranked by overlap. Discovered automatically through the match graph.
RealWorldQA
Real-world visual QA requiring spatial reasoning.
Multiagent Debate
Implementation of a paper on Multiagent Debate
HellaSwag
Commonsense NLI with adversarial context mining
WinoGrande
44K pronoun resolution problems testing commonsense understanding.
hellaswag
Dataset by Rowan. 302,991 downloads.
DeepSeek: R1 Distill Qwen 32B
DeepSeek R1 Distill Qwen 32B is a distilled large language model based on [Qwen 2.5 32B](https://huggingface.co/Qwen/Qwen2.5-32B), using outputs from [DeepSeek R1](/deepseek/deepseek-r1). It outperforms OpenAI's o1-mini across various benchmarks, achieving new...
Best For
- ✓ LLM researchers evaluating frontier models on commonsense tasks
- ✓ Teams building reasoning-heavy applications who need diagnostic benchmarks
- ✓ Model developers tracking regression on human-aligned understanding
- ✓ Robotics teams evaluating if language models can reason about physical tasks
- ✓ Video understanding researchers benchmarking temporal reasoning
- ✓ Embodied AI developers testing if models understand action consequences
- ✓ Conversational AI teams building socially-aware chatbots
- ✓ Researchers studying if LLMs have learned social understanding or are pattern-matching
Known Limitations
- ⚠ Multiple-choice format doesn't test open-ended generation or explanation quality
- ⚠ Adversarial filtering is computationally expensive and may not catch all model-specific failure modes
- ⚠ Dataset is English-only; cross-lingual commonsense reasoning requires separate evaluation
- ⚠ Frontier models approaching the 95.6% human baseline saturate the benchmark, limiting its ability to discriminate among top models
- ⚠ Text-only format loses visual information that humans use for physical reasoning
- ⚠ Scenarios are limited to common activities; rare or specialized physical tasks are underrepresented
UnfragileRank
UnfragileRank is computed from adoption signals, documentation quality, ecosystem connectivity, match graph feedback, and freshness. No artifact can pay for a higher rank.
About
Commonsense reasoning benchmark with 70,000 multiple-choice questions requiring models to select the most plausible continuation of everyday scenarios. Uses adversarial filtering: incorrect options were generated by language models and selected specifically because they fool machines while being obvious to humans. Tests physical commonsense (what happens next in activities), social understanding, and temporal reasoning. Human accuracy is 95.6%; frontier LLMs now approach this level.
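The evaluation protocol is simple: score each of the four endings given the context, predict the argmax, and compare to the gold label. In real evaluations the scorer is a language model's (often length-normalized) log-likelihood; the word-overlap heuristic below is a runnable stand-in, and the example item is invented:

```python
def ending_score(context, ending):
    # Stand-in for a real LM's length-normalized log-likelihood:
    # fraction of the ending's words that also appear in the context.
    ctx_words = set(context.lower().split())
    end_words = ending.lower().split()
    return sum(w in ctx_words for w in end_words) / len(end_words)

def predict(context, endings):
    # Pick the ending the scorer finds most plausible.
    return max(range(len(endings)), key=lambda i: ending_score(context, endings[i]))

def accuracy(examples):
    correct = sum(predict(ex["ctx"], ex["endings"]) == ex["label"] for ex in examples)
    return correct / len(examples)

examples = [  # dummy item mimicking HellaSwag's ctx / endings / label layout
    {"ctx": "He dips the brush in the paint and",
     "endings": ["dips the brush again", "eats a sandwich", "flies away", "sings loudly"],
     "label": 0},
]
print(accuracy(examples))  # 1.0
```

Swapping `ending_score` for a real model's log-likelihood turns this into the standard multiple-choice HellaSwag evaluation loop.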