SafetyBench
Dataset · Free · 11K safety evaluation questions across 7 categories.
Capabilities (6 decomposed)
multilingual safety evaluation dataset with structured multiple-choice questions
Medium confidence: Provides 11,435 curated multiple-choice questions across 7 safety categories in both Chinese and English, with standardized JSON structure containing question ID, category, question text, 4-option choices, and ground-truth answer mappings (0->A, 1->B, 2->C, 3->D). Data is hosted on Hugging Face and downloadable via shell script or Python datasets library, enabling reproducible safety benchmarking across language variants.
Combines 11,435 questions across 7 safety categories with explicit bilingual (Chinese/English) support and category-level granularity, rather than single-language or aggregate safety scoring. Includes both full test sets and filtered subsets (test_zh_subset with 300 questions per category) to accommodate different evaluation scales.
Larger and more category-diverse than most single-language safety benchmarks, with native bilingual support enabling cross-linguistic safety analysis that monolingual datasets cannot provide.
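A minimal sketch, assuming the record layout described above, of reading one of the downloaded test files. The JSON field names (id, category, question, options) are assumptions inferred from this listing, not confirmed keys; check them against the actual files (e.g. test_en.json).

```python
import json

# Documented ground-truth mapping: 0->A, 1->B, 2->C, 3->D.
INDEX_TO_LETTER = {0: "A", 1: "B", 2: "C", 3: "D"}

def load_questions(path):
    """Load a SafetyBench test file and yield question records.

    The keys used here ("id", "category", "question", "options") are
    assumptions about the file layout; adjust to the real field names.
    """
    with open(path, encoding="utf-8") as f:
        data = json.load(f)
    for item in data:
        yield {
            "id": item["id"],
            "category": item["category"],
            "question": item["question"],
            "options": item["options"],  # list of 4 answer strings
        }

if __name__ == "__main__":
    first = next(load_questions("test_en.json"))
    print(first["id"], first["category"], first["question"][:60])
```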
zero-shot and few-shot evaluation harness with prompt templating
Medium confidence: Implements dual evaluation modes (zero-shot and five-shot) with carefully engineered prompt templates that present questions directly or with 5 in-context examples per category. The system constructs prompts, sends them to target models, and extracts predicted answers from model responses using configurable parsing logic. Example implementation provided in evaluate_baichuan.py demonstrates the full pipeline for any model with text generation capability.
Provides dual evaluation modes with explicit few-shot example sets (5 per category) rather than random in-context learning, enabling controlled comparison of zero-shot vs few-shot safety performance. Includes reference implementation (evaluate_baichuan.py) showing answer extraction patterns for production use.
More systematic than ad-hoc prompt engineering because it standardizes prompt templates and provides category-specific few-shot examples, enabling reproducible cross-model comparisons that single-prompt benchmarks cannot guarantee.
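The exact prompt wording used by the benchmark is not reproduced here; the sketch below only illustrates the documented pattern of a zero-shot prompt versus a five-shot prompt built from category-specific dev examples, plus a naive letter-extraction step. Field names ('question', 'options', 'answer') are assumptions; evaluate_baichuan.py is the authoritative reference for real parsing.

```python
import re

LETTERS = ["A", "B", "C", "D"]

def format_question(q):
    """Render a question with lettered options; keys are assumed field names."""
    opts = "\n".join(f"({LETTERS[i]}) {opt}" for i, opt in enumerate(q["options"]))
    return f"Question: {q['question']}\n{opts}\nAnswer:"

def build_prompt(q, shots=None):
    """Zero-shot prompt by default; pass five dev examples from the same
    category (each carrying an 'answer' index) to get the five-shot variant."""
    parts = [format_question(ex) + f" ({LETTERS[ex['answer']]})" for ex in (shots or [])]
    parts.append(format_question(q))
    return "\n\n".join(parts)

def extract_answer(response):
    """Naive parse: first standalone A/B/C/D in the model output.
    Production parsing is model-specific (see evaluate_baichuan.py)."""
    m = re.search(r"\b([ABCD])\b", response)
    return LETTERS.index(m.group(1)) if m else None
```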
category-level safety performance breakdown and fine-grained analysis
Medium confidence: Organizes 11,435 questions into 7 distinct safety categories, enabling per-category accuracy calculation and comparative analysis of model strengths/weaknesses across harm types. The evaluation pipeline computes metrics at both aggregate and category levels, allowing researchers to identify which safety domains (e.g., illegal activities, violence, bias) a model handles well vs poorly. Leaderboard submission format requires predictions per question ID, enabling automated category-level metric computation.
Explicitly structures evaluation around 7 safety categories rather than single aggregate score, enabling fine-grained analysis of model safety across specific harm domains. Leaderboard infrastructure supports category-level metric computation from per-question predictions.
More diagnostic than single-score safety benchmarks because category-level breakdown reveals which specific harm types a model handles poorly, enabling targeted safety improvements rather than generic safety training.
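A sketch of the per-category breakdown, assuming each question record exposes a 'category' field and, where labels are available (the dev files), an 'answer' index; for the hidden test set these metrics are returned by the leaderboard rather than computed locally.

```python
from collections import defaultdict

def category_accuracy(questions, predictions):
    """Aggregate and per-category accuracy.

    `questions`: iterable of records with 'id', 'category', 'answer'
                 (assumed field names).
    `predictions`: dict mapping question id -> predicted index (0-3).
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for q in questions:
        total[q["category"]] += 1
        if predictions.get(q["id"]) == q["answer"]:
            correct[q["category"]] += 1
    per_category = {c: correct[c] / total[c] for c in total}
    overall = sum(correct.values()) / max(sum(total.values()), 1)
    return overall, per_category
```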
bilingual dataset download and curation with hugging face integration
Medium confidence: Provides dual download mechanisms (shell script via download_data.sh and Python via download_data.py using Hugging Face datasets library) to retrieve 11,435 questions in both Chinese and English from Hugging Face Hub. Data files include full test sets (test_en.json, test_zh.json), filtered Chinese subset (test_zh_subset.json with 300 questions per category), and few-shot examples (dev_en.json, dev_zh.json). Integration with Hugging Face datasets library enables programmatic access, caching, and version control.
Provides dual download mechanisms (shell script and Python library) with explicit support for filtered subsets (test_zh_subset.json) and language-specific files, rather than monolithic dataset downloads. Native Hugging Face datasets library integration enables programmatic access and caching.
More flexible than manual download because it supports both scripted and programmatic access, filtered subsets for smaller evaluations, and Hugging Face caching for faster repeated access compared to static file distribution.
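For programmatic access the files can also be pulled directly with huggingface_hub, which caches downloads locally. The repo id below is an assumption for illustration; the real location is encoded in download_data.py / download_data.sh.

```python
from huggingface_hub import hf_hub_download

# Assumed repo id — replace with the dataset's actual Hugging Face location.
REPO_ID = "thu-coai/SafetyBench"

# Files named in this listing: full test sets, filtered Chinese subset,
# and the few-shot dev examples.
FILES = ["test_en.json", "test_zh.json", "test_zh_subset.json",
         "dev_en.json", "dev_zh.json"]

local_paths = {
    name: hf_hub_download(repo_id=REPO_ID, filename=name, repo_type="dataset")
    for name in FILES
}
print(local_paths)
```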
leaderboard submission and standardized result formatting
Medium confidence: Defines standardized JSON submission format for leaderboard ranking: UTF-8 encoded JSON with question IDs as keys and predicted answer indices (0-3) as values. Submission infrastructure at llmbench.ai/safety accepts formatted results and computes aggregate and category-level metrics for public leaderboard ranking. Standardized format enables automated metric computation and fair cross-model comparison.
Defines explicit JSON submission format with question ID keys and answer index values (0-3 mapping), enabling automated metric computation and fair leaderboard ranking. Standardized format ensures cross-implementation comparability.
More rigorous than ad-hoc result reporting because standardized format prevents metric computation errors and enables automated leaderboard updates, whereas free-form submissions require manual validation and metric recalculation.
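A sketch of serializing predictions in the documented submission format; whether question ids appear as strings or integers in the JSON keys is an assumption to confirm against the leaderboard documentation.

```python
import json

def write_submission(predictions, path="submission.json"):
    """Serialize {question_id: predicted_index} as UTF-8 JSON for upload
    to the leaderboard at llmbench.ai/safety.

    Indices must stay in 0-3 (0->A ... 3->D); out-of-range values are
    rejected here rather than silently scored wrong.
    """
    bad = {qid: idx for qid, idx in predictions.items() if idx not in (0, 1, 2, 3)}
    if bad:
        raise ValueError(f"Out-of-range predictions: {bad}")
    with open(path, "w", encoding="utf-8") as f:
        json.dump(predictions, f, ensure_ascii=False)

# Example (hypothetical ids): write_submission({"0": 2, "1": 0})
```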
filtered chinese subset for resource-constrained evaluation
Medium confidence: Provides test_zh_subset.json containing 300 questions per safety category (2,100 total) filtered from the full Chinese test set to remove sensitive keywords, enabling smaller-scale safety evaluation for resource-constrained scenarios. The subset maintains category balance and representativeness while reducing evaluation cost by ~82% compared to the full 11,435-question dataset. Useful for rapid prototyping, continuous integration, or low-latency evaluation pipelines.
Provides explicit filtered subset (test_zh_subset.json) with 300 questions per category and sensitive keyword filtering, rather than requiring users to manually sample or filter the full dataset. Enables rapid evaluation while maintaining category balance.
More efficient than random sampling from full dataset because it provides pre-filtered, category-balanced subset with documented filtering approach, reducing evaluation time by ~82% while maintaining statistical representativeness.
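A small sanity check, assuming a 'category' field in each record, that the downloaded subset really contains 300 questions per category before it is wired into a CI pipeline.

```python
import json
from collections import Counter

def check_subset_balance(path="test_zh_subset.json", expected=300):
    """Verify the filtered Chinese subset keeps 300 questions per category,
    as described above. The 'category' key is an assumed field name."""
    with open(path, encoding="utf-8") as f:
        counts = Counter(item["category"] for item in json.load(f))
    for category, n in counts.items():
        assert n == expected, f"{category}: {n} questions (expected {expected})"
    return counts
```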
Capabilities are decomposed by AI analysis. Each maps to specific user intents and improves with match feedback.
Related Artifacts (sharing capabilities)
Artifacts that share capabilities with SafetyBench, ranked by overlap. Discovered automatically through the match graph.
SafetyBench Eval
11K safety evaluation questions across 7 categories.
Llama Guard 3 8B
Llama Guard 3 is a Llama-3.1-8B pretrained model, fine-tuned for content safety classification. Similar to previous versions, it can be used to classify content in both LLM inputs (prompt classification)...
WildGuard
Allen AI's safety classification dataset and model.
WildBench
Real-world user query benchmark judged by GPT-4.
mmlu
Dataset by cais. 439,045 downloads.
Llama Guard
Meta's LLM safety classifier for content policy enforcement.
Best For
- ✓LLM researchers benchmarking model safety across languages
- ✓teams building multilingual AI systems requiring safety validation
- ✓organizations conducting compliance audits of LLM deployments
- ✓researchers comparing zero-shot vs few-shot safety performance across model families
- ✓teams evaluating proprietary or closed-source models via API
- ✓practitioners optimizing prompt engineering for safety-critical applications
- ✓safety researchers analyzing model vulnerabilities across specific harm categories
- ✓compliance teams generating detailed safety audit reports for regulators
Known Limitations
- ⚠dataset is static and fixed at 11,435 questions — no dynamic expansion or user-contributed questions
- ⚠multiple-choice format may not capture nuanced safety reasoning or edge cases requiring open-ended responses
- ⚠Chinese subset (test_zh_subset.json) is filtered to 300 questions per category, reducing statistical power for fine-grained analysis
- ⚠no built-in handling of model-specific tokenization or prompt format variations beyond provided templates
- ⚠prompt templates are fixed and may require manual tuning for specific model architectures (acknowledged in docs: 'minor changes to prompts were necessary for some models')
- ⚠answer extraction logic is model-dependent and may fail on models with unusual output formatting or reasoning chains
Requirements
Input / Output
UnfragileRank
UnfragileRank is computed from adoption signals, documentation quality, ecosystem connectivity, match graph feedback, and freshness. No artifact can pay for a higher rank.
About
Comprehensive safety evaluation benchmark for LLMs covering 11,435 multiple-choice questions across 7 safety categories in both Chinese and English, measuring model safety with fine-grained category analysis.
Categories
Alternatives to SafetyBench
Hugging Face: The GitHub for AI — 500K+ models, datasets, Spaces, Inference API, hub for open-source AI.
FLUX, Stable Diffusion, SDXL, SD3, LoRA, Fine Tuning, DreamBooth, Training, Automatic1111, Forge WebUI, SwarmUI, DeepFake, TTS, Animation, Text To Video, Tutorials, Guides, Lectures, Courses, ComfyUI, Google Colab, RunPod, Kaggle, NoteBooks, ControlNet, TTS, Voice Cloning, AI, AI News, ML, ML News
Are you the builder of SafetyBench?
Claim this artifact to get a verified badge, access match analytics, see which intents users search for, and manage your listing.
Data Sources