ROOTS
Dataset · Free
BigScience's curated multilingual dataset for BLOOM.
Capabilities: 7 decomposed
multilingual pretraining corpus assembly with explicit language coverage
Medium confidence: ROOTS provides a curated collection of text in 46 natural languages and 13 programming languages, organized into distinct data sources with documented provenance, enabling language-balanced pretraining without requiring custom data collection. The dataset uses a source-level organization pattern where each language's data is grouped by origin (web crawls, books, code repositories, etc.), allowing trainers to inspect and weight language contributions independently during model training.
Combines explicit data governance documentation (sourcing rationale, licensing, potential biases) with source-level granularity, allowing researchers to inspect and selectively use subsets rather than treating the corpus as a black box. This architectural choice prioritizes transparency over convenience.
More transparent and auditable than Common Crawl-only datasets, with documented language selection rationale; more diverse than predominantly English corpora like The Pile, but smaller and more curated than raw web-scale multilingual datasets like mC4
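A minimal sketch of per-language weighting with Hugging Face Datasets, assuming ROOTS subsets are published per language and source under a `bigscience-data/roots_<lang>_<source>` naming pattern; the repo ids and mixing weights below are illustrative, and access to ROOTS subsets may be gated:

```python
# Sketch: weight language contributions explicitly at load time.
# Repo ids follow the assumed bigscience-data/roots_<lang>_<source>
# pattern and are illustrative; adjust to the actual Hub listing.
from datasets import load_dataset, interleave_datasets

en = load_dataset("bigscience-data/roots_en_wikipedia", split="train", streaming=True)
fr = load_dataset("bigscience-data/roots_fr_wikipedia", split="train", streaming=True)
es = load_dataset("bigscience-data/roots_es_wikipedia", split="train", streaming=True)

# Sample according to explicit probabilities instead of raw subset sizes.
mixed = interleave_datasets([en, fr, es], probabilities=[0.5, 0.3, 0.2], seed=42)

for doc in mixed.take(3):          # "text" field name is an assumption
    print(doc["text"][:80])
```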
source-level data filtering and composition control
Medium confidence: ROOTS organizes data into discrete sources (e.g., 'Wikipedia', 'GitHub', 'Books', 'Web Crawl') that can be independently selected, weighted, or excluded during dataset loading. This enables trainers to construct custom training mixes without re-downloading or reprocessing the entire corpus, using Hugging Face Datasets' filtering and streaming APIs to apply source-based selection at load time.
Implements source-level composition as a first-class operation rather than post-hoc filtering, allowing researchers to reason about data provenance and make deliberate choices about which sources contribute to training. This is enforced through the dataset's hierarchical structure in Hugging Face Hub.
More flexible than fixed-composition datasets like C4, but less granular than document-level filtering systems; enables reproducible data composition decisions without requiring custom preprocessing pipelines
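As a sketch of source-level selection, one can enumerate the published subsets and exclude sources by name before loading anything; `HfApi.list_datasets` is a real `huggingface_hub` call, while the `bigscience-data` author filter and the exclusion terms are assumptions to adapt:

```python
# Sketch: pick sources by name before any download happens.
from huggingface_hub import HfApi

api = HfApi()
subsets = [d.id for d in api.list_datasets(author="bigscience-data", search="roots")]

EXCLUDE = ("common_crawl",)  # illustrative: drop crawl-derived sources
selected = [s for s in subsets if not any(term in s for term in EXCLUDE)]
print(f"selected {len(selected)} of {len(subsets)} subsets")
```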
language-stratified data access with per-language source documentation
Medium confidence: ROOTS structures data with language as a primary dimension, providing separate subsets for each of 46 languages plus 13 programming languages. Each language's data includes documentation of which sources contributed, their relative proportions, and known quality/bias characteristics, enabling language-specific analysis and informed decisions about language inclusion in multilingual training.
Treats language as a structural dimension of the dataset rather than a filtering criterion, with dedicated documentation per language covering sources, proportions, and known limitations. This enables language-aware training strategies that would be difficult with language-agnostic corpora.
More language-aware than generic web-scale datasets; provides explicit documentation of language composition unlike mC4 or other derived multilingual corpora, enabling informed decisions about language inclusion
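Building on the same listing call, a small sketch that groups subsets by language code, assuming repo ids embed the language as `roots_<lang>_<source>`; the parsing below is a heuristic, not a documented contract:

```python
# Sketch: index ROOTS subsets by language for per-language inspection.
from collections import defaultdict
from huggingface_hub import HfApi

by_lang = defaultdict(list)
for info in HfApi().list_datasets(author="bigscience-data", search="roots"):
    parts = info.id.split("/")[-1].split("_")  # e.g. roots_fr_wikipedia
    if len(parts) >= 3 and parts[0] == "roots":
        by_lang[parts[1]].append(info.id)

for lang, repos in sorted(by_lang.items()):
    print(f"{lang}: {len(repos)} sources")
```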
programming language corpus with source-specific code quality tiers
Medium confidence: ROOTS includes 13 programming languages sourced from GitHub, Stack Overflow, and other code repositories, with implicit quality stratification based on source (e.g., GitHub stars, Stack Overflow votes). The corpus preserves source metadata allowing trainers to filter by code quality signals without requiring custom code quality evaluation, enabling code-focused model training with quality control.
Includes programming languages as a first-class data dimension with source-based quality signals (GitHub stars, Stack Overflow votes) preserved in metadata, enabling quality-aware code training without requiring external code quality evaluation systems.
More comprehensive than single-source code datasets (e.g., GitHub-only), with implicit quality signals; smaller but more curated than raw GitHub dumps, making it suitable for production model training
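A hedged sketch of quality gating on preserved source metadata; the repo id, the `meta` field, and the `stars` key are all assumptions about the schema, so inspect a few records before relying on them:

```python
# Sketch: filter code documents by a source-derived quality signal.
from datasets import load_dataset

code = load_dataset(
    "bigscience-data/roots_code_github",  # illustrative repo id
    split="train",
    streaming=True,
)

def high_quality(example):
    meta = example.get("meta") or {}   # metadata schema is an assumption
    return meta.get("stars", 0) >= 10  # keep repos above a star threshold

filtered = code.filter(high_quality)
```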
dataset streaming and partial loading for memory-constrained environments
Medium confidence: ROOTS integrates with Hugging Face Datasets' streaming API, allowing researchers to load and process data without downloading the entire corpus to disk. Streaming uses an iterator-based pattern where documents are fetched on-demand from the Hub, enabling training on machines with limited storage while maintaining full dataset access through network I/O.
Leverages Hugging Face Datasets' streaming infrastructure to enable on-demand data access without local storage, using an iterator-based pattern that integrates seamlessly with PyTorch DataLoaders and distributed training frameworks.
More storage-efficient than downloading full datasets; comparable to other Hub-hosted datasets but with better documentation and integration for multilingual training workflows
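The streaming pattern itself is standard Hugging Face Datasets usage; a minimal sketch (repo id illustrative) that feeds a streamed subset into a PyTorch DataLoader:

```python
# Sketch: stream a ROOTS subset into a PyTorch DataLoader without
# materializing the corpus on disk.
from datasets import load_dataset
from torch.utils.data import DataLoader

stream = load_dataset(
    "bigscience-data/roots_en_wikipedia",  # illustrative repo id
    split="train",
    streaming=True,
)
stream = stream.shuffle(buffer_size=10_000, seed=42)  # buffered shuffle

loader = DataLoader(stream.with_format("torch"), batch_size=8)
for batch in loader:
    # batch["text"] is a list of raw documents; tokenize here
    break
```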
data governance and licensing metadata with per-source attribution
Medium confidence: ROOTS includes explicit licensing information and sourcing documentation for each data source, stored as structured metadata alongside the corpus. This enables automated license compliance checking and attribution generation, allowing trainers to verify that their training mix respects licensing constraints and to generate proper attribution statements for model cards.
Provides explicit per-source licensing and governance documentation as a first-class dataset feature, rather than burying it in README files. This enables programmatic license compliance checking and reproducible attribution generation.
More transparent than datasets with minimal licensing information; comparable to other BigScience datasets but more comprehensive than typical web-scale corpora which lack detailed licensing metadata
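A sketch of what programmatic compliance checking could look like; the per-source license map below is hypothetical and should be populated from the actual dataset cards and governance documentation:

```python
# Sketch: screen sources against a license allowlist before training.
# The source -> license mapping is hypothetical; derive it from the
# per-source documentation shipped with ROOTS.
ALLOWED = {"cc-by-4.0", "cc-by-sa-4.0", "apache-2.0", "mit"}

source_licenses = {
    "bigscience-data/roots_en_wikipedia": "cc-by-sa-4.0",  # illustrative
    "bigscience-data/roots_code_github": "mixed",          # illustrative
}

compliant = {s for s, lic in source_licenses.items() if lic in ALLOWED}
flagged = set(source_licenses) - compliant
print("compliant:", sorted(compliant))
print("needs review:", sorted(flagged))
```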
community-curated data quality annotations and bias documentation
Medium confidence: ROOTS includes community-contributed annotations documenting known biases, quality issues, and limitations in specific sources, stored as structured metadata. These annotations are curated by BigScience and the research community, providing qualitative assessments of data quality and potential harms that complement quantitative metrics, enabling informed decisions about source inclusion.
Incorporates community-curated bias and quality annotations as dataset metadata, treating data governance as an ongoing collaborative process rather than a one-time curation effort. This enables researchers to make informed decisions about data inclusion based on documented concerns.
More transparent about known biases than datasets with minimal documentation; enables bias-aware training unlike datasets that treat data as neutral. Comparable to other BigScience datasets but with more extensive community input.
Capabilities are decomposed by AI analysis. Each maps to specific user intents and improves with match feedback.
Related Artifacts: sharing capabilities
Artifacts that share capabilities with ROOTS, ranked by overlap. Discovered automatically through the match graph.
MAP-Neo
Fully open bilingual model with transparent training.
RedPajama v2
30 trillion token web dataset with 40+ quality signals per document.
StarCoder Data
783 GB curated code dataset from 86 languages with PII redaction.
c4
Dataset by allenai. 698,456 downloads.
OPUS
Massive parallel corpus for machine translation.
Dolma
Allen AI's 3T token dataset for fully reproducible LLM training.
Best For
- ✓Research teams training multilingual foundation models
- ✓Organizations building non-English language models with transparency requirements
- ✓Teams reproducing or extending BLOOM-family models
- ✓Researchers experimenting with data composition's effect on model quality
- ✓Teams with specific licensing or quality requirements that exclude certain sources
- ✓Organizations training domain-specific models (e.g., code-focused) from a multilingual base
- ✓Multilingual model developers who need language-aware data composition
- ✓Teams building models for low-resource languages and needing to understand data limitations
Known Limitations
- ⚠Dataset is fixed and immutable — no mechanism for incremental updates or corrections post-release
- ⚠Language coverage is unbalanced; some languages have significantly less data than others due to web availability
- ⚠No built-in deduplication across sources — duplicate content may exist between web crawls and other sources
- ⚠Requires substantial storage (terabytes) and compute for full preprocessing and tokenization
- ⚠Source-level filtering is coarse-grained — cannot filter within a source (e.g., exclude low-quality Wikipedia articles)
- ⚠No built-in tools for measuring source quality or overlap; requires external analysis
Requirements
Input / Output
UnfragileRank
UnfragileRank is computed from adoption signals, documentation quality, ecosystem connectivity, match graph feedback, and freshness. No artifact can pay for a higher rank.
About
BigScience's curated multilingual dataset used to train BLOOM, covering 46 natural languages and 13 programming languages with explicit data governance, sourcing documentation, and community-driven curation.
Categories
Alternatives to ROOTS
Hugging Face Hub
The GitHub for AI — 500K+ models, datasets, Spaces, Inference API, hub for open-source AI.