real-world conversation dataset collection and curation
Aggregates over 1 million authentic user conversations with ChatGPT and GPT-4, captured through a research chatbot interface and preserved as full conversation threads with metadata including timestamps, user demographics (country, browser type), and conversation-level toxicity annotations. The dataset records genuine, unfiltered user intents across diverse domains, with no synthetic generation or prompt engineering, enabling analysis of actual AI usage patterns in production environments.
Unique: Captures unfiltered, real-world conversations from production ChatGPT/GPT-4 deployments rather than synthetic or crowdsourced data, preserving authentic user intents, failure modes, and edge cases with demographic metadata (country, browser) enabling stratified analysis across user populations
vs alternatives: Larger scale (1M+ conversations) and more authentic than crowdsourced datasets like ShareGPT, with explicit demographic metadata absent from most open conversation corpora, though less curated and safety-filtered than instruction-tuning datasets like FLAN or Alpaca
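A minimal loading sketch in Python for inspecting the corpus schema. It assumes a JSONL export with per-conversation fields such as conversation_id, timestamp, country, browser, toxic, and a list of turns; the file path and field names are assumptions, not documented schema.

```python
# Sketch: stream conversation records from a hypothetical JSONL export
# and inspect one record's fields. Path and field names are assumed.
import json

def iter_conversations(path: str):
    """Yield one conversation record per JSONL line."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            yield json.loads(line)

for record in iter_conversations("conversations.jsonl"):
    print(sorted(record.keys()))                      # e.g. ['browser', 'conversation', 'country', ...]
    print(record["country"], record["toxic"], len(record["conversation"]))
    break
```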
demographic-stratified conversation analysis and filtering
Enables filtering and analysis of conversations by user demographics (country, browser type) and conversation-level metadata, allowing researchers to slice the dataset by geographic region, browser type, or other recorded user attributes. The dataset structure preserves demographic fields as queryable attributes, supporting cohort analysis, geographic bias detection, and population-specific model evaluation without requiring external demographic inference.
Unique: Provides explicit demographic metadata (country, browser) at the conversation level, enabling direct stratified analysis without external demographic inference or proxy models, though limited to coarse-grained attributes compared to datasets with purpose-built demographic annotations
vs alternatives: More direct demographic stratification than ShareGPT or other conversation corpora, though less granular than purpose-built fairness datasets with rich demographic annotations
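A minimal stratification sketch with pandas, assuming "country" and "browser" columns in a JSONL export; the column names and path are assumptions.

```python
# Sketch: per-country conversation counts and a single geographic cohort slice.
import pandas as pd

df = pd.read_json("conversations.jsonl", lines=True)   # hypothetical export

# Per-country conversation counts for a quick cohort overview.
by_country = df.groupby("country").size().sort_values(ascending=False)
print(by_country.head(10))

# Slice one geographic cohort for downstream evaluation.
cohort = df[df["country"] == "BR"]
print(f"{len(cohort)} conversations in the BR cohort")
```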
toxicity annotation and content safety labeling
Provides conversation-level toxicity labels assigned through automated or human annotation, enabling researchers to identify and filter harmful content, study safety patterns, and train content moderation models. Labels are attached at the conversation level (not per-message), allowing downstream filtering of unsafe conversations or stratified analysis of toxicity distribution across user demographics and conversation types.
Unique: Provides real-world toxicity annotations from production ChatGPT/GPT-4 conversations rather than synthetic or crowdsourced toxic examples, capturing authentic harmful content patterns without artificial prompt engineering, though at conversation-level granularity rather than message-level
vs alternatives: More authentic toxicity examples than synthetic safety datasets, though coarser-grained labeling and less detailed harm taxonomy than purpose-built safety datasets like ToxiGen or RealToxicityPrompts
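A minimal filtering sketch that relies on a boolean conversation-level toxicity flag; the "toxic" and "country" column names are assumptions about the schema.

```python
# Sketch: drop flagged conversations and cross-tabulate toxicity rates by country.
import pandas as pd

df = pd.read_json("conversations.jsonl", lines=True)    # hypothetical export

safe = df[~df["toxic"]]                                  # keep only non-flagged conversations
print(f"kept {len(safe)} of {len(df)} conversations")

# Toxicity prevalence per country (conversation-level labels, not per-message).
rate_by_country = df.groupby("country")["toxic"].mean().sort_values(ascending=False)
print(rate_by_country.head(10))
```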
multilingual conversation corpus extraction and analysis
Provides access to non-English conversations within the dataset, enabling analysis of how users in different languages interact with predominantly English-trained LLMs and supporting training of multilingual or cross-lingual models. Conversations are preserved in their original language alongside country-of-origin metadata, and language can be assigned via detection, allowing language-specific filtering and comparative analysis across linguistic communities.
Unique: Includes real-world multilingual conversations from production ChatGPT/GPT-4 deployments, capturing authentic non-English user interactions and code-switching patterns, though limited in coverage and requiring language detection for explicit language identification
vs alternatives: More authentic multilingual examples than synthetic multilingual datasets, though smaller and less balanced than purpose-built multilingual corpora like FLORES or mC4
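A minimal extraction sketch using the langdetect package as one possible language-identification option; the turn schema (role/content keys) and file path are assumptions.

```python
# Sketch: tag each conversation with a detected language and keep non-English threads.
import json
from langdetect import detect
from langdetect.lang_detect_exception import LangDetectException

def user_text(record: dict) -> str:
    """Concatenate user turns; assumes each turn is {'role': ..., 'content': ...}."""
    return " ".join(t["content"] for t in record["conversation"] if t.get("role") == "user")

non_english = []
with open("conversations.jsonl", encoding="utf-8") as f:   # hypothetical export
    for line in f:
        record = json.loads(line)
        try:
            lang = detect(user_text(record))
        except LangDetectException:
            continue                                        # empty or undetectable text
        if lang != "en":
            non_english.append((lang, record))

print(f"{len(non_english)} non-English conversations")
```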
conversation metadata extraction and temporal analysis
Provides structured metadata for each conversation including timestamps, conversation IDs, user IDs, and conversation length, enabling temporal analysis of usage patterns, trend detection, and time-series studies of how user needs and LLM interactions evolved. Metadata is queryable and filterable, supporting cohort analysis by time period and correlation analysis between temporal patterns and conversation characteristics.
Unique: Preserves conversation-level timestamps from production ChatGPT/GPT-4 deployments, enabling temporal analysis of real-world usage evolution without synthetic time-shifting, though limited to conversation-level granularity without turn-level timing
vs alternatives: More authentic temporal data than synthetic datasets, though coarser-grained than specialized time-series conversation corpora with explicit turn-level timestamps
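A minimal time-series sketch over conversation-level timestamps; the "timestamp" column name and its parseable (e.g. ISO-8601) format are assumptions.

```python
# Sketch: monthly usage counts and average conversation length over time.
import pandas as pd

df = pd.read_json("conversations.jsonl", lines=True)        # hypothetical export
df["timestamp"] = pd.to_datetime(df["timestamp"], utc=True, errors="coerce")

# Conversations per calendar month, a simple usage-trend time series.
monthly = df.set_index("timestamp").resample("MS").size()
print(monthly)

# Average conversation length per month (turn count as a rough proxy).
df["n_turns"] = df["conversation"].str.len()
print(df.set_index("timestamp")["n_turns"].resample("MS").mean())
```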
domain and use-case diversity sampling and stratification
Provides conversations spanning diverse user intents and domains (coding help, creative writing, sensitive topics, general Q&A, etc.) captured from real users without prompt engineering, enabling researchers to sample representative conversations across use cases and train models on realistic domain distributions. The dataset's scale and authenticity allow stratified sampling by inferred domain or use case without requiring explicit domain labels.
Unique: Captures authentic domain diversity from real ChatGPT/GPT-4 users without synthetic prompt engineering, preserving natural distribution of use cases and user intents, though requiring post-hoc domain inference rather than explicit labels
vs alternatives: More authentic domain diversity than synthetic instruction-tuning datasets, though less explicitly labeled and curated than purpose-built domain-specific corpora
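A minimal stratified-sampling sketch that uses a keyword heuristic as a stand-in for real domain inference; the keyword lists, field names, and sample size are illustrative assumptions, and a real pipeline would use a proper intent classifier.

```python
# Sketch: post-hoc domain labelling with a keyword heuristic, then a stratified sample.
import pandas as pd

KEYWORDS = {
    "coding":  ("python", "function", "error", "compile", "bug"),
    "writing": ("story", "poem", "essay", "rewrite"),
    "qa":      ("what is", "explain", "how do"),
}

def infer_domain(text: str) -> str:
    lowered = text.lower()
    for domain, terms in KEYWORDS.items():
        if any(term in lowered for term in terms):
            return domain
    return "other"

df = pd.read_json("conversations.jsonl", lines=True)         # hypothetical export
df["first_user_turn"] = df["conversation"].apply(
    lambda turns: next((t["content"] for t in turns if t.get("role") == "user"), "")
)
df["domain"] = df["first_user_turn"].apply(infer_domain)

# Sample up to 1,000 conversations per inferred domain.
sample = df.groupby("domain", group_keys=False).apply(lambda g: g.sample(min(len(g), 1000)))
print(sample["domain"].value_counts())
```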
conversation metadata extraction and statistical summarization
The dataset includes structured metadata for each conversation (user demographics, browser/device info, conversation length, timestamps, toxicity labels) that can be extracted and aggregated for statistical analysis. Researchers can compute summary statistics (e.g., average conversation length by country, toxicity prevalence by domain) without processing full conversation text, enabling efficient exploratory analysis and dataset characterization. Metadata is stored in queryable fields, supporting both individual record lookup and bulk aggregation.
Unique: Provides structured metadata fields (country, browser, device, toxicity label) linked to each conversation, enabling efficient statistical summarization without processing full conversation text. Metadata is captured at collection time, preserving temporal and contextual information.
vs alternatives: More efficient for statistical analysis than processing full conversation text, but metadata quality and completeness are not explicitly documented, unlike purpose-built, validated datasets
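A minimal summarization sketch that aggregates metadata columns only; column names are assumed, and conversation length is derived here from the thread length rather than a documented metadata field.

```python
# Sketch: per-country summary statistics without reading any message text.
import pandas as pd

df = pd.read_json("conversations.jsonl", lines=True)         # hypothetical export
df["n_turns"] = df["conversation"].str.len()                  # turn count per thread

summary = (
    df.groupby("country")
      .agg(conversations=("country", "size"),
           mean_turns=("n_turns", "mean"),
           toxicity_rate=("toxic", "mean"))
      .sort_values("conversations", ascending=False)
)
print(summary.head(15))
```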
instruction-following and user intent distribution analysis
The dataset captures authentic user requests and model responses, enabling analysis of instruction-following patterns, user intent distribution, and how well models address diverse user needs. Researchers can analyze which types of instructions users provide, how models interpret and respond to them, and where misalignment or misunderstanding occurs. This supports studying instruction-following quality, identifying common user frustrations, and understanding the diversity of real-world use cases beyond typical benchmarks.
Unique: Captures authentic user instructions and model responses from production ChatGPT/GPT-4 deployments, reflecting real instruction-following challenges and user intent distribution rather than synthetic instruction-tuning data. Includes edge cases and sensitive topics that users genuinely request.
vs alternatives: More representative of real-world instruction-following patterns than synthetic instruction-tuning datasets, but lacks explicit success metrics or user satisfaction labels found in curated instruction-following benchmarks
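A minimal intent-profiling sketch that counts the leading word of each opening user request as a coarse proxy for intent distribution; the turn schema and path are assumptions, and a real analysis would cluster or classify full instructions.

```python
# Sketch: frequency of the leading word in each first user turn ("write", "explain", ...).
import json
from collections import Counter

leading_words = Counter()
with open("conversations.jsonl", encoding="utf-8") as f:     # hypothetical export
    for line in f:
        record = json.loads(line)
        first_user = next(
            (t["content"] for t in record["conversation"] if t.get("role") == "user"), ""
        )
        tokens = first_user.strip().lower().split()
        if tokens:
            leading_words[tokens[0]] += 1

print(leading_words.most_common(20))
```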