multi-task object instance annotation with polygon and rle-encoded segmentation masks
Provides 2.5 million manually-annotated object instances across 330,000 images with dual segmentation encoding: polygon coordinates for precise boundary definition and RLE (run-length encoding) for efficient storage and computation. Each instance includes bounding box coordinates in [x, y, width, height] format, category label from 80 object classes, and instance-level unique identifiers enabling per-object tracking and evaluation. Annotations are structured in JSON format with hierarchical organization linking images to annotations to categories, supporting both dense object scenes and sparse single-object images.
Unique: Dual segmentation encoding (polygon + RLE) in single dataset enables both precise boundary analysis and efficient computational workflows; 2.5M instances across 330K images provides scale unmatched by contemporaneous datasets (ImageNet had ~1.2M images, PASCAL VOC had ~11K images)
vs alternatives: Larger and more densely annotated than PASCAL VOC (11K images, ~6 objects/image) and more task-diverse than ImageNet (classification-only); RLE encoding makes mask decoding substantially faster than rasterizing polygons at load time
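The uncompressed RLE counts format mentioned above can be illustrated with a minimal pure-Python sketch (the helper names are hypothetical; real workflows typically use the pycocotools library, and COCO flattens masks in column-major order):

```python
def rle_encode(flat_mask):
    """Encode a flat binary mask as COCO-style uncompressed RLE counts.

    Counts are alternating run lengths, always starting with the run
    of zeros (which may be 0 if the mask begins with a 1).
    """
    counts, prev, run = [], 0, 0
    for v in flat_mask:
        if v == prev:
            run += 1
        else:
            counts.append(run)
            prev, run = v, 1
    counts.append(run)
    return counts

def rle_decode(counts, length):
    """Expand RLE counts back into a flat binary mask of `length` pixels."""
    out, val = [], 0
    for c in counts:
        out.extend([val] * c)
        val = 1 - val
    assert len(out) == length, "counts must sum to the mask size"
    return out

mask = [0, 0, 1, 1, 1, 0, 1, 0, 0]
rle = rle_encode(mask)                    # [2, 3, 1, 1, 2]
roundtrip = rle_decode(rle, len(mask))    # recovers the original mask
```

Storing a handful of run lengths instead of one value per pixel is what makes RLE masks compact and cheap to decode, which is the efficiency claim above.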
human keypoint detection annotation with standardized joint coordinate system
Provides keypoint annotations for all people in images using a standardized 17-joint skeleton model (nose, eyes, ears, shoulders, elbows, wrists, hips, knees, ankles) with (x, y, visibility) triplets per joint. The visibility flag follows the COCO convention: 0 = not labeled, 1 = labeled but not visible (occluded), 2 = labeled and visible. Keypoints are linked to parent person instances via instance ID, enabling pose estimation evaluation at both individual and crowd level. Annotations follow the COCO Keypoints task specification with a consistent coordinate system across the dataset.
Unique: Standardized 17-joint skeleton with explicit visibility flags enables robust evaluation of pose estimation under occlusion; linkage to instance segmentation masks allows joint-level accuracy analysis within person bounding boxes
vs alternatives: In-the-wild imagery is far more diverse than lab-captured Human3.6M (3.6M frames from a small set of subjects in a controlled motion-capture environment); explicit per-joint visibility flags enable occlusion-aware evaluation that many keypoint datasets do not support directly
community-driven dataset extension and variant creation with standardized evaluation
COCO ecosystem includes community-created extensions (COCO-Stuff, COCO DensePose, COCO Panoptic) that extend base dataset with additional annotations while maintaining compatibility with COCO API and evaluation infrastructure. Extensions follow COCO format and evaluation standards, enabling seamless integration into existing pipelines. Community contributions are vetted and published as official COCO variants, ensuring quality and standardization. Variant creation process is documented, enabling researchers to create custom extensions.
Unique: Standardized extension process enables community contributions while maintaining compatibility; official variants (Stuff, DensePose, Panoptic) are vetted and published, ensuring quality and discoverability
vs alternatives: More extensible than fixed datasets; community variants enable specialized use cases without forking; standardized format prevents fragmentation unlike ad-hoc dataset variants
image-to-text caption generation dataset with 5 natural language descriptions per image
Provides 1.65 million image-caption pairs (5 captions × 330K images) with natural language descriptions written by human annotators. Each caption is a free-form English sentence describing objects, actions, and scene context without enforced length limits or structured templates. Captions are stored in JSON format linked to image IDs, enabling training of vision-language models for image captioning, visual question answering, and cross-modal retrieval. Multiple captions per image capture linguistic diversity and alternative descriptions of the same visual content.
Unique: 5 captions per image (vs 1 in most datasets) captures linguistic diversity and enables robust evaluation of caption generation variability; 1.65M caption-image pairs provide scale for training large vision-language models
vs alternatives: Roughly 10x more images than Flickr30K (~31K images, which also uses 5 captions/image); larger scale than Visual Genome (108K images) while maintaining human-written natural language quality vs automated alt-text
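Grouping the JSON caption annotations by image ID, as a training pipeline would, can be sketched briefly (the annotation slice below is hypothetical but follows the real field names: image_id, id, caption):

```python
import json

# A minimal, hypothetical slice of a COCO captions annotation file.
captions_json = json.loads("""
{"annotations": [
  {"image_id": 1, "id": 10, "caption": "A dog runs on the beach."},
  {"image_id": 1, "id": 11, "caption": "A brown dog near the ocean."},
  {"image_id": 2, "id": 12, "caption": "Two cats sleep on a sofa."}
]}
""")

def captions_by_image(data):
    """Group caption annotations by image_id to form image-caption pairs."""
    grouped = {}
    for ann in data["annotations"]:
        grouped.setdefault(ann["image_id"], []).append(ann["caption"])
    return grouped

pairs = captions_by_image(captions_json)
```

Each image ending up with multiple reference captions is exactly what metrics like CIDEr exploit: a generated sentence is scored against all five references, not one.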
semantic segmentation with 171 extended object/stuff categories via coco-stuff variant
Extends base 80 object categories with 91 additional 'stuff' categories (background materials, textures, regions like sky, grass, wall) enabling dense semantic segmentation of entire images. Stuff categories are annotated as pixel-level masks without instance boundaries — all sky pixels are labeled 'sky' regardless of continuity. COCO-Stuff combines instance segmentation (80 objects) with semantic segmentation (171 total categories including stuff), stored as single-channel PNG masks where pixel value encodes category ID. Enables panoptic segmentation evaluation combining instance and stuff predictions.
Unique: 171-category taxonomy combining 80 instance objects + 91 stuff categories enables panoptic segmentation in single dataset; pixel-level masks for stuff enable dense scene understanding without instance boundaries
vs alternatives: More comprehensive than ADE20K (150 categories) and larger scale than Cityscapes (5K images); unified instance+stuff annotation enables panoptic evaluation unlike separate semantic/instance datasets
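Given the single-channel mask encoding described above (pixel value = category ID), per-category pixel statistics fall out of a short sketch; the IDs and names here are illustrative placeholders, not the real COCO-Stuff ID assignments:

```python
from collections import Counter

# Hypothetical category-ID mapping; real COCO-Stuff IDs differ.
CATEGORY_NAMES = {1: "person", 106: "grass", 119: "sky"}

def category_histogram(label_mask):
    """Count pixels per category in a single-channel semantic mask.

    `label_mask` is a 2D list of ints, mirroring the PNG masks where
    each pixel value encodes a category ID.
    """
    counts = Counter(v for row in label_mask for v in row)
    return {CATEGORY_NAMES.get(k, f"id_{k}"): n for k, n in counts.items()}

mask = [
    [119, 119, 119],
    [106, 106,   1],
]
hist = category_histogram(mask)
```

Note the stuff regions ("sky", "grass") carry no instance identity in this encoding, which is precisely what distinguishes stuff from thing annotations.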
panoptic segmentation with unified instance and stuff prediction evaluation
Combines instance segmentation (80 'thing' categories with instance boundaries) and semantic segmentation (53 'stuff' categories, a COCO-Stuff subset, without instance boundaries) into a single panoptic prediction task over 133 total categories. Evaluation uses the Panoptic Quality (PQ) metric decomposed into Segmentation Quality (SQ — mean IoU of matched segments) and Recognition Quality (RQ — an F1-style detection score). Panoptic masks encode both category ID and instance ID, enabling evaluation of both 'what' (category) and 'which' (instance identity) predictions. A standardized evaluation protocol with server-side metric computation ensures consistent benchmarking across submissions.
Unique: Panoptic Quality metric with explicit SQ/RQ decomposition enables fine-grained analysis of segmentation vs recognition errors; unified instance+stuff evaluation in a single task forces models to produce one coherent prediction covering both types
vs alternatives: More comprehensive than separate instance/semantic benchmarks; PQ metric better captures real-world scene understanding than independent metrics; standardized evaluation prevents metric gaming unlike custom evaluation scripts
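The PQ = SQ × RQ decomposition described above is simple enough to sketch directly; segments count as matched (true positive) when IoU exceeds 0.5, and the function name is hypothetical:

```python
def panoptic_quality(matched_ious, num_fp, num_fn):
    """Compute (PQ, SQ, RQ) from matched-segment IoUs and error counts.

    matched_ious: IoUs of true-positive matches (each > 0.5 by the
    matching rule). SQ averages IoU over matches; RQ is an F1-style
    detection score; PQ is their product.
    """
    tp = len(matched_ious)
    if tp == 0:
        return 0.0, 0.0, 0.0
    sq = sum(matched_ious) / tp
    rq = tp / (tp + 0.5 * num_fp + 0.5 * num_fn)
    return sq * rq, sq, rq

# Two matched segments, one spurious prediction, one missed segment.
pq, sq, rq = panoptic_quality([0.8, 0.6], num_fp=1, num_fn=1)
```

The decomposition makes error analysis concrete: a low SQ with high RQ means segments are found but sloppily delineated, while the reverse means clean masks on too few segments.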
dense human surface correspondence mapping via coco densepose variant
Provides dense 2D-to-3D correspondence maps for human bodies, mapping each pixel in a person instance to a location on a 3D human body model surface. Annotations include UV coordinates (a parameterization of the 3D body surface) and body part indices, enabling pixel-level body surface understanding. DensePose enables training of models that predict where each image pixel lands on a canonical 3D human body, useful for pose transfer, virtual try-on, and detailed human understanding. Introduced as the DensePose-COCO extension (2018), it augments keypoint annotations with dense surface coverage.
Unique: Dense 2D-to-3D surface correspondence enables pixel-level body understanding beyond skeleton keypoints; UV parameterization allows transfer of appearance and shape across different people and poses
vs alternatives: More detailed than keypoint-only annotations (17 joints vs millions of surface points); enables pose transfer unlike keypoint datasets; larger scale than DensePose-specific datasets
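The per-pixel annotation content can be pictured as an IUV map: each pixel carries a body part index I plus (U, V) surface coordinates. The sketch below is a simplified illustration; the part names and tiny map are hypothetical, and real DensePose annotations use 24 body parts with U, V in [0, 1]:

```python
# Illustrative subset only; real DensePose part indexing differs.
PART_NAMES = {0: "background", 1: "torso", 2: "right_hand"}

def lookup_surface_point(iuv_map, x, y):
    """Return the body part name and (U, V) surface coordinate at pixel (x, y)."""
    part, u, v = iuv_map[y][x]
    return PART_NAMES.get(part, f"part_{part}"), (u, v)

# A hypothetical 2x2 IUV map: (part_index, u, v) per pixel.
iuv = [
    [(0, 0.0, 0.0), (1, 0.2, 0.7)],
    [(1, 0.3, 0.6), (2, 0.9, 0.1)],
]
name, uv = lookup_surface_point(iuv, 1, 0)
```

Because (U, V) addresses a canonical body surface, two different people's elbows map to the same surface coordinates, which is what makes appearance transfer across people possible.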
standardized evaluation metrics and leaderboard submission infrastructure
Provides standardized evaluation metrics for each task (Average Precision for detection, mask AP for instance segmentation, OKS-based AP for keypoints, BLEU/METEOR/CIDEr for captions, PQ for panoptic) computed server-side on a held-out test set. The leaderboard system accepts structured JSON result submissions in COCO format, validates the format, computes metrics, and ranks submissions by the primary metric. Evaluation infrastructure ensures consistent benchmarking across all submissions and prevents metric gaming through standardized computation. Primary metrics per task: AP/AP50/AP75 for detection and instance segmentation, OKS-based AP for keypoints, CIDEr for captions, and PQ for panoptic segmentation.
Unique: Server-side metric computation prevents metric gaming and ensures consistency; task-specific metrics (AP, OKS, CIDEr, PQ) are standardized across all submissions enabling fair comparison; public leaderboard provides transparency and reproducibility
vs alternatives: More rigorous than self-reported metrics (prevents cherry-picking); standardized evaluation prevents metric implementation variations unlike custom evaluation scripts; public leaderboard enables community comparison unlike proprietary benchmarks
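The structured JSON result format the server validates can be sketched with a small checker; the field names match the COCO detection results format, while the helper itself is a hypothetical pre-submission sanity check, not the official validator:

```python
import json

REQUIRED_DETECTION_KEYS = {"image_id", "category_id", "bbox", "score"}

def validate_detection_results(results):
    """Check each entry against the COCO detection results shape:
    {"image_id": int, "category_id": int, "bbox": [x, y, w, h], "score": float}.

    Returns a list of (index, message) problems; empty means valid.
    """
    errors = []
    for i, r in enumerate(results):
        missing = REQUIRED_DETECTION_KEYS - r.keys()
        if missing:
            errors.append((i, f"missing keys: {sorted(missing)}"))
        elif len(r["bbox"]) != 4:
            errors.append((i, "bbox must be [x, y, width, height]"))
    return errors

submission = json.loads("""
[{"image_id": 42, "category_id": 18, "bbox": [10.0, 20.0, 50.0, 80.0], "score": 0.91},
 {"image_id": 42, "category_id": 18, "bbox": [10.0, 20.0], "score": 0.5}]
""")
problems = validate_detection_results(submission)
```

Running a check like this locally catches malformed entries before the server-side validator rejects a full submission.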
+3 more capabilities