Visual Genome (Dataset 45/100) via “multimodal-dataset-integration-for-vision-language-models”
108K images with dense scene graphs and 5.4M region descriptions.
Unique: Integrates five complementary annotation types (scene graphs, region descriptions, object instances, attributes, QA pairs) across 108K images, enabling multi-task learning from diverse supervision signals. The dataset structure supports joint optimization for detection, grounding, reasoning, and attribute prediction in a single training pipeline (see the sketch after this entry).
vs others: More comprehensive than single-task datasets (COCO, Flickr30K); unlike datasets with isolated annotation types, it supports training unified models that exploit complementary supervision signals.
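A minimal sketch of the integration idea described above: the Visual Genome release ships each annotation type as a separate JSON file, so per-image multi-task records can be built by joining the files on the image identifier. File names and the exact key layout are assumptions based on the public download, not a prescribed pipeline.

```python
import json
from collections import defaultdict

# Assumed local paths; the Visual Genome download distributes one JSON
# file per annotation type (names as in the public release, hedged here).
ANNOTATION_FILES = {
    "objects": "objects.json",
    "attributes": "attributes.json",
    "relationships": "relationships.json",
    "regions": "region_descriptions.json",
    "qa": "question_answers.json",
}

def build_multitask_index(files=ANNOTATION_FILES):
    """Merge the separate annotation files into one record per image,
    keyed by image id, so a single sample carries every supervision type."""
    index = defaultdict(dict)
    for task, path in files.items():
        with open(path) as f:
            for entry in json.load(f):
                # Each file is a list of per-image dicts; most use
                # "image_id", some releases use "id" (assumption).
                image_id = entry.get("image_id", entry.get("id"))
                index[image_id][task] = entry
    return index

if __name__ == "__main__":
    index = build_multitask_index()
    # A joint training loop can then draw all supervision signals for an
    # image from one record, e.g. (image id is illustrative only):
    # sample = index[2317]
    #   detection targets  <- sample["objects"]
    #   grounding targets  <- sample["regions"]
    #   reasoning targets  <- sample["qa"]
    #   attribute targets  <- sample["attributes"]
```

Keying every annotation type to the same image id is what makes the single-pipeline joint optimization possible: one data loader yields one record per image, and each task head consumes its own slice of that record.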