* ⭐ 02/2022: [data2vec: A General Framework for Self-supervised Learning in Speech, Vision and Language (data2vec)](https://proceedings.mlr.press/v162/baevski22a.html)
* ⭐ 01/2022: [BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation (BLIP)](https://arxiv.org/abs/2201.12086)
  * Unique: Implements a two-stage bootstrapping pipeline (CapFilt): a captioner generates synthetic captions for noisy web images, then a filter, trained as a binary image-text matching classifier, removes low-quality captions, yielding a cleaned, self-improving dataset (see the sketch below). This avoids manual annotation while addressing web-scale data noise, a key differentiator from captioning models trained purely on supervised data.
  * vs others: Achieves a +2.8% CIDEr improvement over the prior SOTA by combining bootstrapped data cleaning with unified encoder-decoder pre-training; it outperforms separate captioning pipelines because the captioner and filter are initialized from the same pre-trained model (each fine-tuned individually on COCO), so cleaning and generation share representations rather than running as independent stages.
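A minimal sketch of the CapFilt-style bootstrapping loop described above. The `Captioner` and `Filter` classes are hypothetical stand-ins (not BLIP's API): in the actual paper the captioner is an image-grounded text decoder and the filter is an image-text matching (ITM) head, both fine-tuned on COCO; the placeholder logic here only illustrates the data flow.

```python
# Sketch of a CapFilt-style bootstrapping pass over noisy web image-text pairs.
# All class and function names are illustrative, not from the BLIP codebase.
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class Pair:
    image: str        # image identifier / path
    caption: str      # associated text
    synthetic: bool   # True if the caption was produced by the captioner


class Captioner:
    """Stand-in for BLIP's fine-tuned image-grounded text decoder."""

    def generate(self, image: str) -> str:
        # Real model: a caption sampled conditioned on the image.
        return f"a synthetic caption for {image}"


class Filter:
    """Stand-in for BLIP's binary image-text matching (ITM) classifier."""

    def matches(self, image: str, caption: str) -> bool:
        # Real model: a matched/unmatched decision over the fused
        # image-text representation. Placeholder heuristic here.
        return len(caption.split()) > 2


def capfilt(web_pairs: List[Tuple[str, str]],
            captioner: Captioner, flt: Filter) -> List[Pair]:
    """Bootstrap a cleaned dataset from noisy (image, web_text) pairs."""
    kept: List[Pair] = []
    for image, web_text in web_pairs:
        # Keep the original web text only if the filter judges it matched.
        if flt.matches(image, web_text):
            kept.append(Pair(image, web_text, synthetic=False))
        # Generate a synthetic caption; keep it if it also passes the filter.
        synth = captioner.generate(image)
        if flt.matches(image, synth):
            kept.append(Pair(image, synth, synthetic=True))
    return kept


# Toy usage: bootstrap a small web corpus of (image, alt-text) pairs.
pairs = [("img_001.jpg", "cheap flights"), ("img_002.jpg", "a dog on a beach")]
cleaned = capfilt(pairs, Captioner(), Filter())
print(f"kept {len(cleaned)} pairs, {sum(p.synthetic for p in cleaned)} synthetic")
```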