via “multilingual sentence embedding generation with contrastive learning”
sentence-similarity model. 1,664,239 downloads.
Unique: Uses a two-stage training approach: masked language modeling followed by contrastive learning on 1B+ weakly supervised sentence pairs mined from web data. Achieves state-of-the-art MTEB performance with a compact 110M-parameter footprint suitable for on-premise deployment, and implements in-batch negatives with hard negative mining rather than external memory banks, reducing training complexity while preserving representation quality.
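The in-batch-negatives objective described here is essentially InfoNCE over a batch similarity matrix, with each pair's mined hard negative appended as one extra logit column. Below is a minimal PyTorch sketch of that idea; the function name, the temperature value, and the one-hard-negative-per-pair shape are illustrative assumptions, not this model's actual training code.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(anchor, positive, hard_negative, temperature=0.05):
    """InfoNCE with in-batch negatives plus one mined hard negative per pair.

    anchor, positive, hard_negative: (B, D) sentence embeddings.
    Every other positive in the batch serves as a negative for a given
    anchor, so no external memory bank is required.
    """
    a = F.normalize(anchor, dim=-1)
    p = F.normalize(positive, dim=-1)
    n = F.normalize(hard_negative, dim=-1)

    in_batch = a @ p.T                        # (B, B): diagonal = true pairs
    hard = (a * n).sum(dim=-1, keepdim=True)  # (B, 1): mined hard negatives
    logits = torch.cat([in_batch, hard], dim=1) / temperature

    # The correct "class" for row i is column i (its own positive).
    labels = torch.arange(a.size(0), device=a.device)
    return F.cross_entropy(logits, labels)
```

Because the negatives come from the batch itself, larger batches supply more negatives for free, which is the main reason this setup can skip external memory banks.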
vs others: Outperforms OpenAI's text-embedding-3-small on MTEB semantic search tasks while being 10x smaller, fully open-source, and deployable without API calls or rate limits, making it ideal for privacy-sensitive or high-volume applications.
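Since the model runs locally, embedding and scoring need nothing beyond the sentence-transformers library. A short usage sketch follows; the Hub id is a placeholder, since the listing does not name the checkpoint.

```python
from sentence_transformers import SentenceTransformer, util

# "org/model-name" is a placeholder; substitute the model's actual Hub id.
model = SentenceTransformer("org/model-name")

queries = ["How do I reset my password?"]
docs = [
    "To reset your password, open Settings and choose Security.",
    "Our office is closed on public holidays.",
]

# Normalized embeddings make cosine similarity a plain dot product.
q_emb = model.encode(queries, normalize_embeddings=True)
d_emb = model.encode(docs, normalize_embeddings=True)

# (num_queries, num_docs) similarity matrix; higher = more relevant.
scores = util.cos_sim(q_emb, d_emb)
print(scores)
```

With normalized embeddings, the same vectors can be dropped into any dot-product vector index, which is what makes fully local, high-volume semantic search practical.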