twitter-xlm-roberta-base-sentiment (Model 46/100) via “multilingual-sentiment-classification-with-xlm-roberta”
text-classification model. 1,159,018 downloads.
Unique: Specifically fine-tuned on Twitter/social media text using XLM-RoBERTa-base (not generic RoBERTa), enabling superior performance on informal, code-switched, and emoji-rich content across 100+ languages. Achieves this through domain-specific pretraining on 198M tweets rather than generic web text, combined with cross-lingual token sharing that enables zero-shot transfer to unseen languages.
vs others: Outperforms generic multilingual models (mBERT/BERT-base-multilingual-cased, mT5) on social media sentiment thanks to its Twitter-specific fine-tuning, and unlike monolingual sentiment models it needs no per-language model swapping, making it well suited to production systems handling diverse linguistic input.
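A minimal inference sketch with the Hugging Face `transformers` pipeline. The repo id `cardiffnlp/twitter-xlm-roberta-base-sentiment` is an assumption inferred from the model name above (the listing does not show the author), and the `@user`/`http` masking mirrors the preprocessing commonly used with Twitter-pretrained models; verify both against the actual model card.

```python
def preprocess(text: str) -> str:
    """Mask usernames and links, as is common for Twitter-pretrained models."""
    tokens = []
    for t in text.split():
        if t.startswith("@") and len(t) > 1:
            t = "@user"          # anonymize mentions
        elif t.startswith("http"):
            t = "http"           # collapse URLs to a placeholder
        tokens.append(t)
    return " ".join(tokens)


def classify(texts):
    """Run the sentiment pipeline; downloads the model weights on first call."""
    from transformers import pipeline  # deferred import: heavy dependency

    pipe = pipeline(
        "text-classification",
        # Assumed repo id; confirm on the Hugging Face Hub.
        model="cardiffnlp/twitter-xlm-roberta-base-sentiment",
    )
    return pipe([preprocess(t) for t in texts])


# Usage (works on emoji-rich, informal, multilingual input):
# classify(["@alice love this! 🎉 https://t.co/xyz", "C'est nul..."])
```

Deferring the `transformers` import keeps the preprocessing reusable even where the model itself is not installed, and masking mentions/URLs matches how the tweets were likely normalized during pretraining.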