indonesian-roberta-base-posp-tagger — Model 45/100 via "contextual subword token embedding generation for Indonesian text"
Token-classification model. 1,964,909 downloads.
Unique: Embeddings are derived from indonesian-roberta-base, a RoBERTa model pre-trained on Indonesian corpora, rather than generic multilingual models. This means the 768-dimensional space is optimized for Indonesian linguistic structure and vocabulary, capturing Indonesian-specific semantic relationships better than models trained primarily on English.
vs others: Produces more linguistically meaningful Indonesian embeddings than multilingual models (mBERT, XLM-R) because the encoder was pre-trained on Indonesian text, and requires no external embedding service unlike commercial APIs, enabling offline and cost-free inference.
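To turn the encoder's per-subword hidden states into a single sentence vector, a common approach is masked mean pooling over the 768-dimensional token embeddings. The sketch below shows that pooling step with dummy data standing in for the model output; in practice the token embeddings would come from loading the model with Hugging Face `transformers` (`AutoModel` / `AutoTokenizer`), which is assumed here rather than shown, so no download is needed to follow the logic.

```python
import numpy as np

HIDDEN = 768  # hidden size of indonesian-roberta-base

def masked_mean_pool(token_embs: np.ndarray, attention_mask: np.ndarray) -> np.ndarray:
    """Average subword embeddings, ignoring padding positions.

    token_embs:     (seq_len, HIDDEN) last hidden state of the encoder
    attention_mask: (seq_len,) with 1 for real tokens, 0 for padding
    """
    mask = attention_mask[:, None].astype(token_embs.dtype)   # (seq_len, 1)
    summed = (token_embs * mask).sum(axis=0)                  # sum real tokens only
    count = max(mask.sum(), 1.0)                              # avoid divide-by-zero
    return summed / count

# Dummy stand-in for the encoder output: 5 subword tokens, last 2 are padding.
rng = np.random.default_rng(0)
token_embs = rng.standard_normal((5, HIDDEN))
attention_mask = np.array([1, 1, 1, 0, 0])

sentence_vec = masked_mean_pool(token_embs, attention_mask)
print(sentence_vec.shape)  # one 768-dimensional vector per sentence
```

Mean pooling with the attention mask is preferred over a plain `.mean(axis=0)` because padding positions would otherwise dilute the sentence representation.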