vit-large-patch16-384: ImageNet-21k pre-trained image classification with a Vision Transformer architecture
image-classification model by Google. 474,363 downloads.
Unique: Uses a pure transformer architecture (no convolutional layers) with patch-based tokenization, and is pre-trained on ImageNet-21k (~14M images, ~21k classes) rather than ImageNet-1k alone, enabling stronger transfer learning to downstream tasks. Implements multi-head self-attention (16 heads over a 1024-dimensional hidden state); attention cost grows quadratically with sequence length, but splitting a 384×384 input into 16×16 patches keeps the sequence short (577 tokens including the [CLS] token), so the quadratic cost stays tractable.
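The token count and attention cost above follow directly from the model name, and can be checked with plain arithmetic (no model weights needed):

```python
# Patch-tokenization arithmetic for vit-large-patch16-384:
image_size = 384                              # input resolution (pixels per side)
patch_size = 16                               # the "patch16" in the model name
patches_per_side = image_size // patch_size   # 24
num_patches = patches_per_side ** 2           # 576 patch tokens
seq_len = num_patches + 1                     # +1 for the [CLS] token -> 577

# Self-attention scores every token against every other token, so the
# score matrix has seq_len**2 entries per head per layer:
attention_pairs = seq_len ** 2

print(num_patches, seq_len, attention_pairs)  # 576 577 332929
```

At higher resolutions the quadratic term dominates quickly, which is why the patch size and input resolution are fixed parts of the checkpoint name.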
vs others: With ImageNet-21k pre-training and 384×384 fine-tuning, reaches roughly 85% top-1 on ImageNet-1k, ahead of CNN baselines such as ResNet-152 (~78%) and EfficientNet-B7 (~84%) while maintaining comparable inference speed on modern GPUs; transfer learning is stronger than with CNN-based models because self-attention gives a global receptive field from the first layer, but fine-tuning on small datasets typically demands larger batch sizes, stronger regularization, and more training data than CNNs need.
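The "global receptive field from the first layer" claim is a property of self-attention itself: every patch token attends to every other token in layer one, whereas a CNN's first layer only sees a small neighborhood. A minimal single-layer multi-head self-attention sketch in NumPy, with dimensions matching ViT-Large (hidden size 1024, 16 heads); the random weights are illustrative stand-ins, not the checkpoint's parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

seq_len, d_model, n_heads = 577, 1024, 16   # ViT-Large: 576 patches + [CLS]
d_head = d_model // n_heads                 # 64 dims per head

x = rng.standard_normal((seq_len, d_model)).astype(np.float32)
# Random projections as stand-ins for the learned Q/K/V/output weights.
Wq, Wk, Wv, Wo = (rng.standard_normal((d_model, d_model)).astype(np.float32) * 0.02
                  for _ in range(4))

def softmax(a, axis=-1):
    a = a - a.max(axis=axis, keepdims=True)
    e = np.exp(a)
    return e / e.sum(axis=axis, keepdims=True)

def split_heads(w):
    # Project, then reshape to (n_heads, seq_len, d_head).
    return (x @ w).reshape(seq_len, n_heads, d_head).transpose(1, 0, 2)

q, k, v = split_heads(Wq), split_heads(Wk), split_heads(Wv)
# Scaled dot-product attention: each of the 577 tokens scores against all
# 577 tokens, so information mixes globally in a single layer.
attn = softmax(q @ k.transpose(0, 2, 1) / np.sqrt(d_head))
out = (attn @ v).transpose(1, 0, 2).reshape(seq_len, d_model) @ Wo

print(attn.shape, out.shape)  # (16, 577, 577) (577, 1024)
```

Each row of `attn` is a probability distribution over all 577 tokens, which is exactly what "global receptive field" means here: no token is out of reach of any other, even at depth one.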