rorshark-vit-base — “multi-head self-attention over image patches with a 12-layer transformer encoder”
Image-classification model. 620,550 downloads.
Unique: Uses 12 parallel attention heads, each projecting into its own 64-dimensional subspace (12 × 64 = 768 total dimensions), so the model can learn multiple kinds of spatial relationships simultaneously (e.g., one head attending to object boundaries, another to texture patterns). Because each head computes its attention weights independently, diverse attention patterns emerge without additional architectural constraints.
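The head arithmetic above (12 heads × 64 dims = 768) can be sketched as a minimal multi-head self-attention in NumPy. This is an illustrative reimplementation, not the model's actual code; the patch count of 197 assumes a 224×224 input split into 16×16 patches plus a [CLS] token, and all weights are random stand-ins.

```python
import numpy as np

def multi_head_self_attention(x, w_q, w_k, w_v, w_o, num_heads=12):
    """Minimal multi-head self-attention sketch (no masking, no dropout).

    x: (num_tokens, embed_dim) patch embeddings, embed_dim = 768.
    Each of the 12 heads attends within its own 64-dim subspace.
    """
    n, d = x.shape
    head_dim = d // num_heads                        # 768 // 12 = 64
    q, k, v = x @ w_q, x @ w_k, x @ w_v              # (n, 768) each
    # Split the 768-dim projections into 12 independent 64-dim heads.
    q = q.reshape(n, num_heads, head_dim).transpose(1, 0, 2)  # (12, n, 64)
    k = k.reshape(n, num_heads, head_dim).transpose(1, 0, 2)
    v = v.reshape(n, num_heads, head_dim).transpose(1, 0, 2)
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(head_dim)     # (12, n, n)
    scores -= scores.max(axis=-1, keepdims=True)              # softmax stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)                  # per-head weights
    out = attn @ v                                            # (12, n, 64)
    out = out.transpose(1, 0, 2).reshape(n, d)                # concat heads -> (n, 768)
    return out @ w_o, attn

rng = np.random.default_rng(0)
d = 768
n = 197  # 196 patches (14x14 grid for a 224x224 image) + 1 [CLS] token
x = rng.standard_normal((n, d)) * 0.02
w_q, w_k, w_v, w_o = (rng.standard_normal((d, d)) * 0.02 for _ in range(4))
y, attn = multi_head_self_attention(x, w_q, w_k, w_v, w_o)
print(y.shape, attn.shape)  # (197, 768) (12, 197, 197)
```

Note that the per-head attention maps are the `(12, 197, 197)` tensor: each head produces its own full token-to-token weighting, which is exactly what lets different heads specialize.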
vs others: More interpretable than CNN feature maps, because the attention weights directly show which patches influence a prediction, whereas CNN receptive fields are implicit and hard to visualize. Self-attention also provides global context from the first layer (unlike CNNs, which grow their receptive fields gradually over depth), which helps on tasks requiring scene-level understanding.
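The interpretability claim can be made concrete: averaging the per-head attention maps and reading the [CLS] row gives a per-patch influence score. A sketch under the same assumptions as above (random weights standing in for a real layer's output, 14×14 patch grid):

```python
import numpy as np

# Hypothetical per-head attention weights from the last encoder layer:
# shape (num_heads, num_tokens, num_tokens), each row sums to 1.
rng = np.random.default_rng(1)
num_heads, n = 12, 197
logits = rng.standard_normal((num_heads, n, n))
attn = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)

# Average over heads, then read the [CLS] row (token 0): how strongly each
# image patch contributes to the classification token.
cls_attn = attn.mean(axis=0)[0, 1:]      # drop [CLS]->[CLS] term; (196,)
saliency = cls_attn.reshape(14, 14)      # back onto the 14x14 patch grid
top = np.argsort(cls_attn)[-3:][::-1]    # three most influential patches
print(saliency.shape, top)
```

Overlaying `saliency` on the input image is the standard way to visualize which regions drove the prediction; no gradient-based attribution is needed, since the weights are computed in the forward pass.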