mobilenetv3_small_100.lamb_in1k (Model 51/100 via "transfer-learning-backbone-extraction")
Image-classification model. 17,499,725 downloads.
Unique: MobileNetV3-Small's inverted-residual blocks with squeeze-and-excitation (SE) modules yield a feature pyramid with relatively strong semantics even at shallow depths, so the backbone transfers well with minimal fine-tuning. Depthwise-separable convolutions keep the backbone's parameter count low, leaving capacity for task-specific heads. timm's model registry exposes consistent access patterns: the stages are available as model.blocks and the pooling layer as model.global_pool, and creating the model with features_only=True returns the intermediate feature maps directly.
vs others: The backbone has roughly 10× fewer parameters than a ResNet-50 backbone (≈2.5M vs ≈25.6M) while remaining competitive in transfer-learning accuracy, which translates into faster adaptation on edge devices and a lower memory footprint during training.