convnext_femto.d1_in1k — Model 39/100 via "fine-tuning on custom image classification datasets with transfer learning"
image-classification model (timm). 498,269 downloads.
Unique: ConvNeXt's modern design (LayerNorm, GELU, depthwise convolutions) makes it more stable to fine-tune than ResNet: LayerNorm does not depend on batch statistics, so there is less need for careful batch-size selection. The Femto variant's small size (~5M parameters) means fine-tuning is fast — hours on a single GPU rather than days for larger models — enabling rapid experimentation and iteration.
vs others: Requires fewer labeled examples than ViT-Tiny for equivalent downstream accuracy, thanks to the CNN inductive bias; fine-tunes faster than the larger ConvNeXt variants (Small, Base) while keeping competitive accuracy; fine-tunes more stably than MobileNetV3 due to its modern normalization.