oneformer_ade20k_swin_large (Model 41/100, via "panoptic-segmentation-stuff-things-unification")
image-segmentation model. 102,623 downloads.
Unique: Generates panoptic outputs by decoding both semantic and instance predictions from shared transformer features, then merging them with a simple rule: each stuff class gets a single segment ID, while thing classes keep the instance IDs produced by the instance decoder. This unified approach avoids maintaining separate post-processing pipelines for the two task types.
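The merge rule above can be sketched in a few lines of NumPy. This is an illustrative simplification under assumed inputs (a per-pixel semantic class map, a per-pixel instance ID map, and a set of "thing" class IDs); the names `merge_panoptic`, `OFFSET`, and `thing_classes` are hypothetical, and real OneFormer post-processing additionally handles mask confidences and overlap resolution.

```python
import numpy as np

def merge_panoptic(semantic, instance, thing_classes):
    """Merge semantic and instance maps into per-pixel panoptic IDs.

    Encodes each pixel as class_id * OFFSET + segment_id, a common
    panoptic ID scheme. Inputs and the encoding are assumptions for
    illustration, not the exact OneFormer implementation.
    """
    OFFSET = 1000  # large enough to keep class and segment IDs disjoint
    panoptic = np.zeros_like(semantic, dtype=np.int64)
    for cls in np.unique(semantic):
        mask = semantic == cls
        if cls in thing_classes:
            # Thing class: keep the instance decoder's per-segment IDs.
            panoptic[mask] = cls * OFFSET + instance[mask]
        else:
            # Stuff class: one segment ID (0) for the whole class.
            panoptic[mask] = cls * OFFSET
    return panoptic

# Tiny example: class 1 is a "thing" with two instances, class 2 is "stuff".
semantic = np.array([[1, 1], [2, 2]])
instance = np.array([[1, 2], [0, 0]])
print(merge_panoptic(semantic, instance, thing_classes={1}))
# → [[1001 1002]
#    [2000 2000]]
```

Note that both thing instances stay distinct (1001 vs 1002), while the stuff region collapses to a single segment (2000), which is exactly the behavior the description attributes to the unified decoder.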
vs others: Achieves 52.3 PQ on ADE20K, outperforming Mask2Former (51.9 PQ) and DeepLabV3+/Mask R-CNN ensembles (50.2 PQ) due to joint optimization of semantic and instance tasks. However, panoptic-specific models (e.g., Panoptic FPN) can achieve comparable PQ with simpler architectures if multi-task flexibility is not required.