Capability
Multi Task Vision Model With Shared Representation
19 artifacts provide this capability.
via “cross-task knowledge transfer through shared representations”
Microsoft's unified model for diverse vision tasks.
Unique: Achieves knowledge transfer across 6+ vision tasks through a single unified seq2seq architecture; shared visual-encoder and decoder parameters enable cross-task learning without task-specific branches or ensemble methods.
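The unified design above can be sketched as follows. This is a hypothetical toy illustration, not Microsoft's actual implementation: it assumes a prompt-conditioned interface (the `SharedSeq2Seq` class, the `<CAPTION>`/`<DETECT>` prompt tokens, and the toy weights are all invented for illustration), showing how every task can flow through one shared encoder and decoder with the task selected by a prompt rather than a task-specific head.

```python
# Hypothetical sketch of a prompt-conditioned multi-task seq2seq model.
# Names, prompts, and weights are illustrative, not the real architecture.

class SharedSeq2Seq:
    """Toy unified model: all tasks share one parameter set."""

    def __init__(self):
        # Single parameter set shared by every task (no per-task branches).
        self.shared_weights = {"encoder": 0.5, "decoder": 2.0}

    def encode(self, image):
        # One shared visual encoding, reused by all tasks.
        return sum(image) * self.shared_weights["encoder"]

    def decode(self, features, task_prompt):
        # The decoder conditions on a task prompt token, not a task head.
        return f"{task_prompt}:{features * self.shared_weights['decoder']:.1f}"

    def run(self, image, task_prompt):
        return self.decode(self.encode(image), task_prompt)


model = SharedSeq2Seq()
image = [0.2, 0.4, 0.4]  # stand-in for pixel features

# Different tasks, same weights: only the prompt changes.
print(model.run(image, "<CAPTION>"))  # → <CAPTION>:1.0
print(model.run(image, "<DETECT>"))   # → <DETECT>:1.0
```

Because the two calls differ only in the prompt token, any gradient update to the shared weights affects every task, which is the mechanism behind the cross-task transfer described above.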
vs others: Outperforms task-specific models in low-data scenarios through knowledge transfer, though peak performance on high-data tasks is 5-10% below that of specialized models.