Capability
Model Interpretability Through Attention Visualization
20 artifacts provide this capability.
Top Matches
via “attention visualization and interpretability analysis”
Fill-mask model. 60,675,227 downloads.
Unique: Native support for attention outputs via the output_attentions=True flag gives direct access to all 144 attention matrices (12 layers × 12 heads) without custom extraction code; integrates with BertViz for interactive visualization
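As a minimal sketch of the `output_attentions=True` flow described above: the listing does not name the model, but 12 layers × 12 heads matches a BERT-base configuration, so this example assumes a `BertModel` built from a default `BertConfig` (random weights, no download) just to demonstrate the shapes of the returned attention tensors.

```python
import torch
from transformers import BertConfig, BertModel

# Assumption: a BERT-base-shaped model (12 layers x 12 heads); built from
# config with random weights so no checkpoint download is needed.
config = BertConfig(num_hidden_layers=12, num_attention_heads=12)
model = BertModel(config)
model.eval()

# Dummy batch: 1 sequence of 8 token ids.
input_ids = torch.randint(0, config.vocab_size, (1, 8))
with torch.no_grad():
    outputs = model(input_ids, output_attentions=True)

# outputs.attentions is a tuple with one tensor per layer,
# each shaped (batch, num_heads, seq_len, seq_len).
attentions = outputs.attentions
num_matrices = len(attentions) * attentions[0].shape[1]
print(len(attentions))        # 12 layers
print(tuple(attentions[0].shape))  # (1, 12, 8, 8)
print(num_matrices)           # 144 attention matrices
```

The same `attentions` tuple is what BertViz consumes for its interactive head and model views.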
vs others: More granular than black-box explanation methods such as LIME or SHAP because it exposes the model's internals directly, though less actionable than gradient-based attribution methods for ranking which inputs drive a prediction