Capability
Model Explainability And Feature Importance Analysis
20 artifacts provide this capability.
Top Matches
via “model interpretation and feature importance analysis”
FastAI: High-level deep learning with built-in best practices.
Unique: Integrates multiple interpretation methods (permutation importance, SHAP, saliency maps, LRP) in a unified API that works with trained Learner objects, eliminating the need to export models to separate interpretation libraries.
vs others: More integrated than SHAP or LIME because it is built into the FastAI ecosystem; more accessible than raw PyTorch gradient computation because visualization and interpretation are handled automatically.
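As a rough illustration of the Learner-centred workflow described above, the sketch below trains a small image classifier and inspects it without leaving FastAI. The dataset (Oxford-IIIT Pets), architecture (resnet34), and one-epoch fine-tune are illustrative assumptions, not part of this listing; only the built-in ClassificationInterpretation helpers are shown, since the other methods named above (SHAP, LRP, permutation importance) typically come from companion packages or custom gradient code.

# Minimal sketch: train a classifier, then interpret it directly from the Learner.
# Dataset, architecture, and training length are illustrative assumptions.
from fastai.vision.all import *

path = untar_data(URLs.PETS) / 'images'

def is_cat(fname):
    # In the Pets dataset, cat-breed filenames start with an uppercase letter.
    return fname[0].isupper()

dls = ImageDataLoaders.from_name_func(
    path, get_image_files(path), valid_pct=0.2, seed=42,
    label_func=is_cat, item_tfms=Resize(224))

learn = vision_learner(dls, resnet34, metrics=error_rate)
learn.fine_tune(1)

# Interpretation consumes the trained Learner itself -- no export step.
interp = ClassificationInterpretation.from_learner(learn)
interp.plot_confusion_matrix()               # per-class error structure
interp.plot_top_losses(9, figsize=(10, 10))  # worst predictions, with losses
interp.most_confused(min_val=2)              # most frequent confusion pairs

The point made in the "vs others" note is visible here: interpretation reuses the same Learner and DataLoaders used for training, so no model export or re-wrapping is required before plots are produced.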