Capability
Efficient Sparse Inference With Mixture Of Experts
20 artifacts provide this capability.
Top Matches
via “efficient sparse inference with selective expert activation”
Snowflake's 480B MoE model for enterprise data tasks.
Unique: Hybrid dense-MoE architecture (a 10B dense transformer combined with 128 experts, with roughly 17B parameters active per token). Selective expert activation cuts inference cost relative to dense models of comparable scale while retaining the enterprise-task optimization (SQL, code) that generic sparse models lack; see the routing sketch below.
vs others: More efficient at inference than dense 70B+ models thanks to sparse activation (17B vs. 70B+ active parameters per token), and more specialized than general-purpose MoE models such as Mixtral, which lack enterprise SQL/code optimization.
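A minimal sketch of the selective-activation idea behind this listing: a router scores each token and only the top-k experts run for it, so per-token compute scales with the active parameters rather than the total. This is illustrative only; the SparseMoELayer class, the hyperparameters (8 experts, top-2 routing, d_model=64), and the per-expert MLPs are placeholder assumptions, not Arctic's actual configuration or code.

```python
# Illustrative top-k sparse MoE routing in PyTorch (not Arctic's implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMoELayer(nn.Module):
    def __init__(self, d_model: int, num_experts: int, top_k: int):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, num_experts)  # scores each token against every expert
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                           nn.Linear(4 * d_model, d_model))
             for _ in range(num_experts)]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (num_tokens, d_model)
        gate_logits = self.router(x)                            # (num_tokens, num_experts)
        weights, chosen = gate_logits.topk(self.top_k, dim=-1)  # keep only top-k experts per token
        weights = F.softmax(weights, dim=-1)

        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e in range(len(self.experts)):
                mask = chosen[:, slot] == e                     # tokens routed to expert e in this slot
                if mask.any():
                    # only these tokens pay for expert e's FLOPs
                    out[mask] += weights[mask, slot].unsqueeze(-1) * self.experts[e](x[mask])
        return out

if __name__ == "__main__":
    layer = SparseMoELayer(d_model=64, num_experts=8, top_k=2)
    tokens = torch.randn(16, 64)
    print(layer(tokens).shape)  # torch.Size([16, 64]); each token touched only 2 of 8 experts
```

The efficiency claim above follows from this structure: total parameter count grows with the number of experts, but each token's forward pass only runs the router plus its selected experts, which is why a 480B-total model can cost on the order of 17B active parameters per token.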