Capability
Encoder-Decoder Code Generation With Instruction Tuning
9 artifacts provide this capability.
Top Matches
Provided via “instruction-following code generation with fine-tuned response formatting”
DeepSeek's 236B MoE model specialized for code.
Unique: Instruction-tuned variants (Instruct models) are fine-tuned on instruction-response pairs to follow user specifications precisely, while retaining the sparse MoE architecture and 128K context window of the base models (see the sketch below).
vs others: Offers instruction-following comparable to GPT-4-Turbo while remaining open-source and locally deployable, with explicit control over fine-tuning data that proprietary models do not provide.
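To ground the "deployable locally" claim, here is a minimal sketch of prompting an Instruct variant with Hugging Face transformers. It is an illustration under assumptions, not this listing's official usage: the repo ID deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct, the dtype, and the prompt are placeholders, and the smaller Lite variant stands in because the full 236B MoE needs a multi-GPU node.

```python
# Sketch: local inference with an instruction-tuned code model.
# The repo ID below is an assumption; substitute the artifact you deploy.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct"  # assumed repo ID

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # halves memory vs fp32
    device_map="auto",           # spreads layers across available GPUs
    trust_remote_code=True,
)

# The instruction-response formatting the Instruct variant was fine-tuned
# on is applied by its chat template, so the prompt is plain text.
messages = [
    {"role": "user",
     "content": "Write a Python function that checks whether a string is a palindrome."}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=False)
# Strip the echoed prompt and print only the generated response.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

The chat template is the practical difference between base and Instruct variants here: the same weights-loading code works for both, but only the instruction-tuned model reliably returns a direct, well-formatted answer to the request.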