Capability
Long Context Text Generation With 200k Token Window
20 artifacts provide this capability.
Top Matches
via “long-context text generation with 128k token window”
Largest open-weight model at 405B parameters.
Unique: At 405B parameters with a 128K-token context window, this is the largest open-weight model released. A transformer architecture trained on 15+ trillion tokens enables document-length reasoning without the context truncation that smaller-window models require.
vs others: Offers a larger context window than most open-source alternatives (Mistral, Llama 2) and matches GPT-4o's 128K window, while remaining fully open-weight and deployable on-premises.
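The truncation trade-off above can be sketched in a few lines. This is a minimal illustration, assuming a rough 4-characters-per-token heuristic and hypothetical window sizes; real tokenizers and serving stacks will differ.

```python
# Sketch: does a document fit a model's context window, or must it be truncated?
# The 4-chars-per-token estimate and the window sizes below are illustrative
# assumptions, not exact tokenizer behavior.

def estimate_tokens(text: str) -> int:
    """Rough token count: ~4 characters per token (common heuristic)."""
    return max(1, len(text) // 4)

def fits_context(text: str, window: int, reserve: int = 1024) -> bool:
    """True if the prompt plus a generation reserve fits inside the window."""
    return estimate_tokens(text) + reserve <= window

doc = "word " * 100_000               # ~500k characters, ~125k estimated tokens
print(fits_context(doc, 128_000))     # 128K-window model: fits whole
print(fits_context(doc, 8_192))       # smaller window: would need truncation
```

A model with a 128K window can take the whole document in one pass; a smaller-window model forces chunking or truncation, losing cross-document context.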