Ollama – v0.13.1-rc1: model: ministral w/ llama4 scaling (#13292)
Ollama v0.13.1-rc1 just dropped, and `ministral` is the star of this release.
- **Llama 4-style RoPE scaling**: Ministral's context handling gets a serious upgrade. Longer prompts, smoother reasoning, no more stuttering at 8K+ tokens.
- **New parser for reasoning and tool calls**: Say goodbye to messy JSON parsing. Ministral now reliably outputs structured reasoning steps and function calls, which is exactly what agents, RAG pipelines, and automation workflows need.
- **Fixed RoPE scaling in the converter**: An under-the-hood fix that keeps models stable when scaling context windows. No more weird token drift.
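To see why structured tool calls matter, here is a minimal sketch of consuming them from client code. It assumes a response in the general shape Ollama's `/api/chat` endpoint uses for tool calls (`message.tool_calls` with a `function` object whose `arguments` is already a JSON object, not a string); the `get_weather` tool and the sample payload are purely illustrative.

```python
import json

# Illustrative sample of a chat response carrying a tool call.
# The tool name and arguments are invented for this example.
raw = """
{
  "model": "ministral",
  "message": {
    "role": "assistant",
    "content": "",
    "tool_calls": [
      {"function": {"name": "get_weather", "arguments": {"city": "Paris"}}}
    ]
  },
  "done": true
}
"""

def extract_tool_calls(response: dict) -> list[tuple[str, dict]]:
    """Return (function_name, arguments) pairs from a chat response.

    Because the model emits structured calls, there is no regexing of
    free-form text: we just walk the message's tool_calls list.
    """
    calls = response.get("message", {}).get("tool_calls", [])
    return [(c["function"]["name"], c["function"]["arguments"]) for c in calls]

response = json.loads(raw)
for name, args in extract_tool_calls(response):
    # Dispatch point: look up `name` in your own tool registry and
    # invoke it with `args`.
    print(f"{name}({args})")
```

The point of the new parser is that `arguments` arrives as parsed JSON rather than text you have to scrape, so the dispatch loop above stays this small.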
This isn't just a patch. If you're building agents or need clean tool calling from a local model, ministral just moved to the top of your list.
Grab it with `ollama pull ministral` and watch your agents think smarter.
