Ollama – v0.13.1-rc1: model: ministral w/ llama4 scaling (#13292)

🚀 Ollama v0.13.1-rc1 just dropped — and `ministral` is now a powerhouse!

🦙 Llama 4-style RoPE scaling — Ministral's context handling just got a turbo upgrade. Longer prompts, smoother reasoning, and more stable behavior past 8K tokens.

🧠 New parser for reasoning & tool calls — Say goodbye to messy JSON parsing. Ministral now reliably outputs structured reasoning steps and function calls — perfect for agents, RAG pipelines, or automation workflows.
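To see why structured tool calls matter, here's a minimal dispatch sketch. The response payload below is illustrative, modeled on the `tool_calls` shape Ollama's chat API returns (`message.tool_calls`, each with a `function.name` and parsed `function.arguments`); the `get_weather` tool is a hypothetical stand-in for your own functions.

```python
# Illustrative tool-call payload, modeled on the message.tool_calls shape
# from Ollama's chat API -- the exact field layout is an assumption here.
response = {
    "message": {
        "role": "assistant",
        "content": "",
        "tool_calls": [
            {"function": {"name": "get_weather",
                          "arguments": {"city": "Paris"}}}
        ],
    }
}

# Hypothetical local tool the agent can dispatch to.
def get_weather(city: str) -> str:
    return f"Sunny in {city}"

TOOLS = {"get_weather": get_weather}

def dispatch(resp: dict) -> list[str]:
    """Run each structured tool call and collect the results."""
    results = []
    for call in resp["message"].get("tool_calls", []):
        fn = TOOLS[call["function"]["name"]]
        results.append(fn(**call["function"]["arguments"]))
    return results

print(dispatch(response))
```

Because the model emits structured calls instead of JSON buried in free text, the dispatch loop stays a few lines: no regex extraction, no brittle string parsing.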

🔧 Fixed RoPE scaling in the converter — under-the-hood fixes keep your models stable when scaling context windows. No more weird token drift.

This isn’t just a patch — it’s the quiet revolution local LLMs have been waiting for. If you’re building agents or need clean tool calling, ministral just moved to the top of your list.

Grab it: `ollama pull ministral` and watch your agents think smarter. 🛠️
