Ollama – v0.23.0

Ollama v0.23.0 is officially live! 🚀

If you aren’t running Ollama yet, you are missing out on the gold standard for local LLM orchestration. It’s the ultimate toolkit for pulling and running heavy hitters like Llama 3, DeepSeek-R1, and Mistral directly on your hardware—no cloud subscriptions or API keys needed.
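If you want to go beyond the CLI, every Ollama install also serves a REST API on `localhost:11434`. Here's a minimal sketch of calling the `/api/generate` endpoint from Python — the model name and prompt are illustrative, and it assumes the server is running and the model has already been pulled (e.g. `ollama pull llama3`):

```python
import json
import urllib.request

# Ollama's default local endpoint for one-shot text generation.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a non-streaming generate request for Ollama's local REST API."""
    payload = {"model": model, "prompt": prompt, "stream": False}
    return urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

def generate(model: str, prompt: str) -> str:
    """Send a prompt to the locally running Ollama server and return its reply."""
    with urllib.request.urlopen(build_request(model, prompt)) as resp:
        return json.loads(resp.read())["response"]

# Example (requires a running Ollama server with llama3 pulled):
# print(generate("llama3", "Why is the sky blue?"))
```

No API keys, no cloud round-trips — the request never leaves your machine.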

The team is moving at lightning speed, and this latest update brings some great refinements to your local workflow:

  • Claude-style Integration: This release lays backend groundwork to support Claude-style application structures, making it easier to wire sophisticated prompting patterns into your local setups.
  • Enhanced Stability: A major focus of this version is refining the launch process for new model types, so that pulling a fresh architecture runs without a hitch.

Whether you’re building a private RAG pipeline or just experimenting with the latest open-source weights, this update keeps your local inference engine rock solid. 🛠️
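For that private RAG pipeline, the same local API covers both halves of the job: an embeddings endpoint for retrieval and the generate endpoint for the grounded answer. A rough sketch, assuming a running Ollama server and hypothetical model choices (`nomic-embed-text` for embeddings, `llama3` for generation):

```python
import json
import math
import urllib.request

# Default local Ollama endpoints (embeddings endpoint takes a "prompt" field).
EMBED_URL = "http://localhost:11434/api/embeddings"
GEN_URL = "http://localhost:11434/api/generate"

def _post(url: str, payload: dict) -> dict:
    """POST a JSON payload to the local Ollama server and decode the reply."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

def embed(model: str, text: str) -> list[float]:
    """Embed text with a local embedding model (e.g. nomic-embed-text)."""
    return _post(EMBED_URL, {"model": model, "prompt": text})["embedding"]

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def answer(question: str, docs: list[str],
           embed_model: str = "nomic-embed-text",
           gen_model: str = "llama3") -> str:
    """Retrieve the most similar document, then answer grounded in it."""
    q_vec = embed(embed_model, question)
    best = max(docs, key=lambda d: cosine(q_vec, embed(embed_model, d)))
    prompt = f"Answer using only this context:\n{best}\n\nQuestion: {question}"
    return _post(GEN_URL, {"model": gen_model, "prompt": prompt,
                           "stream": False})["response"]
```

For anything beyond a handful of documents you'd precompute and cache the document embeddings rather than re-embedding on every query, but the retrieval-then-generate shape stays the same.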

🔗 View Release