Ollama v0.30.0-rc13 🛠️

If you’re running local LLMs, you know Ollama is the go-to for getting models like Llama 3 and DeepSeek-R1 up and running with zero friction. This latest release candidate is a focused update aimed at keeping your local inference engine sharp!

What’s new:

  • llama.cpp Update: The big news here is an update to the underlying `llama.cpp` backend. Since Ollama relies on it for all the heavy lifting, backend bumps like this typically bring performance tweaks, improved memory management, and better support for newer quantization methods. 🚀

Keep an eye on this one as it rolls out—backend refinements like this are exactly what we need to keep those local chats feeling snappy and efficient!
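If you want to kick the tires once the release candidate lands, a quick sanity check looks something like the sketch below. It assumes the standard Ollama CLI is installed and uses `llama3` purely as an example model name; swap in whatever model you run locally.

```shell
# Confirm which Ollama version is installed (should report 0.30.0-rc13 once updated)
ollama --version

# Pull an example model and fire off a one-shot prompt to verify inference works
# (llama3 here is just an illustrative model tag)
ollama pull llama3
ollama run llama3 "Say hello in five words."
```

If inference feels faster or memory usage drops after updating, that is the backend refresh doing its job.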

🔗 View Release