Ollama – v0.22.1

Ollama just dropped v0.22.1, and it’s a quick but tasty update for anyone running local LLMs! 🥔

If you haven’t tried Ollama yet, it’s a toolkit for running powerful open models like Llama 3, DeepSeek-R1, and Mistral directly on your own hardware, no cloud subscription required. It handles the heavy lifting of downloading and configuring models so you can focus on building.
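That “pull, then run” workflow is basically the whole interface. A minimal local session might look like this — assuming Ollama is already installed and using `llama3` as an example model tag (check the model library for the tags you actually want):

```shell
# Download the model weights and config (one-time, cached locally)
ollama pull llama3

# Chat with it right from the terminal -- no cloud API key needed
ollama run llama3 "Explain quantization in one sentence."
```

Everything downloads once and runs offline afterwards, which is the core appeal.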

Here is what’s new in this release:

  • Gemma 4 Support: The headline change is an updated renderer optimized for Gemma 4. This means that when you pull Google’s latest lightweight model, its prompt formatting and output are handled correctly by the Ollama backend.

If you’ve been waiting to experiment with the newest Gemma weights, now is the time to pull that update and get tinkering! 🛠️
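Getting there is a two-step affair: upgrade Ollama itself, then pull the new weights. A sketch, assuming the official install script for Linux/macOS and a hypothetical `gemma4` model tag (verify the real tag name in the model library before pulling):

```shell
# Upgrade Ollama to the latest release (official install script, Linux/macOS)
curl -fsSL https://ollama.com/install.sh | sh

# Fetch the new Gemma weights -- the `gemma4` tag here is an assumption;
# check the Ollama model library for the actual published tag
ollama pull gemma4
```

On macOS and Windows the desktop app can also update itself from its own menu, so the script is only needed for manual or server installs.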

🔗 View Release