Ollama – v0.20.4-rc1: gemma4: add missing file (#15394)
Ollama v0.20.4-rc1 is here!
If you’ve been trying to run Gemma 4 locally and hitting unexpected errors, this release candidate is exactly what you need to get back up and running smoothly. Ollama remains the premier tool for democratizing LLM access, allowing you to spin up models like Llama 3, DeepSeek-R1, and Mistral directly on your hardware without relying on the cloud.
What's new in this release:
- Full Gemma 4 Support: This update resolves a critical issue by adding a missing file required for seamless Gemma 4 integration.
- Essential Bug Fix: The patch corrects an accidental omission from a previous pull request (#15378), ensuring that the model files are correctly recognized by the framework.
This is a quick but vital fix to ensure your local AI environment stays stable and capable of running the latest model architectures. Grab it and start tinkering!
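If you want to try the fix, a minimal sketch of pulling and running a model locally looks like the following. The model tag shown here is illustrative only; check `ollama list` and the model library for the exact tag available in your install.

```shell
#!/bin/sh
# Sketch: pull and run a model with the Ollama CLI.
# The tag "gemma3" is an illustrative placeholder, not confirmed by this post.
if command -v ollama >/dev/null 2>&1; then
  ollama pull gemma3                 # download the model weights locally
  ollama run gemma3 "Say hello."     # run a one-shot prompt against it
else
  echo "ollama not installed; see https://ollama.com for install instructions"
fi
```

`ollama pull` and `ollama run` are the standard commands for fetching and chatting with a model; everything runs on your own hardware, so no API key or cloud account is needed.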
