Ollama – v0.16.0-rc1
Ollama v0.16.0-rc1 is here!
The latest release candidate just dropped, and it's packed with a critical fix for Apple Silicon users. Here's what's new:
Bug fix: non-MLX model loading restored
If you're on macOS with Apple Silicon and built Ollama with MLX support, this release fixes a regression where standard (non-MLX) models would fail to load.
Now you can seamlessly mix MLX-optimized and standard GGUF models, with no more swapping builds!
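Whichever format a model uses, clients talk to it through the same local REST API. A minimal sketch of that, assuming a server at the default http://localhost:11434 (the model name "llama3.2" is a placeholder for whatever you have pulled):

```python
# Minimal sketch: talking to a local Ollama server over its REST API.
# Assumes the default endpoint http://localhost:11434; "llama3.2" below
# is a placeholder model name, not something this release ships.
import json
import urllib.request

BASE = "http://localhost:11434"


def list_models(base: str = BASE) -> list[str]:
    """Return the names of locally installed models via GET /api/tags."""
    with urllib.request.urlopen(f"{base}/api/tags") as resp:
        data = json.load(resp)
    return [m["name"] for m in data.get("models", [])]


def generate_request(model: str, prompt: str) -> bytes:
    """Build the JSON body for a non-streaming POST /api/generate call."""
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()


if __name__ == "__main__":
    try:
        print(list_models())  # same call regardless of MLX vs. GGUF backend
    except OSError:
        print("Ollama server not reachable; is `ollama serve` running?")
```

The point of the fix is that a single build now serves both kinds of models, so code like this needs no format-specific branches.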
Why it matters:
This improves flexibility and stability for developers experimenting with different model formats on M1/M2/M3 Macs, which is especially important as GGUF adoption grows.
Note: This is a release candidate (v0.16.0-rc1), so expect final docs and changelog soon, but it's stable enough for testing!
Grab it on GitHub: github.com/ollama/ollama/releases
Let us know how it runs!
