Ollama – v0.16.0-rc1

🚀 Ollama v0.16.0-rc1 is here!

The latest release candidate is out, and it brings a critical fix for Apple Silicon users. Here's what's new:

🔹 Bug Fix: Non-MLX model loading restored

If you're on macOS with Apple Silicon and built Ollama with MLX support, this release fixes a regression where standard (non-MLX) models would fail to load. 🛠️

→ Now you can seamlessly mix MLX-optimized and standard GGUF models, with no more swapping builds!

💡 Why it matters:

This improves flexibility and stability for developers experimenting with different model formats on M1/M2/M3 Macs, which is especially important as GGUF adoption grows.

📌 Note: This is a release candidate (v0.16.0-rc1), so expect final docs and a changelog soon, but it's stable enough for testing!

👉 Grab it on GitHub: github.com/ollama/ollama/releases

Let us know how it runs! 🧪💻
