Ollama – v0.16.0-rc0
🚨 Ollama v0.16.0-rc0 is out — and it’s packing a sneaky but super important fix! 🚨
🔥 What’s new?
✅ Bug fix for mixed-model loading: If you've ever built Ollama with Apple's MLX support (MLX is Apple's machine-learning framework for Apple silicon; it's not the same thing as Metal Performance Shaders), you might've hit a wall trying to load non-MLX models, i.e. standard GGUF models that run on the default llama.cpp backend (CPU or Metal). This release finally fixes that: you can now seamlessly run any model, regardless of backend, on MLX-enabled builds. 🧩💻
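Want to sanity-check the fix yourself? Here's a minimal sketch using the official ollama Python client (`pip install ollama`). The model names are placeholders, not real tags; swap in one MLX model and one standard GGUF model you've already pulled:

```python
# Minimal sketch of the scenario the fix unblocks: loading models from
# different backends back-to-back on an MLX-enabled build.
from ollama import chat

# Hypothetical names -- replace with models you actually have locally.
MODELS = ["your-mlx-model", "your-gguf-model"]

for model in MODELS:
    response = chat(
        model=model,
        messages=[{"role": "user", "content": "Reply with one short sentence."}],
    )
    # Both models should now load and answer, whichever backend they use.
    print(f"{model}: {response['message']['content']}")
```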
🔍 Why it matters:
- More flexibility for Apple Silicon users (M1/M2/M3) who want to experiment across model types (see the quick status check after this list).
- Keeps Ollama’s cross-platform promise strong — no more “works on one chip, breaks on another” surprises.
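To confirm what's actually resident after a run, the Python client's ps() helper mirrors the `ollama ps` CLI command. A quick sketch, assuming a reasonably recent `ollama` package where ps() returns a response with a `.models` list:

```python
# Quick check of which models are currently loaded by the Ollama server.
import ollama

for m in ollama.ps().models:
    print(m.model)  # name/tag of each loaded model
```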
📦 Still missing?
Full release notes aren't published yet, but you can grab the RC and watch for the changelog on the official GitHub releases page: https://github.com/ollama/ollama/releases
💡 Pro tip: This is a release candidate — great for testing, but maybe hold off on production upgrades until the stable drop.
Let us know if you've tried it, or what features you're hoping will make the final cut! 🧠✨
