Ollama – v0.14.0-rc11
🚀 Ollama v0.14.0-rc11 just dropped. Apple Silicon users, this one's for you! 🍏⚡
MLX now ships with OpenBLAS built in, so inference on M-series Macs is smoother, faster, and genuinely plug-and-play. No more dependency hell: just `ollama run llama3` and go.
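Want to kick the tires? A minimal smoke test might look like this (a sketch assuming Ollama's default local server on port 11434; `llama3` stands in for whatever model you've pulled):

```sh
# Pull the weights once, then chat interactively
ollama pull llama3
ollama run llama3

# Or query the local REST API directly
curl http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
```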
Also in this build:
- Smaller, leaner macOS packages
- Fewer “why isn’t this working?” crashes
- Stability improvements as the final release approaches
Perfect for devs running local LLMs on a Mac: quietly powerful, seriously convenient. 🛠️💻
Final v0.14 is coming… and it’s gonna be good.
