Ollama – v0.14.0

Ollama v0.14.0 is live 🚀, and Mac users, this one's for you!

Apple Silicon just got a whole lot smoother: OpenBLAS is now bundled with the MLX backend. No more `brew install openblas` headaches. Just install, pull your favorite model (Llama 3? Mistral?), and go: faster inference, zero config.
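
If you want to try it, a minimal quick start might look like this (`llama3` is just an example tag; pull whichever model you like):

```sh
# Pull a model, then ask it a one-off question from the terminal.
# As of v0.14.0 there is no separate OpenBLAS install step on Apple Silicon.
ollama pull llama3
ollama run llama3 "Explain what MLX is in one sentence."
```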

✨ New in v0.14.0:

  • ✅ OpenBLAS built right into the MLX backend: seamless setup on M-series chips
  • 🚀 Speed boost for local inference (yes, really)
  • 🔧 Cleaner dev experience for MLX-powered models (see the sketch after this list)

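On the dev-experience point, here's one way to hit a local model programmatically; this uses Ollama's standard REST API on its default port (11434), not anything specific to this release:

```sh
# Ask the locally running server for a completion over HTTP.
# Assumes `ollama serve` is running and llama3 has already been pulled.
curl http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "Why does matrix-multiply performance matter for inference?",
  "stream": false
}'
```
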
Whether you're running fine-tuned LLMs or just tinkering, Ollama keeps getting better at making local AI feel like magic.

`ollama pull llama3`, and let the local AI party begin 🤖💻
