Ollama – v0.14.0
Ollama v0.14.0 is live, and Mac users, this one's for you!
Apple Silicon just got a whole lot smoother: OpenBLAS is now bundled with the MLX backend. No more `brew install openblas` headaches. Just install, pull your favorite model (Llama 3? Mistral?), and go: faster inference, zero config.
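A typical zero-config session looks something like this (the model name is just an example; any model from the library works the same way):

```shell
# Pull a model once, then chat with it locally. No separate BLAS install needed on Apple Silicon.
ollama pull llama3
ollama run llama3 "Write a haiku about running models locally."
```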
New in v0.14.0:
- OpenBLAS built right into MLX: seamless setup on M-series chips
- Speed boost for local inference (yes, really)
- Cleaner dev experience for MLX-powered models
Whether you're running fine-tuned LLMs or just tinkering, Ollama keeps getting better at making local AI feel like magic.
Run `ollama pull llama3` and let the local AI party begin.
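And if you'd rather call the model from a script than the CLI, the local REST API works as before. A minimal sketch, assuming the server is on its default port (11434) and `llama3` has been pulled:

```shell
# Ask the local Ollama server for a single, non-streaming completion.
curl http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
```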
