Ollama – v0.14.2-rc0

Hey AI tinkerers! 🚀 Ollama v0.14.2-rc0 just landed – and Mac users with Apple Silicon are in for a treat! 🍏

✅ MLX build instructions have been added to the README – you can now compile Ollama natively on M1/M2/M3 chips for faster, leaner local LLM inference, no Docker required.
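
Once it's built, a quick smoke test is easy with the official ollama Python client – a minimal sketch, assuming the server is running and you've already pulled a model (the model name below is illustrative):

```python
# Quick smoke test for a local Ollama build.
# Assumes `ollama serve` is running and a model has been pulled,
# e.g. `ollama pull llama3` – the model name here is illustrative.
import ollama

response = ollama.chat(
    model="llama3",  # swap in any model you've pulled locally
    messages=[{"role": "user", "content": "Say hello in five words."}],
)
print(response["message"]["content"])
```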

MLX = Apple’s ML framework (think PyTorch, but built for M-series unified memory). No dedicated GPU needed – you can still rock Llama 3, DeepSeek-R1, or Mistral, just smoother and snappier.
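
Curious what MLX actually looks like? Here's a minimal sketch of its NumPy-style API (assumes `pip install mlx` on an Apple Silicon Mac):

```python
# Minimal MLX sketch: NumPy-style arrays on Apple Silicon's unified
# memory, with lazy evaluation – work only happens at mx.eval().
import mlx.core as mx

a = mx.random.normal((1024, 1024))
b = mx.random.normal((1024, 1024))
c = mx.matmul(a, b)  # builds a lazy compute graph; nothing runs yet
mx.eval(c)           # materializes the result on the default device
print(c.shape)       # (1024, 1024)
```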

⚠️ Still a release candidate, so keep an eye out for final tweaks – but if you're tinkering on a Mac? This is your golden ticket. 🎯

Linux & Windows folks – your Ollama magic stays untouched, no worries! 💻🛠️

🔗 View Release