Ollama – v0.14.2-rc0
Hey AI tinkerers! 👋 Ollama v0.14.2-rc0 just landed, and Mac users with Apple Silicon are in for a treat! 🎉
✅ MLX build instructions added to the README: you can now compile Ollama natively on M1/M2/M3 chips, bypassing Docker for faster, leaner local LLM inference.
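For the curious, the from-source flow on Apple Silicon generally looks like the sketch below. This is a minimal outline assuming the standard Go toolchain flow; the README's new MLX section is the authoritative reference, and any MLX-specific steps or flags there aren't reproduced here.

```shell
# Minimal sketch of a native build on an M-series Mac (assumes Go is installed).
# Check the repository README for the authoritative MLX build steps.
git clone https://github.com/ollama/ollama.git
cd ollama
go build .        # Ollama is a Go project; this yields a native arm64 binary
./ollama serve    # start the local server, no Docker required
```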
MLX = Apple's ML framework (think PyTorch, but built for M-series chips). No discrete GPU? You're still rockin' Llama 3, DeepSeek-R1, or Mistral, just smoother and snappier.
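Once you've got a binary, running one of those models is a one-liner (these are real tags from the Ollama model library; Ollama pulls the weights on first use):

```shell
ollama run llama3        # Meta's Llama 3
ollama run deepseek-r1   # DeepSeek-R1
ollama run mistral       # Mistral 7B
```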
⚠️ Still a release candidate, so keep an eye out for final tweaks before the stable release. But if you're tinkering on a Mac? This is your golden ticket. 🎯
Linux & Windows folks: your Ollama magic stays untouched, no worries! 💻🛠️
