Ollama – v0.14.0-rc0: Add experimental MLX backend and engine with imagegen support (#13648)
Ollama v0.14.0-rc0 just landed – and Apple Silicon fans, this one's for you 🚀🔥
Say hello to experimental MLX backend support – run LLMs natively on M-series chips without CUDA or PyTorch overhead. Faster, leaner, and totally Apple-native.
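Once you've got an MLX-enabled build in hand, the usual CLI flow should be all you need. A minimal sketch, assuming the RC picks the MLX engine automatically on Apple Silicon (the model tag is just an example, not an MLX requirement):

```sh
# Start the server from your freshly built binary
./ollama serve &

# In another shell: pull a model and chat with it
# ("llama3.2" is an example tag, not an MLX-specific model)
./ollama run llama3.2
```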
✨ What's new?
- 🖼️ Image generation – yes, you can now generate images directly via Ollama (early but wildly cool; hedged API sketch after this list)
- 🛠️ Built-in build toggles: `cmake --preset MLX` and `go build -tags mlx .` for easy custom compiles (build walkthrough below)
- 🍎 Full macOS support – x86 & ARM builds ready, CPU-only for now (GPU accel coming soon!)
- 📚 Cleaner docs + improved tokenizer guides – because nobody likes cryptic configs
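About those build toggles: here's how the two documented commands likely fit together. This is a sketch – the assumption (worth checking against the repo docs) is that the CMake preset builds the native MLX backend while the tagged Go build compiles an Ollama binary that can use it:

```sh
# From a checkout of github.com/ollama/ollama:

# 1. Configure and build the native MLX backend via this release's preset
cmake --preset MLX
cmake --build build        # build directory is an assumption; check the preset

# 2. Compile the Ollama binary with the MLX engine enabled
go build -tags mlx .
```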
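As for image generation, the announcement doesn't spell out the API surface, so treat the following as a hypothetical sketch: it assumes imagegen models are served through Ollama's existing `/api/generate` endpoint, and `x/imagegen-model` is a placeholder, not a real tag:

```sh
# Hypothetical: ask a locally served image-generation model for a picture.
# Endpoint reuse and response shape are assumptions; the tag is a placeholder.
curl http://localhost:11434/api/generate -d '{
  "model": "x/imagegen-model",
  "prompt": "a watercolor fox in a snowy forest"
}'
```

If the RC routes imagegen differently, PR #13648 and the updated docs are the place to check.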
This is still a release candidate, so expect bugs… but if you're on Mac and want to skip the bloat? Now's your chance. Break it, tweak it, report it – we're all in this together 🙌
#MLX #AppleSilicon #ImageGen #Ollama #AIOnMac
