Ollama – v0.14.0-rc0: Add experimental MLX backend and engine with imagegen support (#13648)

Ollama v0.14.0-rc0 just landed, and Apple Silicon fans, this one's for you 🍏💥

Say hello to experimental MLX backend support: run LLMs natively on M-series chips without CUDA or PyTorch overhead. Faster, leaner, and totally Apple-native.
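If you want to kick the tires from the terminal, the everyday Ollama CLI workflow applies; note the model name below is just an illustrative example, not something specific to this release:

```shell
# Pull a model, then run it locally; on an M-series Mac the experimental
# MLX backend handles inference natively (behavior may shift between RCs).
ollama pull llama3.2
ollama run llama3.2 "Hello from Apple Silicon"
```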

✨ What's new?

  • ๐Ÿ–ผ๏ธ Image generation โ€” yes, you can now generate images directly via Ollama (early but wildly cool)
  • ๐Ÿ› ๏ธ Built-in build toggles: `cmake –preset MLX` and `go build -tags mlx .` for easy custom compiles
  • ๐ŸŽ Full macOS support โ€” x86 & ARM builds ready, CPU-only for now (GPU accel coming soon!)
  • ๐Ÿ“š Cleaner docs + improved tokenizer guides โ€” because nobody likes cryptic configs
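Put together, the build toggles above sketch out a from-source build like this. The preset name and Go build tag come from the release notes; the clone URL is the public repo, and the `cmake --build` step is an assumption based on how CMake presets are typically driven:

```shell
# Build Ollama from source with the experimental MLX backend enabled.
git clone https://github.com/ollama/ollama.git
cd ollama
cmake --preset MLX           # configure the MLX backend build
cmake --build --preset MLX   # compile the native pieces (assumed step)
go build -tags mlx .         # build the Go binary with the mlx build tag
```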

This is still a release candidate, so expect bugs… but if you're on Mac and want to skip the bloat, now's your chance. Break it, tweak it, report it. We're all in this together 🚀

#MLX #AppleSilicon #ImageGen #Ollama #AIOnMac

🔗 View Release