Ollama – v0.14.0-rc4
Ollama v0.14.0-rc4 just dropped, and it fixes the annoying MLX build hiccups on macOS & Docker!
Been trying to run LLaVA or other vision models on Apple Silicon, only to keep hitting "MLX not found" errors? Say goodbye to the frustration. This patch fixes the build scripts so MLX works reliably, no more wrestling with toolchains.
What's fixed:
- MLX build scripts now work smoothly on macOS (M-series chips, rejoice!)
- Dockerfile updated to bundle MLX deps properly for image gen in containers
No flashy new features, just stable, reliable local image generation. Perfect for devs prepping for v0.14's full launch. Keep those M-chips humming and start generating again!
