Ollama – v0.17.1-rc2
Ollama v0.17.1‑rc2 just dropped! 🎉
What Ollama does
A lightweight local inference engine that lets you spin up LLMs on your machine (or edge device) with a single CLI command.
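A typical first session might look like the following sketch. The `pull` and `run` subcommands are standard Ollama CLI; the model tag `qwen3.5-27b` is the one mentioned in the tip at the end of this post, and assumes the model is available in your registry.

```shell
# Download the model weights once; later invocations reuse the local cache.
ollama pull qwen3.5-27b

# Start an interactive chat session with the model.
ollama run qwen3.5-27b

# Or pass a single prompt non-interactively.
ollama run qwen3.5-27b "Summarize this release in one sentence."
```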
What’s new in this RC
- Qwen 3.5‑27B model support – run the latest 27‑billion‑parameter Qwen 3.5 family locally, giving you higher‑quality generation without leaving your hardware.
- Minor bug fixes and stability improvements: crash fixes on Apple Silicon (macOS ARM), better memory handling on Linux, and a handful of other polish items.
Why it matters
You can now experiment with the cutting‑edge Qwen 3.5 series fully offline, which makes it a good fit for privacy‑first projects or rapid prototyping on dev machines.
💡 Quick tip: after updating, run `ollama pull qwen3.5-27b` to cache the model weights locally, so your first run doesn't block on a download.
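Once the model is cached, you aren't limited to the CLI: Ollama also serves a local REST API while the daemon is running. A minimal sketch, assuming the default port 11434 and the same `qwen3.5-27b` tag as above:

```shell
# The Ollama server listens on localhost:11434 by default.
# With "stream": false, /api/generate returns a single JSON object
# instead of a stream of partial responses.
curl http://localhost:11434/api/generate -d '{
  "model": "qwen3.5-27b",
  "prompt": "Why is local inference useful?",
  "stream": false
}'
```

This is handy for wiring the model into scripts or local apps without shelling out to the CLI.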
