Ollama – v0.15.5-rc5

Ollama v0.15.5‑rc5 – fresh off the press! 🚀

What’s the buzz?

A lightweight framework for running LLMs (Llama 3, Gemma, Mistral, etc.) locally—now even smoother to spin up.

New goodies in this release

  • Launch command overhaul

`ollama launch` is faster, its logs are cleaner, and it handles missing model files gracefully.
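
To give it a spin, an invocation might look something like the sketch below. This assumes `launch` accepts a model name positionally, the same way `ollama run` does, and `llama3` is just a placeholder for a model you've already pulled:

```bash
# Hypothetical invocation of the overhauled launch command.
# Assumes `launch` takes a model name positionally, like `ollama run`;
# "llama3" stands in for any model already present locally.
ollama launch llama3
```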

  • Sharper error messages

When a model can’t be found or the GPU runtime fails, you’ll get actionable hints instead of cryptic dumps.

  • Cross‑platform tweaks

Minor fixes for macOS ARM builds & Linux containers—no more “permission denied” crashes during startup.

  • Telemetry opt‑out flag

Add `--no-telemetry` to suppress anonymous usage reporting. Perfect for privacy‑first setups or CI pipelines.
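
For example, starting the server with reporting disabled (flag as described in these notes):

```bash
# Start the Ollama server with anonymous usage reporting turned off,
# using the opt-out flag introduced in this release.
ollama serve --no-telemetry
```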

  • Dependency bump

Updated protobuf & ggml libraries shave ~5 % off memory overhead for large models.

  • CLI consistency fixes

`ollama run`, `ollama serve`, and `ollama pull` now share the same flag syntax (`--model`, `--port`, etc.), making scripting a breeze.
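
As a quick illustration, the same flag spellings now apply across subcommands (syntax per these release notes; `llama3` is a placeholder model):

```bash
# Unified flag syntax across subcommands, as described in this release.
ollama pull  --model llama3    # fetch the model
ollama run   --model llama3    # chat with it interactively
ollama serve --port 11434      # serve it on Ollama's default port
```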

> Tip: When automating model serving, sprinkle in `--no-telemetry` to keep your logs tidy and respect privacy.
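
A rough CI-style sketch of that tip, using the commands and flags described above (the model name and port are placeholders for your own setup):

```bash
#!/usr/bin/env sh
# Sketch: pull a model, then serve it in the background without telemetry.
# "llama3" and the port are examples; adjust them for your pipeline.
set -e
ollama pull --model llama3
ollama serve --no-telemetry --port 11434 &
```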

That’s it—speedier launches, clearer feedback, and a handful of quality‑of‑life tweaks for all you local‑LLM tinkerers. Happy experimenting! 🎉
