Ollama – v0.16.0

🚨 Ollama v0.16.0 is live! 🚨

The latest drop from the Ollama crew just landed, and while the release notes are light on flashy new features, this one's a quiet but meaningful polish pass. Here's the lowdown:

🔹 API Docs Fixed!

The OpenAPI schema for `/api/ps` (list running models) and `/api/tags` (list local models) has been corrected, meaning better Swagger compatibility, smoother SDK generation, and fewer headaches for integrators. 🛠️
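If you're hitting those endpoints directly rather than through a generated SDK, a minimal sketch looks like this (assumes Ollama on its default port 11434; the sample payload below is trimmed to the fields this sketch actually uses):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # Ollama's default listen address


def model_names(raw_json: str) -> list[str]:
    """Extract model names from an /api/tags or /api/ps response body."""
    data = json.loads(raw_json)
    return [m["name"] for m in data.get("models", [])]


def list_local_models() -> list[str]:
    """GET /api/tags from a running Ollama instance."""
    with urllib.request.urlopen(f"{OLLAMA_URL}/api/tags") as resp:
        return model_names(resp.read().decode())


# Parsing a sample /api/tags payload without needing a live server:
sample = '{"models": [{"name": "llama3:latest", "size": 4661224676}]}'
print(model_names(sample))  # ['llama3:latest']
```

Both `/api/tags` and `/api/ps` return a top-level `models` array, so the same parser works for either endpoint.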

🔹 Stability & Under-the-Hood Tweaks

Expect refined model loading, improved streaming behavior, and likely minor bug fixes, especially around context handling and memory usage. No breaking changes, just smoother sailing.

🔹 Still GGUF-Friendly

All your favorite quantized models (Llama 3, DeepSeek-R1, Phi-4, etc.) keep rolling; no format changes here.

💡 Pro Tip: If you're building tools or dashboards against Ollama's REST API, this update makes your life easier. Docker users can run `docker pull ollama/ollama:latest`; everyone else can grab the latest binary or installer from GitHub.

👉 Full details (when they land): v0.16.0 Release

Happy local LLM-ing! 🤖✨

🔗 View Release