Ollama v0.17.0 – fresh on the scene! 🎉
What Ollama does:
Run open‑source LLMs (Llama 3, Gemma, Mistral, etc.) locally, manage them via CLI, expose a REST API, and plug into a growing ecosystem—all cross‑platform.
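For reference, that REST API can be queried with nothing but the standard library. A minimal sketch, assuming `ollama serve` is running on the default port 11434 and a model such as `llama3` has already been pulled (the model name here is just an example):

```python
import json
import urllib.request

# Default endpoint exposed by a local `ollama serve`
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_generate_request(model: str, prompt: str) -> dict:
    """Build the JSON body for the /api/generate endpoint.

    stream=False requests a single JSON reply instead of a token stream.
    """
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model: str, prompt: str) -> str:
    """POST the prompt to the local Ollama server and return the response text."""
    body = json.dumps(build_generate_request(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(generate("llama3", "Say hello in one word."))
```

Swap `generate` for the streaming variant (`"stream": True`) if you want tokens as they arrive.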
---
New in v0.17.0
- Web‑search plugin auto‑install
  - `ollama config` now drops the web‑search extension straight into your user extensions folder. Query the internet from a local model with zero manual steps.
- Improved extension handling
  - Extensions are discovered and loaded from a dedicated per‑user directory, keeping system installs tidy and upgrades safer.
- Stability & speed tweaks
  - Crash loops on certain models squashed.
  - Faster startup for `ollama serve` on macOS/Linux.
- Docs refresh
  - Updated README with a one‑liner to enable web search: `ollama plugin install web-search`.
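To make the per‑user extension idea concrete, here is a sketch of how such a directory might be resolved per platform. The exact path layout (`~/.ollama/extensions` and equivalents) is an assumption for illustration, not taken from Ollama's documentation:

```python
import os
import sys
from pathlib import Path

def user_extensions_dir() -> Path:
    """Guess a per-user extensions folder for each platform.

    NOTE: the actual directory Ollama uses is not specified here; this
    layout is purely illustrative of the user-scoped pattern.
    """
    home = Path.home()
    if sys.platform == "darwin":
        base = home / ".ollama"
    elif os.name == "nt":
        # Windows: prefer the local app-data folder if available
        base = Path(os.environ.get("LOCALAPPDATA", home)) / "Ollama"
    else:
        # Linux and friends: respect XDG_DATA_HOME if set
        base = Path(os.environ.get("XDG_DATA_HOME", home / ".local" / "share")) / "ollama"
    return base / "extensions"
```

Keeping extensions under the user's home directory means upgrades of the system install never touch them, which is the "safer upgrades" point above.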
---
Why you’ll love it
- Real‑time data in your local LLM responses—no cloud lock‑in.
- Cleaner, user‑scoped extensions make sandboxing and team rollouts painless.
Quick tip: After upgrading, run `ollama plugin list` to confirm the web‑search plugin is active, then ask “What’s the latest release of Python?” and watch Ollama pull live info! 🚀
