Ollama v0.15.6 🚀
Run LLMs locally with ease—now with a heads‑up on memory.
What’s new?
- Docs bump: The release notes now warn that parallel mode (`ollama run --parallel`) needs more RAM. If you're scaling inference across multiple threads or GPUs, allocate extra memory to keep things smooth (see the sketch below).
That’s the only change in this tag—no new features or bug fixes.
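For context, here is a minimal sketch of what a parallel run can look like. It assumes the `OLLAMA_NUM_PARALLEL` environment variable for server-side concurrency and uses `llama3` as a placeholder model; this tag doesn't document the exact knobs, so check the docs for your version.

```sh
# Minimal sketch, assuming OLLAMA_NUM_PARALLEL controls how many
# requests the server handles concurrently.
OLLAMA_NUM_PARALLEL=4 ollama serve &
sleep 5   # give the server a moment to come up

# Two prompts issued concurrently against the same loaded model;
# each in-flight request adds to the memory footprint.
ollama run llama3 "Summarize the release notes for v0.15.6" &
ollama run llama3 "Explain why parallel inference needs more RAM" &
wait
```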
Pro tip:
- Double‑check your container/VM RAM settings before launching parallel jobs. A quick memory bump can save you from unexpected OOM crashes. For example:
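A hedged Docker example of that memory bump. The `16g` limit is a placeholder, not a value recommended by this release; size it to your models and level of parallelism.

```sh
# Illustrative only: raise the container's hard RAM cap before parallel jobs.
docker run -d --name ollama \
  --memory=16g \
  -v ollama:/root/.ollama \
  -p 11434:11434 \
  ollama/ollama
```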
Happy tinkering! 🎉
