Ollama – v0.14.3-rc2

🚀 Ollama v0.14.3-rc2 just dropped, and it's a quiet win for your RAM!

💥 Bug squashed: vision models no longer get loaded into memory when you delete them. They now stay out of your way until you actually run them.

🧠 Why it rocks:

  • Less RAM bloat = faster model swaps
  • Smoother performance on laptops & tiny servers
  • Cleaner shutdowns + smarter cleanup of unused vision models

Perfect if you’re juggling multimodal workloads or running vision models in prod. Still a release candidate, but solid — keep those GPUs cool and your memory free! 🖥️✨
