Ollama – v0.15.0

🚀 Ollama v0.15.0 is live, and it's all about stability!

CUDA MMA errors on NVIDIA GPUs? Gone. 🐞💥

This update crushes those pesky GPU crashes during Llama model inference, making local runs smoother than ever, especially for Linux users with NVIDIA cards.

No flashy new features… just solid under-the-hood fixes.

Perfect if youโ€™re running Ollama in production or pushing models hard on local hardware.

💡 Pro tip: update, then restart your Ollama service on Linux for the full benefit.
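On a systemd-based Linux box, that update-and-restart step can be sketched as below (assuming the standard install script and the default `ollama` systemd unit; adjust if you installed differently):

```shell
# Re-run the official install script to pull the latest release
curl -fsSL https://ollama.com/install.sh | sh

# Restart the service so the new binary is picked up
sudo systemctl restart ollama

# Confirm the running version
ollama --version
```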

GGUF, Llama 3, Mistral: all running cleaner now.

#Ollama #LocalLLMs #CUDA #GPUComputing

🔗 View Release