Ollama – v0.15.0
Ollama v0.15.0 is live, and it's all about stability!
CUDA MMA errors on NVIDIA GPUs? Gone.
This update crushes those pesky GPU crashes during Llama model inference, making local runs smoother than ever, especially for Linux users with NVIDIA cards.
No flashy new features, just solid under-the-hood fixes.
Perfect if you're running Ollama in production or pushing models hard on local hardware.
Pro tip: update Ollama and restart the service on Linux to get the full benefit.
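On a typical Linux setup that used the official install script (which registers an `ollama` systemd unit), updating and restarting looks something like this sketch; adjust the unit name if your install differs:

```shell
# Re-run the official install script; it also upgrades an existing
# install to the latest release.
curl -fsSL https://ollama.com/install.sh | sh

# Restart the systemd service so the new binary takes over.
sudo systemctl restart ollama

# Confirm which version is now running.
ollama --version
```

These commands need network access and sudo, so run them directly on the host rather than inside a restricted container.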
GGUF, Llama 3, Mistral: all running cleaner now.
#Ollama #LocalLLMs #CUDA #GPUComputing
