Ollama – v0.18.0-rc0

🚨 Ollama v0.18.0-rc0 is out, and it's bringing some slick cloud/local hybrid improvements! 🌩️💻

While the full release notes are still light (GitHub's UI is being extra unhelpful right now 😅), here's what we know (and suspect) based on commit `9e7ba83` and recent trends:

🔹 Cloud + Local Workflow Fixes

→ `ollama ls` now populates correctly even after you run `ollama run <model:cloud>`: no more blank model lists!

→ Better sync between local tooling and cloud-hosted models.
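The fix above is easiest to see as a two-command sequence. A minimal sketch, assuming Ollama ≥ 0.18.0-rc0 is installed and you're signed in to Ollama's cloud; the `-cloud` model tag below is illustrative, not taken from the release notes:

```shell
# Illustrative hybrid workflow; the model tag is an assumption, not from the release.
if command -v ollama >/dev/null 2>&1; then
  ollama run gpt-oss:120b-cloud "Say hello"  # prompt is served by the cloud-hosted model
  ollama ls                                  # local list should now populate instead of coming back blank
else
  echo "ollama not found; install it from https://ollama.com first"
fi
```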

🔹 Likely Additions & Fixes

✅ Improved FP8 / Q4_K_M quantization support (hello, faster inference on lower-end hardware!)

✅ Performance tweaks for Llama 3.2 & Phi-3 series

✅ ARM64 & macOS Sonoma/Ventura compatibility polish

✅ Potential GGUF format enhancements (more quant options? better metadata handling?)

💡 Pro tip: Run this to grab the official changelog once it's live:

```bash
curl -s https://api.github.com/repos/ollama/ollama/releases/latest | jq '.body'
```

Let's get testing, and share your early feedback! 🧪✨

#Ollama #LLM #AIEnthusiasts

🔗 View Release