Ollama – v0.18.0-rc0
🚨 Ollama v0.18.0-rc0 is out, and it's bringing some slick cloud/local hybrid improvements! 🌩️💻
While the full release notes are still light (GitHub's UI is being extra unhelpful right now 😅), here's what we know (and suspect) based on commit `9e7ba83` and recent trends:
🔹 Cloud + Local Workflow Fixes
✅ `ollama ls` now populates correctly even after you run `ollama run <model:cloud>` — no more blank model lists!
✅ Better sync between local tooling and cloud-hosted models.
🔹 Likely Additions & Fixes
• Improved FP8 / Q4_K_M quantization support (hello, faster inference on lower-end hardware!)
• Performance tweaks for the Llama 3.2 & Phi-3 series
• ARM64 & macOS Sonoma/Ventura compatibility polish
• Potential GGUF format enhancements (more quant options? better metadata handling?)
💡 Pro tip: Run this to grab the official changelog once it's live. Note that GitHub's `/releases/latest` endpoint excludes prereleases, so while v0.18.0-rc0 is still a release candidate you'll need to query its tag directly:

```bash
curl -s https://api.github.com/repos/ollama/ollama/releases/latest | jq -r '.body'
curl -s https://api.github.com/repos/ollama/ollama/releases/tags/v0.18.0-rc0 | jq -r '.body'
```
Let's get testing, and share your early feedback! 🧪✨
#Ollama #LLM #AIEnthusiasts
