• Ollama – v0.17.7-rc1

    🚨 Ollama v0.17.7-rc1 is out! 🚨

    This release is a tiny but tidy patch candidate: only one commit landed.

    🔧 `cmd/config: fix cloud model limit lookups in integrations (#14650)`

    ✅ What's fixed:

    • Resolves a bug where Ollama fetched or applied the wrong model usage limits when integrated with cloud services (e.g., Ollama Cloud or third-party APIs).
    • Ensures smoother, more accurate rate-limit handling in hybrid local/cloud workflows.

    📌 Why it matters:

    • If you're using Ollama with cloud backends or integrations (like LangChain, LlamaIndex, or custom tooling), this fix helps avoid unexpected throttling or config mismatches.
    • No new features, no breaking changes, just more reliability 🛠️

    📅 Tagged: Mar 5, 2024

    โš ๏ธ RC = Release Candidate โ€” test it out, but maybe wait for the stable drop before pushing to prod.

    Let me know if you want a deep dive into PR #14650 or how this affects your integrations! ๐Ÿค–โœจ

    ๐Ÿ”— View Release

  • Ollama – v0.17.7-rc0

    🚨 Ollama v0.17.7-rc0 is here, and it's all about Qwen3.5 love! 🧠✨

    The latest release candidate is a focused update with one standout improvement:

    🔹 Context length configuration for Qwen3.5 models at launch: now you can tweak how much context the model uses right from the start, boosting compatibility and flexibility for longer prompts or multi-turn conversations.
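    In Ollama, context length is generally controlled by the `num_ctx` option, so the new behavior can presumably be exercised like this (a hedged sketch; the `qwen3.5` model tag and the 8192 value are placeholders, not taken from the release notes):

    ```shell
    # In the interactive REPL: set the context window before prompting
    #   ollama run qwen3.5
    #   >>> /set parameter num_ctx 8192

    # Or per-request, via the local REST API's "options" field
    curl http://localhost:11434/api/generate -d '{
      "model": "qwen3.5",
      "prompt": "Summarize this long transcript...",
      "options": { "num_ctx": 8192 }
    }'
    ```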

    No flashy new features this time, just smart, targeted tuning to make Qwen3.5 models run smoother and more predictably on your machine 🛠️💻

    Perfect for anyone experimenting with Qwen3.5 locally or building apps around it!

    Curious how it behaves? Drop a test prompt and share your results 👇

    ๐Ÿ”— View Release

  • Ollama – v0.17.6

    🚨 Ollama v0.17.6 is out, and it's a quick but important patch! 🚨

    This release is light on features, heavy on precision:

    🔧 Bug fix: Corrected how `glm-ocr` image tags are parsed in renderer prompts

    ๐Ÿ”— PR #14584 by @Victor-Quqi

    ✅ Why it matters:

    • If you’re using GLM-OCR (especially for multimodal OCR tasks), image tags like `<image>` in your prompts will now render correctly instead of causing errors or misinterpretations.
    • Ensures smoother integration in custom renderer workflows, critical for anyone building multimodal apps or pipelines on top of Ollama.

    📦 No new models, no API changes: just a clean, targeted fix to keep your local LLM workflows humming.

    If you rely on GLM-OCR or custom multimodal prompts, update away! 🛠️

    Let me know if you want a breakdown of how Ollama renderers work or how to test this fix! 🤖✨

    ๐Ÿ”— View Release

  • Voxtral Wyoming – v1.0.0

    🚨 Voxtral Wyoming v1.0.0 is live, and it's production-ready! 🚀

    The wait is over: this release marks the stable, final v1.0.0 of Voxtral Wyoming, your go-to offline STT service powered by Mistral's Voxtral models, now fully integrated with Home Assistant Assist via the Wyoming protocol.

    ✨ What's new (and why it matters):

    ✅ Stable & battle-tested: all major bugs squashed, performance optimized for real-world use

    ✅ API finalized: no more breaking changes ahead; integrations are safe to lock in

    ✅ Full tooling in place: docs, tests, and CI/CD pipelines are now rock-solid

    ✅ Zero flash, all function: no flashy new features, just a polished, reliable upgrade ready for production 🛠️

    🎯 Whether you're running it on CPU, CUDA (NVIDIA), or MPS (Apple Silicon), and whether your audio comes in MP3, OGG, FLAC, or WAV, Voxtral Wyoming handles it all with automatic PCM16 conversion. Config via env vars? Yep: host, port, language, model ID... all covered.

    📦 Dockerized. Deployed. Ready.
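    A rough sketch of that deployment story (the image name, port, env-var names, and model ID below are illustrative placeholders, not the project's documented interface; check its README for the real ones):

    ```shell
    # Hypothetical sketch: run the Wyoming STT service in Docker and configure
    # it through environment variables. All names here are placeholders.
    docker run -d \
      -p 10300:10300 \
      -e STT_HOST=0.0.0.0 \
      -e STT_PORT=10300 \
      -e STT_LANGUAGE=en \
      -e STT_MODEL_ID=mistralai/Voxtral-Mini-3B-2507 \
      voxtral-wyoming:latest
    ```

    Home Assistant's Wyoming integration would then point at the host and port exposed above.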

    🟢 Green light for production! Let's build smarter, offline-first voice assistants, together. 🎤💡

    ๐Ÿ”— View Release

  • Ollama – v0.17.5

    🚨 Ollama v0.17.5 is live! 🚨

    Hey AI tinkerers, fresh update alert! 🔥 Ollama just rolled out v0.17.5, and it's a quiet but mighty one, especially if you love playing with Qwen3 or importing GGUF models. Here's the lowdown:

    🔹 GGUF love, expanded! 🎁

    • Full support for importing and running Qwen3 models (like `Qwen3-0.6B`, `Qwen3-1.7B`), straight from Hugging Face or wherever you grab your GGUFs.
    • Smoother imports, fewer hiccups 🛠️
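    Importing a downloaded GGUF file generally goes through a Modelfile; a minimal sketch (the filename is a placeholder for whatever GGUF you grabbed):

    ```shell
    # Point a Modelfile at the local GGUF weights (placeholder filename)
    cat > Modelfile <<'EOF'
    FROM ./Qwen3-0.6B-Q4_K_M.gguf
    EOF

    # Register it with Ollama under a local name, then run it
    ollama create qwen3-local -f Modelfile
    ollama run qwen3-local "Hello!"
    ```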

    🔹 Under-the-hood polish

    • Bug fixes and stability tweaks (you won't see them, but you'll feel the smoother run).

    💡 Why care?

    If you're experimenting with lightweight Qwen3 variants or love the flexibility of GGUF (quantized, portable, efficient 📦), this update makes your workflow just a little more magical. ✨

    Ready to upgrade? Grab the latest build from the releases page or re-run the installer 🚀

    Let us know how it runs!

    ๐Ÿ”— View Release

  • Voxtral Wyoming – v0.5.0

    _New update detected._

    ๐Ÿ”— View Release

  • Voxtral Wyoming – v0.4.0

    _New update detected._

    ๐Ÿ”— View Release

  • Lemonade – v9.4.1

    _New update detected._

    ๐Ÿ”— View Release

  • Voxtral Wyoming – v0.3.0

    _New update detected._

    ๐Ÿ”— View Release

  • Ollama – v0.17.4

    🚀 Ollama v0.17.4 is live! Here's what's fresh in this patch release:

    🔹 Stable Tool Calling for GLM-4 & Qwen3

    ✅ Reliable tool/function calling support; no more misaligned or garbled tool outputs!

    ✅ Works seamlessly with `curl`, Python clients, and custom tools via the Ollama API.

    🔹 Better JSON & Parser Handling

    🧠 Internal upgrades to model parsers, especially for Chinese-language models (GLM, Qwen).

    📊 More consistent parsing of JSON-formatted tool responses.

    🔹 Minor Fixes & Tweaks

    ⚙️ Performance bumps, bug fixes, and general polish; zero breaking changes.

    Perfect for anyone relying on structured outputs or tool integrations with local LLMs. Try it out and let us know how your tool-calling workflows feel! 🛠️✨
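    Tool calling can be exercised against the local API with a request along these lines (the model tag and the `get_weather` function schema are illustrative examples, not taken from the release notes):

    ```shell
    # Minimal tool-calling request to a local Ollama server.
    # Model tag and function schema are examples only.
    curl http://localhost:11434/api/chat -d '{
      "model": "qwen3",
      "messages": [{ "role": "user", "content": "What is the weather in Paris?" }],
      "tools": [{
        "type": "function",
        "function": {
          "name": "get_weather",
          "description": "Get current weather for a city",
          "parameters": {
            "type": "object",
            "properties": { "city": { "type": "string" } },
            "required": ["city"]
          }
        }
      }]
    }'
    ```

    A model with stable tool calling should respond with a structured `tool_calls` field rather than free-text JSON.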

    ๐Ÿ”— View Release