• Voxtral Wyoming – v1.0.0

    🚨 Voxtral Wyoming v1.0.0 is live – and it's production-ready! 🚀

    The wait is over: this release marks the stable v1.0.0 of Voxtral Wyoming – your go-to offline STT service powered by Mistral's Voxtral models, now fully integrated with Home Assistant Assist via the Wyoming protocol.

    ✨ What's new (and why it matters):

    ✅ Stable & battle-tested – all major bugs squashed, performance optimized for real-world use

    ✅ API finalized – no more breaking changes ahead; integrations are safe to lock in

    ✅ Full tooling in place – docs, tests, and CI/CD pipelines are now rock-solid

    ✅ Zero flash, all function – no flashy new features, just a polished, reliable upgrade ready for production 🛠️

    🎯 Whether you're running it on CPU, CUDA (NVIDIA), or MPS (Apple Silicon), and whether your audio comes in MP3, OGG, FLAC, or WAV – Voxtral Wyoming handles it all with automatic PCM16 conversion. Config via env vars? Yep – host, port, language, model ID… all covered.

    📦 Dockerized. Deployed. Ready.
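    The automatic PCM16 conversion mentioned above boils down to clipping samples to [-1.0, 1.0] and packing them as little-endian 16-bit integers. A minimal sketch (the `VOXTRAL_*` env-var names are illustrative guesses, not necessarily the project's actual keys; 10300 is a port commonly used by Wyoming STT services):

```python
import os
import struct

# Illustrative env-var names: check the Voxtral Wyoming README for the
# actual configuration keys.
HOST = os.environ.get("VOXTRAL_HOST", "0.0.0.0")
PORT = int(os.environ.get("VOXTRAL_PORT", "10300"))

def to_pcm16(samples):
    """Pack float samples in [-1.0, 1.0] into little-endian 16-bit PCM bytes.

    Out-of-range samples are clipped rather than wrapped.
    """
    ints = [max(-32768, min(32767, int(s * 32767))) for s in samples]
    return struct.pack("<%dh" % len(ints), *ints)
```

    Decoders for MP3/OGG/FLAC typically hand back float or int samples; a step like this is the last hop to the raw PCM16 stream the Wyoming protocol expects.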

    🟢 Green light for production! Let's build smarter, offline-first voice assistants – together. 🎤💡

    🔗 View Release

  • Ollama – v0.17.5

    🚨 Ollama v0.17.5 is live! 🚨

    Hey AI tinkerers – fresh update alert! 🔥 Ollama just rolled out v0.17.5, and it's a quiet but mighty one – especially if you love playing with Qwen3 or importing GGUF models. Here's the lowdown:

    🔹 GGUF love, expanded! 🎁

    • Full support for importing and running Qwen3 models (like `Qwen3-0.6B`, `Qwen3-1.7B`) – straight from Hugging Face or wherever you grab your GGUFs.
    • Smoother imports, fewer hiccups 🛠️
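    Not sure whether a download is actually a GGUF before importing it? The GGUF format starts with a fixed 4-byte magic (`GGUF`) followed by a little-endian uint32 version, so a quick sanity check is easy. This checker is a standalone sketch, not part of Ollama:

```python
import os
import struct
import tempfile

def read_gguf_version(path):
    """Return the GGUF version number, or raise ValueError if not a GGUF file."""
    with open(path, "rb") as f:
        magic = f.read(4)
        if magic != b"GGUF":
            raise ValueError("not a GGUF file (magic=%r)" % magic)
        (version,) = struct.unpack("<I", f.read(4))
    return version

# Self-check against a synthetic 8-byte header (version 3).
tmp = tempfile.NamedTemporaryFile(delete=False, suffix=".gguf")
tmp.write(b"GGUF" + struct.pack("<I", 3))
tmp.close()
demo_version = read_gguf_version(tmp.name)
os.unlink(tmp.name)
```

    Handy for catching a truncated or mislabeled download before `ollama create` ever sees it.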

    🔹 Under-the-hood polish

    • Bug fixes and stability tweaks (you won't see them, but you'll feel the smoother run).

    💡 Why care?

    If you're experimenting with lightweight Qwen3 variants or love the flexibility of GGUF (quantized, portable, efficient 📦), this update makes your workflow just a little more magical. ✨

    Ready to upgrade? On Linux, rerun the install script: `curl -fsSL https://ollama.com/install.sh | sh` – or grab the latest installer from ollama.com 🚀

    Let us know how it runs!

    🔗 View Release

  • Voxtral Wyoming – v0.5.0

    _New update detected._

    🔗 View Release

  • Voxtral Wyoming – v0.4.0

    _New update detected._

    🔗 View Release

  • Lemonade – v9.4.1

    _New update detected._

    🔗 View Release

  • Voxtral Wyoming – v0.3.0

    _New update detected._

    🔗 View Release

  • Ollama – v0.17.4

    🚀 Ollama v0.17.4 is live! Here's what's fresh in this patch release:

    🔹 Stable Tool Calling for GLM-4 & Qwen3

    ✅ Reliable tool/function calling support – no more misaligned or garbled tool outputs!

    ✅ Works seamlessly with `curl`, Python clients, and custom tools via the Ollama API.
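    Via the HTTP API, a tool-enabled chat request is just a JSON body POSTed to Ollama's `/api/chat` endpoint. A sketch of assembling one (the model name and the `get_weather` tool are made up for illustration):

```python
import json

def build_chat_request(model, prompt, tools):
    """Assemble a JSON body for POSTing to Ollama's /api/chat endpoint."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "tools": tools,
        "stream": False,
    }

# Illustrative OpenAI-style function schema, which the tools field accepts.
weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}

body = json.dumps(build_chat_request("qwen3", "Weather in Paris?", [weather_tool]))
# POST `body` to http://localhost:11434/api/chat (curl, urllib, requests, ...)
```

    The same body works from `curl -d "$body" http://localhost:11434/api/chat` or any HTTP client.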

    🔹 Better JSON & Parser Handling

    🧠 Internal upgrades to model parsers – especially for Chinese-language models (GLM, Qwen).

    📊 More consistent parsing of JSON-formatted tool responses.
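    On the way back, tool calls arrive inside the response's `message.tool_calls` list, with `arguments` already parsed into a JSON object rather than a string. A small extraction sketch; the sample response below is hand-written to show the shape, not captured from a real run:

```python
def extract_tool_calls(response):
    """Return (name, arguments) pairs from an Ollama chat response dict."""
    calls = response.get("message", {}).get("tool_calls") or []
    return [(c["function"]["name"], c["function"]["arguments"]) for c in calls]

# Hand-written sample illustrating the expected shape:
sample = {
    "message": {
        "role": "assistant",
        "content": "",
        "tool_calls": [
            {"function": {"name": "get_weather", "arguments": {"city": "Paris"}}}
        ],
    }
}
```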

    🔹 Minor Fixes & Tweaks

    ⚙️ Performance bumps, bug fixes, and general polish – zero breaking changes.

    Perfect for anyone relying on structured outputs or tool integrations with local LLMs. Try it out and let us know how your tool-calling workflows feel! 🛠️✨

    🔗 View Release

  • Ollama – v0.17.3: model: fix qwen3 tool calling in thinking (#14477)

    🚨 Ollama v0.17.3 is live – and it's fixing a big one for Qwen3 fans! 🎯

    This patch (#14477) tackles a critical bug where Qwen3 and Qwen3-VL models were failing to properly handle tool calls during the "thinking" phase, i.e. before the closing `</think>` tag.

    🔧 What's fixed?

    ✅ Tool-call detection now works mid-think: the model correctly spots `<tool_call>` (the tool-call start tag) while still in thinking mode and smoothly transitions into tool parsing – matching Hugging Face Transformers behavior.

    ✅ Robust tag parsing: handles overlapping or partial tags (e.g., `<tool_call>` appearing before `</think>`) without breaking.

    ✅ Streaming-safe: works reliably even when `<tool_call>` is split across chunks in streaming responses.
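    To see why the streaming case is tricky: `<tool_call>` can land half in one chunk and half in the next, so a parser has to hold back any suffix that might be the start of a tag instead of emitting it as text. A toy illustration of the idea (not Ollama's actual parser):

```python
TAG = "<tool_call>"

class StreamingTagDetector:
    """Toy detector: find TAG even when it is split across stream chunks."""

    def __init__(self):
        self.buf = ""          # held-back text that might start a partial tag
        self.in_tool_call = False
        self.text = ""         # plain (thinking) text emitted so far
        self.payload = ""      # everything after the tag

    def feed(self, chunk):
        self.buf += chunk
        if self.in_tool_call:
            self.payload += self.buf
            self.buf = ""
            return
        idx = self.buf.find(TAG)
        if idx != -1:
            self.text += self.buf[:idx]
            self.payload += self.buf[idx + len(TAG):]
            self.buf = ""
            self.in_tool_call = True
        else:
            # Emit everything except a trailing run that could be a partial tag.
            safe = max(0, len(self.buf) - (len(TAG) - 1))
            self.text += self.buf[:safe]
            self.buf = self.buf[safe:]

d = StreamingTagDetector()
d.feed("Let me check. <tool_")        # tag split across two chunks
d.feed('call>{"name": "get_weather"}')
```

    Naive chunk-by-chunk substring matching misses exactly this split-tag case, which is what the release fixed.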

    🧠 Why you'll care:

    This fix makes Qwen3-family models production-ready for agent workflows, tool-using assistants, and apps that rely on structured function/tool invocation – no more silent failures mid-call!

    📦 Update now:

    ```bash
    ollama pull qwen3      # text models
    ollama pull qwen3-vl   # vision-language variants
    ```

    Happy tool-calling! 🛠️✨

    🔗 View Release

  • Ollama – v0.17.2

    🚨 Ollama v0.17.2 is live! 🚨

    Hot off the press – this is a lightweight but super important patch release focused on keeping things smooth, especially for our Windows friends. 💻✨

    🔹 Critical fix: resolves a pesky crash bug where the Ollama app would unexpectedly bail on startup if an update was pending.

    ✅ Now updates flow seamlessly – no more "why won't it open?!" moments.

    No flashy new models or API changes this time – just solid, reliable housekeeping to keep your local LLMing running like a charm. 🛠️✨

    Upgrade soon and say goodbye to launch-day surprises! 🎉

    🔗 View Release

  • ComfyUI – v0.15.1

    🚨 ComfyUI v0.15.1 is live! 🚨

    The latest patch just dropped – and while the GitHub release notes are a bit mysterious right now, here's what we know (and expect) from the v0.15.x lineage:

    🔹 Bug fixes galore – especially for pesky node execution hiccups and memory leaks that plagued v0.15.0

    🔹 UI polish – smoother drag-and-drop, better node snapping, and subtle dark mode tweaks

    🔹 Speed boosts – optimized graph execution for heavy workflows (looking at you, multi-pass upscalers 😅)

    🔹 Tech stack updates – better PyTorch 2.1+ compatibility, ONNX tweaks, and CUDA support refinements

    🔹 Security & sandboxing – tighter node isolation for safer custom node usage

    💡 Pro tip: if you're on v0.15.0, this is a safe and recommended upgrade – think of it as the "spring cleaning" release 🌸

    🔗 Grab it now: ComfyUI v0.15.1

    💬 Want the real changelog? Check the commit history on the ComfyUI GitHub repo 🫖

    Happy prompting, folks! 🎨✨

    🔗 View Release