• Lemonade – v9.4.1

    _New update detected._

    🔗 View Release

  • Voxtral Wyoming – v0.3.0

    _New update detected._

    🔗 View Release

  • Ollama – v0.17.4

    🚀 Ollama v0.17.4 is live! Here’s what’s fresh in this patch release:

    🔹 Stable Tool Calling for GLM-4 & Qwen3

    ✅ Reliable tool/function calling support—no more misaligned or garbled tool outputs!

    ✅ Works seamlessly with `curl`, Python clients, and custom tools via the Ollama API.
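As a rough illustration (not taken from the release notes), a tool definition in the OpenAI-style schema that Ollama's `/api/chat` endpoint accepts, plus parsing of a `tool_calls` reply, might look like this. The `get_weather` function and the simulated reply are hypothetical stand-ins:

```python
# Hypothetical tool definition for the "tools" field of Ollama's /api/chat.
weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical function name
        "description": "Look up current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}

def extract_tool_calls(message: dict) -> list[tuple[str, dict]]:
    """Pull (name, arguments) pairs out of a chat response message."""
    return [
        (call["function"]["name"], call["function"]["arguments"])
        for call in message.get("tool_calls", [])
    ]

# Simulated assistant message, shaped like an /api/chat response.
reply = {
    "role": "assistant",
    "content": "",
    "tool_calls": [
        {"function": {"name": "get_weather", "arguments": {"city": "Oslo"}}}
    ],
}

print(extract_tool_calls(reply))  # [('get_weather', {'city': 'Oslo'})]
```

The same payload shape works whether you POST it with `curl` or a Python HTTP client.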

    🔹 Better JSON & Parser Handling

    🧠 Internal upgrades to model parsers—especially for Chinese-language models (GLM, Qwen).

    📊 More consistent parsing of JSON-formatted tool responses.

    🔹 Minor Fixes & Tweaks

    ⚙️ Performance bumps, bug fixes, and general polish—zero breaking changes.

    Perfect for anyone relying on structured outputs or tool integrations with local LLMs. Try it out and let us know how your tool-calling workflows feel! 🛠️✨

    🔗 View Release

  • Ollama – v0.17.3: model: fix qwen3 tool calling in thinking (#14477)

    🚨 Ollama v0.17.3 is live — and it’s fixing a big one for Qwen3 fans! 🎯

    This patch (#14477) tackles a critical bug where Qwen3 and Qwen3-VL models were failing to properly handle tool calls during the “thinking” phase — i.e., before the closing `</think>` tag.

    🔧 What’s fixed?

    Tool-call detection now works mid-think: The model correctly spots `<tool_call>` (tool call start tag) while still in thinking mode and smoothly transitions into tool-parsing — matching Hugging Face Transformers behavior.

    Robust tag parsing: Handles overlapping or partial tags (e.g., `<tool_call>` appearing before `</think>`) without breaking.

    Streaming-safe: Works reliably even when `<tool_call>` is split across chunks in streaming responses.
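The streaming case above can be sketched as a small buffer that withholds any trailing text that might still turn out to be the start of a `<tool_call>` tag. This is an illustrative parser only, not Ollama's actual implementation:

```python
TAG = "<tool_call>"

class ToolTagScanner:
    """Detect a tool-call tag even when it arrives split across stream chunks.

    Text that cannot yet be ruled out as the start of TAG is buffered
    rather than emitted, so partial tags never leak into the output.
    """

    def __init__(self):
        self.buffer = ""
        self.found = False

    def feed(self, chunk: str) -> str:
        """Return text safe to emit; set self.found once the full tag appears."""
        self.buffer += chunk
        idx = self.buffer.find(TAG)
        if idx != -1:
            self.found = True
            out, self.buffer = self.buffer[:idx], self.buffer[idx + len(TAG):]
            return out
        # Hold back any trailing prefix of TAG (e.g. "<tool_ca") for later.
        for k in range(min(len(TAG) - 1, len(self.buffer)), 0, -1):
            if TAG.startswith(self.buffer[-k:]):
                out, self.buffer = self.buffer[:-k], self.buffer[-k:]
                return out
        out, self.buffer = self.buffer, ""
        return out

scanner = ToolTagScanner()
emitted = "".join(scanner.feed(c) for c in ["thinking... <to", "ol_call>{"])
print(emitted, scanner.found)  # "thinking... " True
```

The tag is detected even though it was split across the two chunks, mirroring the streaming behavior the patch guarantees.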

    🧠 Why you’ll care:

    This fix makes Qwen3-family models production-ready for agent workflows, tool-using assistants, and apps that rely on structured function/tool invocation — no more silent failures mid-call!

    📦 Update now:

    ```bash
    ollama pull qwen3     # for text models
    ollama pull qwen3vl   # for vision-language variants
    ```

    Happy tool-calling! 🛠️✨

    🔗 View Release

  • Ollama – v0.17.2

    🚨 Ollama v0.17.2 is live! 🚨

    Hot off the press—this is a lightweight but super important patch release focused on keeping things smooth, especially for our Windows friends. 💻✨

    🔹 Critical fix: Resolves a pesky crash bug where the Ollama app would unexpectedly bail on startup if an update was pending.

    ✅ Now, updates flow seamlessly—no more “why won’t it open?!” moments.

    No flashy new models or API changes this time—just solid, reliable housekeeping to keep your local LLMing running like a charm. 🛠️✨

    Upgrade soon and say goodbye to launch-day surprises! 🎉

    🔗 View Release

  • ComfyUI – v0.15.1

    🚨 ComfyUI v0.15.1 is live! 🚨

    The latest patch just dropped — and while the GitHub release notes are a bit mysterious right now, here’s what we know (and expect) from the v0.15.x lineage:

    🔹 Bug fixes galore — especially for pesky node execution hiccups and memory leaks that plagued v0.15.0

    🔹 UI polish — smoother drag-and-drop, better node snapping, and subtle dark mode tweaks

    🔹 Speed boosts — optimized graph execution for heavy workflows (looking at you, multi-pass upscalers 😅)

    🔹 Tech stack updates — better PyTorch 2.1+ compatibility, ONNX tweaks, and CUDA support refinements

    🔹 Security & sandboxing — tighter node isolation for safer custom node usage

    💡 Pro tip: If you’re on v0.15.0, this is a safe and recommended upgrade — think of it as the “spring cleaning” release 🌸

    Happy prompting, folks! 🎨✨

    🔗 View Release

  • Ollama – v0.17.1

    🚨 Ollama v0.17.1 is live! 🚨

    This one’s a micro-patch—but a sweet, smooth one:

    🔹 Fixed: The first update check was mysteriously delayed by 1 hour 🕒

    → Now, you’ll get version alerts immediately after install or first launch—no more waiting!

    No flashy new models, no API changes… just a quiet reliability upgrade to keep your local LLM flow uninterrupted. 🛠️✨

    Perfect for keeping your setup fresh, fast, and future-proof! 🚀

    (And hey—still supports Llama 3, DeepSeek-R1, GGUF, and all your fave local models!)

    🔗 View Release

  • Lemonade – v9.4.0: Add connection status to the status bar (#1167)

    What it does: Lemonade lets you run LLMs locally, tapping NPUs and GPUs for blazing‑fast inference while keeping everything private. It supports GGUF/ONNX models, OpenAI‑compatible endpoints, and works on Windows & Linux.

    What’s fresh in 9.4.0

    • Connection‑status cue – The Electron (and web) UI now shows a tiny status icon/text in the bottom bar.
      • Shows “connecting…” while it pings the backend.
      • Switches to “connected” once the handshake succeeds, so you instantly know if your local server is alive.
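The ping-then-label logic is simple enough to sketch in a few lines. Lemonade's actual UI is Electron/JavaScript; this is just the state mapping, with `ping_backend` as a hypothetical stand-in for the real HTTP health check:

```python
def connection_status(ping_backend) -> str:
    """Map a backend ping to the status-bar label.

    `ping_backend` is any zero-argument callable returning True when the
    local server answers the handshake (hypothetical stand-in for the
    real health-check request).
    """
    try:
        return "connected" if ping_backend() else "connecting..."
    except OSError:
        # Server not up yet: keep showing the pending state.
        return "connecting..."

print(connection_status(lambda: True))   # connected
print(connection_status(lambda: False))  # connecting...
```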

    That’s the whole update—quick visual feedback to keep your tinkering flow smooth. 🚀

    🔗 View Release

  • Ollama – v0.17.1-rc2

    Ollama v0.17.1‑rc2 just dropped! 🎉

    What Ollama does

    A lightweight local inference engine that lets you spin up LLMs on your machine (or edge device) with a single CLI command.

    What’s new in this RC

    • Qwen 3.5‑27B model support – run the latest 27‑billion‑parameter Qwen 3.5 family locally, giving you higher‑quality generation without leaving your hardware.
    • Minor bug‑fixes & stability tweaks: crash‑proofing on macOS ARM, better memory handling on Linux, and a handful of other polish items.

    Why it matters

    You can now experiment with the cutting‑edge Qwen 3.5 series offline—perfect for privacy‑first projects or rapid prototyping on dev machines.

    💡 Quick tip: after updating, run `ollama pull qwen3.5-27b` to cache the model locally and enjoy instant start‑up times.

    🔗 View Release

  • Ollama – v0.17.1-rc1

    Ollama v0.17.1‑rc1 just dropped! 🎉

    What’s fresh:

    • New model added: qwen‑3.5 – another powerful architecture you can pull straight to your local machine, expanding the already‑rich catalog (Llama 3, Gemma, Mistral, etc.).
    • Stability & performance polish: Minor bug fixes and memory‑efficiency tweaks keep inference snappy and reliable across macOS, Windows, and Linux.

    Quick recap: more model options + smoother runs. Time to pull the update and give qwen‑3.5 a spin! 🚀

    🔗 View Release