• Lemonade – v9.3.4

    🚨 Lemonade v9.3.4 just dropped! 🍋

    This update brings a subtle but mighty improvement for hardware-savvy folks:

    ✅ XDNA2 NPU detection is now PCI-based – no more relying on a flaky CPU-name regex!

    🔧 Switched to PCI device ID matching (PR #1154), making detection way more reliable – especially on custom, embedded, or non-standard systems.
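
    For the curious, detection of this kind boils down to scanning the PCI bus for a known vendor/device pair instead of parsing CPU model strings. Below is a minimal Linux sketch of the idea; the device ID and helper name are illustrative placeholders, not Lemonade's actual values – see PR #1154 for the real implementation.

    ```python
    from pathlib import Path

    # Illustrative IDs only – not the values Lemonade uses.
    AMD_VENDOR_ID = "0x1022"        # AMD's PCI vendor ID
    NPU_DEVICE_IDS = {"0x17f0"}     # placeholder XDNA2 device ID(s)

    def has_xdna2_npu() -> bool:
        """Return True if a matching PCI device is present (Linux sysfs)."""
        for dev in Path("/sys/bus/pci/devices").glob("*"):
            try:
                vendor = (dev / "vendor").read_text().strip()
                device = (dev / "device").read_text().strip()
            except OSError:
                continue
            if vendor == AMD_VENDOR_ID and device in NPU_DEVICE_IDS:
                return True
        return False

    print("XDNA2 NPU detected:", has_xdna2_npu())
    ```

    Matching on stable PCI IDs sidesteps the problem of CPU marketing names changing between SKUs or being absent entirely on embedded boards.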

    💡 Why it matters:

    • Fewer false negatives/positives when detecting XDNA2 hardware
    • A more future-proof detection path for upcoming XDNA-based NPUs and board variants
    • Cleaner, more maintainable code under the hood

    Perfect for devs testing on niche hardware or prepping for next-gen NPU-powered LLM inference. 🚀

    Check the repo – and let us know if you spot any quirks! 🧪

    🔗 View Release

  • ComfyUI – v0.14.2: fix: use glob matching for Gemini image MIME types (#12511)

    🚨 ComfyUI v0.14.2 is out – and it's fixing a sneaky Gemini image bug! 🚨

    This patch resolves a critical issue where Gemini API responses in `image/jpeg` format were silently discarded, resulting in black (all-zero) images instead of the expected output. 😬

    ✅ What's Fixed & Improved:

    • 🌐 Glob-style MIME matching added via a `_mime_matches()` helper (using `fnmatch`) – see the sketch after this list
    • 🔄 `get_image_from_response()` now accepts any image format (`"image/*"`), not just `image/png`
    • 📦 Supports both `image/png` and `image/jpeg`, plus future image types – no more silent failures!
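
    A rough sketch of the idea (simplified, not the PR's exact code): `fnmatch` lets an accept pattern like `"image/*"` match any concrete image MIME type, so JPEG parts are no longer dropped.

    ```python
    from fnmatch import fnmatch

    def _mime_matches(mime_type: str, pattern: str) -> bool:
        """Glob-style MIME comparison, e.g. 'image/jpeg' matches 'image/*'."""
        return fnmatch(mime_type.lower(), pattern.lower())

    # Keep any image part instead of hard-coding image/png.
    parts = [
        {"mime_type": "image/png", "data": b"..."},
        {"mime_type": "image/jpeg", "data": b"..."},
    ]
    images = [p for p in parts if _mime_matches(p["mime_type"], "image/*")]
    print([p["mime_type"] for p in images])  # both formats are kept
    ```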

    💡 Why you'll care: If your workflow leans on Gemini for image generation or processing (e.g., multimodal prompts), this update ensures reliable JPEG outputs – no more black squares!

    🔗 View PR #12511

    – Tagged by @huntcsg, 18 Feb 05:07

    🔗 View Release

  • Lemonade – v9.3.3

    🚨 Lemonade v9.3.3 is live! 🍋

    This patch drops just one critical fix – but it's a big one for server users:

    🔧 Fixed `lemonade-server` status bug

    No more misleading or broken status reports – the server's health and readiness checks should now behave as expected. 🛠️

    Tagged on Feb 18 at 02:41 UTC by `jeremyfower`, this is a lean, targeted update – perfect for those who like their LLMs fast and reliable.

    If you're self-hosting with `lemonade-server`, definitely pull this in! 🚀

    Want the nitty-gritty on what the bug was? The commit history has the details. 😎

    🔗 View Release

  • Ollama – v0.16.3-rc1

    🚀 Ollama v0.16.3-rc1 is here!

    A small but slick release candidate just dropped – and it's all about polishing the dev experience, especially for editor integrations. Here's what's new:

    🔹 TUI defaults to single-select mode

    → When using Ollama with editors like VS Code or Neovim (via `ollama run`), the terminal UI now automatically switches to single-select instead of multi-select.

    → Why? Fewer accidental model swaps and cleaner workflows – especially handy when scripting or debugging.

    🔍 No flashy new models or API changes this time, but this tweak makes daily use smoother and more predictable. Since it's an `rc1`, expect final polish before the stable `v0.16.3` lands.

    📦 Binaries for macOS & Linux are being uploaded (check the releases page soon!).

    🛠️ Pro tip: If you live in your terminal or use LLMs inline in your editor – this one's for you.

    Curious to test it? Grab the RC and let us know how it feels! 🧪✨

    🔗 View Release

  • Tater – Tater v57

    🚨 Tater v57 – Cerberus Complete is LIVE! 🚨

    The AI assistant that talks to any OpenAI-compatible LLM just got a massive brain upgrade – and it's ready to work smarter, not harder. 🧠⚡

    🔥 Meet Cerberus – the 3-Headed Brain

    Tater now runs on a true multi-agent architecture:

    🧠 Planner → figures out what needs doing

    🛠️ Doer → executes the plan with precision

    🔍 Checker → reviews, refines, and validates before sending anything off
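
    Conceptually, the loop looks something like the minimal sketch below – an illustrative plan/do/check skeleton, not Tater's actual code (the agent, tool, and LLM plumbing are placeholders).

    ```python
    # Illustrative plan -> do -> check loop; not Tater's real implementation.
    from dataclasses import dataclass, field

    @dataclass
    class Task:
        goal: str
        notes: list = field(default_factory=list)

    def planner(task: Task) -> list:
        """Break the goal into ordered steps (a real agent would ask the LLM)."""
        return [f"research: {task.goal}", f"draft answer for: {task.goal}"]

    def doer(step: str) -> str:
        """Execute one step, e.g. by calling a tool or the LLM."""
        return f"result of '{step}'"

    def checker(results: list) -> bool:
        """Validate the combined results before anything is sent to the user."""
        return all(results)

    def run(task: Task, max_retries: int = 2) -> list:
        for _ in range(max_retries + 1):
            results = [doer(step) for step in planner(task)]
            if checker(results):
                return results          # validated output goes out
            task.notes.append("retry")  # self-recovery: re-plan instead of looping forever
        raise RuntimeError("Cerberus could not validate a result")

    print(run(Task("summarize the release notes")))
    ```

    Separating the three roles is what makes tool calls intentional and lets the checker catch a bad result before it ever reaches the chat.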

    ✅ Why You'll Love This:

    • 🎯 No more chaotic tool spam – Cerberus picks tools intentionally, not randomly
    • 🛡️ Self-recovery when things go sideways (no more stuck loops!)
    • 🧹 Cleaner, leaner prompts – perfect for local models & low-resource setups
    • 📦 Plugins (vision, weather, HA control, etc.) now integrate consistently across all platforms
    • ⏱️ Scheduled & long-running tasks? Way more reliable – no accidental re-scheduling

    🗑️ Cleanup & Stability:

    • Agent Lab authoring removed (to keep the core tight & fast), but `/agent_lab` still lives as a working dir for logs, downloads, and docs.

    This isn't just an update – it's the stable foundation for everything coming next: learning, refinement, long-horizon reasoning… the future is Cerberus-powered. 🐉

    🔗 Grab it now: Tater v57 on GitHub

    Let us know how your Cerberus brain behaves! 🤖✨

    🔗 View Release

  • Ollama – v0.16.3-rc0

    🚨 Ollama v0.16.3-rc0 is here! 🚨

    Big news for Apple Silicon users: Qwen3 model support has landed in `mlxrunner`! 🍏⚡

    ✅ Qwen3 models now run natively on M1/M2/M3 Macs via Ollama's MLX backend – no CUDA, no hassle.

    🧠 Alibaba's latest Qwen3 brings stronger multilingual skills and sharper reasoning, making it a serious contender for local LLM workloads.
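
    If you want to kick the tires from Python once the RC is installed, the `ollama` client library works as usual (the exact model tag below is an assumption – use whichever Qwen3 tag you pulled):

    ```python
    # Quick local smoke test via the ollama Python client (pip install ollama).
    # Assumes the RC is running and a Qwen3 model has been pulled, e.g. `ollama pull qwen3`.
    import ollama

    response = ollama.chat(
        model="qwen3",
        messages=[{"role": "user", "content": "Summarize what an NPU is in one sentence."}],
    )
    print(response["message"]["content"])
    ```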

    That's the headline – this RC is light on changes but heavy on potential 🎯

    Stable drop's coming soon… in the meantime, go test those Qwen3 models! 🧪✨

    🔗 View Release

  • ComfyUI – v0.14.1

    🚨 ComfyUI v0.14.1 is out! 🚨

    The latest patch is here – and while the GitHub release page is currently having trouble loading (we're hoping it gets fixed soon 🤞), here's what we expect based on typical patch releases like this:

    🔹 Bug fixes – especially addressing pesky regressions from `v0.14.0`

    🔹 UI/UX polish – think smoother node dragging, cleaner error popups, maybe a layout tweak or two

    🔹 Performance tweaks – smarter caching, lighter memory footprints, faster node execution

    🔹 Dependency updates – safer, more compatible versions of key Python packages under the hood

    🔹 Accessibility & locale improvements – better support for international users and screen readers

    💡 Bonus: If you're curious about the exact changes, run:

    ```bash
    git log v0.14.0..v0.14.1 --oneline
    ```

    …or keep an eye on the Releases page – fingers crossed it loads soon!

    Happy prompting, everyone! 🧠✨

    🔗 View Release

  • ComfyUI – v0.14.0

    🚀 ComfyUI v0.14.0 is out – and it's packing some serious upgrades!

    Here's what's new in this fresh release 🌟:

    • 🧠 Smarter Custom Node Support: Improved loading, error handling, and compatibility – especially for nodes using dynamic imports or `folder_paths` (see the minimal node sketch after this list). Fewer crashes, more creativity!
    • 🧩 Better Node Discovery & Management: Early groundwork for a built-in node registry + tighter integration with tools like `comfyui-manager`. Soon, installing & updating nodes might feel almost too easy 😎
    • 🎨 UI/UX Polish: Smoother zoom/pan, snappier node layout rendering, and refined dark/light theme consistency. Your workflow just got more pleasant.
    • 📦 Dependency Fixes: Cleaner handling of optional backends like `xformers`, `bitsandbytes`, and CUDA builds – with smarter fallbacks when things go sideways.
    • 🚀 Speed Boosts: Faster graph execution, especially for large or batched workflows. Less waiting, more generating!
    • 🛠️ CLI & Headless Mode Love: Enhanced scripting support and API stability – perfect for automation, CI/CD pipelines, or headless servers.
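
    For context on the custom-node point above, here is a minimal node following ComfyUI's usual `NODE_CLASS_MAPPINGS` convention – a generic illustration of the pattern, not code from this release:

    ```python
    # Minimal ComfyUI custom node using the standard NODE_CLASS_MAPPINGS convention.
    # Generic illustration only – not code shipped in v0.14.0.
    import folder_paths  # ComfyUI helper that resolves configured model folders


    class ListCheckpointsNode:
        @classmethod
        def INPUT_TYPES(cls):
            return {"required": {}}

        RETURN_TYPES = ("STRING",)
        FUNCTION = "run"
        CATEGORY = "examples"

        def run(self):
            # Ask folder_paths for everything in the configured checkpoints folder.
            names = folder_paths.get_filename_list("checkpoints")
            return (", ".join(names),)


    NODE_CLASS_MAPPINGS = {"ListCheckpointsNode": ListCheckpointsNode}
    NODE_DISPLAY_NAME_MAPPINGS = {"ListCheckpointsNode": "List Checkpoints (example)"}
    ```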

    💡 Bonus: If you rely on popular custom nodes (IPAdapter, ControlNet helpers, etc.), this release likely means fewer compatibility headaches and more stable runs.

    👉 Check out the full changelog on GitHub or join the Discord for deep-dive threads!

    Let's build something wild with v0.14.0 🎨✨

    🔗 View Release

  • Ollama – v0.16.2: mlxrunner fixes (#14247)

    🚨 Ollama v0.16.2 is live! A focused patch packed with Apple Silicon love and stability wins 🍏⚡

    🔹 mlxrunner fixes (issue #14247):

    ✅ `glm4_moe_lite` now loads smoothly on Apple Silicon via MLX – huge for MoE fans!

    ✅ Diffusion models (like Stable Diffusion variants) finally play nice 🎨

    ✅ Logs are much quieter – no more debug spam cluttering your terminal 🧹

    ✅ `--imagegen` flag now works reliably for image generation workflows

    💡 TL;DR: Smoother Apple Silicon experience, better model compatibility (especially GLM & diffusion), and cleaner output – all without flashy new features. Just solid, reliable improvements! 🛠️

    Grab the update and keep local LLMing! 🚀

    🔗 View Release

  • Tater – Tater v56

    🚨 Tater v56 – Cerberus Upgrade is LIVE! 🚨

    The core of Tater just got a massive intelligence overhaul – meet Cerberus, the new 3-phase reasoning engine:

    🔹 Plan → Execute → Validate – ensures smarter, step-by-step execution with fewer surprises.

    🛡️ Tool Safety Upgraded

    • Tools only fire when absolutely intended
    • Malformed or accidental calls? Nope. Not today.

    ✉️ Smarter Messaging

    • `send_message` now ignores casual chat – it only triggers on clear intent (rough sketch of the idea below)
    • “Send it here” ✅, “Hey, send that” ❌ (nope!)
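
    Very roughly, the gate only lets the tool fire when there is an explicit instruction plus a target. A toy illustration (purely hypothetical – not Tater's actual heuristics, which live in the Cerberus planner/checker):

    ```python
    # Toy intent gate for a send_message-style tool (illustrative only).
    import re

    # Require an imperative "send", an object, and a destination before firing.
    EXPLICIT_SEND = re.compile(
        r"\bsend\b.*\b(it|that|this|the \w+)\b.*\b(here|to [\w#@-]+)\b", re.I
    )

    def should_send(user_text: str) -> bool:
        """Fire only on a clear, targeted instruction; ignore casual mentions of 'send'."""
        return bool(EXPLICIT_SEND.search(user_text))

    print(should_send("Send it here"))    # True  -> tool call allowed
    print(should_send("Hey, send that"))  # False -> no destination, treated as chatter
    ```

    In practice an agent would more likely ask the LLM itself to classify intent, but the gating principle is the same.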

    🧠 Context That Actually Makes Sense

    • Clean, scoped memory per conversation
    • “Do that again?” → Works now
    • Topic shifts reset context intelligently (no more weird carryover!)

    📊 Behind-the-Curtain Wins

    ✅ Smoother error handling & retries

    ✅ Reduced token bloat = lower costs & faster responses

    ✅ More deterministic behavior (yes, predictable LLMs are a thing now)

    🎯 What This Means for You

    ✔️ Reliable multi-step workflows (finally!)

    ✔️ Fewer “why did it do that?!” moments

    ✔️ Natural, fluid follow-ups

    ✔️ A rock-solid foundation for future agent smarts

    🔮 What's Next?

    Cerberus sets the stage for long-horizon tasks, learning, and advanced agent behavior – but v56 is all about stability, safety, and raw, reliable power.

    🐶 Cerberus is awake. 🐶🔥

    👉 Check the README to upgrade & explore!

    🔗 View Release