• ComfyUI – v0.14.1

    🚨 ComfyUI v0.14.1 is out! 🚨

    The latest patch is here – and while the GitHub release page is currently having trouble loading (we’re hoping it gets fixed soon 🤞), here’s what we expect based on typical patch releases like this:

    🔹 Bug fixes – especially addressing pesky regressions from `v0.14.0`

    🔹 UI/UX polish – think smoother node dragging, cleaner error popups, maybe a layout tweak or two

    🔹 Performance tweaks – smarter caching, lighter memory footprints, faster node execution

    🔹 Dependency updates – safer, more compatible versions of key Python packages under the hood

    🔹 Accessibility & locale improvements – better support for international users and screen readers

    💡 Bonus: If you’re curious about the exact changes, run:

    ```bash
    git log v0.14.0..v0.14.1 --oneline
    ```

    …or keep an eye on the Releases page – fingers crossed it loads soon!

    Happy prompting, everyone! 🧠✨

    🔗 View Release

  • ComfyUI – v0.14.0

    🚀 ComfyUI v0.14.0 is out – and it’s packing some serious upgrades!

    Here’s what’s new in this fresh release 🌟:

    • 🧠 Smarter Custom Node Support: Improved loading, error handling, and compatibility – especially for nodes using dynamic imports or `folder_paths`. Fewer crashes, more creativity!
    • 🧩 Better Node Discovery & Management: Early groundwork for a built-in node registry + tighter integration with tools like `comfyui-manager`. Soon, installing & updating nodes might feel almost too easy 😎
    • 🎨 UI/UX Polish: Smoother zoom/pan, snappier node layout rendering, and refined dark/light theme consistency. Your workflow just got more pleasant.
    • 📦 Dependency Fixes: Cleaner handling of optional backends like `xformers`, `bitsandbytes`, and CUDA builds – with smarter fallbacks when things go sideways.
    • 🚀 Speed Boosts: Faster graph execution, especially for large or batched workflows. Less waiting, more generating!
    • 🛠️ CLI & Headless Mode Love: Enhanced scripting support and API stability – perfect for automation, CI/CD pipelines, or headless servers.

    💡 Bonus: If you rely on popular custom nodes (IPAdapter, ControlNet helpers, etc.), this release likely means fewer compatibility headaches and more stable runs.

    👉 Check out the full changelog on GitHub or join the Discord for deep-dive threads!

    Let’s build something wild with v0.14.0 🎨✨

    🔗 View Release

  • Ollama – v0.16.2: mlxrunner fixes (#14247)

    🚨 Ollama v0.16.2 is live! – A focused patch packed with Apple Silicon love and stability wins 🍎⚡

    🔹 mlxrunner fixes (issue #14247):

    ✅ `glm4_moe_lite` now loads smoothly on Apple Silicon via MLX – huge for MoE fans!

    ✅ Diffusion models (like Stable Diffusion variants) finally play nice 🎨

    ✅ Logs are much quieter – no more debug spam cluttering your terminal 🧹

    ✅ `--imagegen` flag now works reliably for image generation workflows

    💡 TL;DR: Smoother Apple Silicon experience, better model compatibility (especially GLM & diffusion), and cleaner output – all without flashy new features. Just solid, reliable improvements! 🛠️

    Grab the update and keep local LLMing! 🚀

    🔗 View Release

  • Tater – Tater v56

    🚨 Tater v56 – Cerberus Upgrade is LIVE! 🚨

    The core of Tater just got a massive intelligence overhaul – meet Cerberus, the new 3-phase reasoning engine:

    🔹 Plan → Execute → Validate – ensures smarter, step-by-step execution with fewer surprises.
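
    To make the three phases concrete, here is a minimal sketch of the general plan/execute/validate pattern – this is not Cerberus’s actual code, and the step and function names are invented for illustration:

    ```python
    # Illustrative sketch of a Plan -> Execute -> Validate loop.
    # NOT Tater's actual Cerberus implementation - just a minimal model of
    # the three-phase pattern, with made-up step names.

    def plan(goal):
        """Break a goal into ordered steps (trivially hard-coded here)."""
        return [f"step {i + 1} of {goal}" for i in range(3)]

    def execute(step):
        """Run one step and return its result."""
        return {"step": step, "ok": True}

    def validate(result):
        """Check a step's result before moving on; retries would hook in here."""
        return result["ok"]

    def run(goal):
        results = []
        for step in plan(goal):
            result = execute(step)
            if not validate(result):
                break  # a real engine would retry or re-plan here
            results.append(result)
        return results

    print(len(run("demo")))  # 3 - every step passed validation
    ```

    The key property is that validation gates each step, so a bad tool result stops the chain instead of silently propagating.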

    ๐Ÿ›ก๏ธ Tool Safety Upgraded

    • Tools only fire when absolutely intended
    • Malformed or accidental calls? Nope. Not today.

    โœ‰๏ธ Smarter Messaging

    • `send_message` now ignores casual chat โ€” only triggers on clear intent
    • “Send it here” โœ…, “Hey, send that” โŒ (nope!)
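
    As a rough illustration of intent gating (the trigger patterns below are invented; the release notes don’t describe Tater’s real classifier):

    ```python
    # Hypothetical sketch of intent gating for a send_message-style tool.
    # The trigger phrases are invented for illustration only.
    import re

    EXPLICIT_PATTERNS = [
        r"\bsend (it|this|that) (here|to)\b",  # explicit target given
        r"\bpost (it|this) in\b",
    ]

    def has_clear_send_intent(text):
        lowered = text.lower()
        return any(re.search(p, lowered) for p in EXPLICIT_PATTERNS)

    print(has_clear_send_intent("Send it here"))    # True
    print(has_clear_send_intent("Hey, send that"))  # False - no target
    ```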

    🧠 Context That Actually Makes Sense

    • Clean, scoped memory per conversation
    • “Do that again?” → works now
    • Topic shifts reset context intelligently (no more weird carryover!)
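
    The idea of scoped, resettable memory can be sketched as follows – an illustrative data structure, not Tater’s internals:

    ```python
    # Minimal sketch of per-conversation scoped memory with topic-shift
    # reset. Invented for illustration - not Tater's internal structure.

    class ConversationMemory:
        def __init__(self):
            self._store = {}  # conversation id -> list of turns

        def add(self, conv_id, turn):
            self._store.setdefault(conv_id, []).append(turn)

        def context(self, conv_id):
            return list(self._store.get(conv_id, []))

        def reset(self, conv_id):
            """Called on a detected topic shift: drop stale carryover."""
            self._store[conv_id] = []

    mem = ConversationMemory()
    mem.add("chat-1", "resize the image")
    mem.add("chat-2", "what's the weather?")
    print(mem.context("chat-1"))  # ['resize the image'] - chat-2 never leaks in
    mem.reset("chat-1")           # topic changed
    print(mem.context("chat-1"))  # []
    ```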

    📊 Behind-the-Curtain Wins

    ✅ Smoother error handling & retries

    ✅ Reduced token bloat = lower costs & faster responses

    ✅ More deterministic behavior (yes, predictable LLMs are a thing now)

    🎯 What This Means for You

    ✔️ Reliable multi-step workflows (finally!)

    ✔️ Fewer “why did it do that?!” moments

    ✔️ Natural, fluid follow-ups

    ✔️ A rock-solid foundation for future agent smarts

    🔮 What’s Next?

    Cerberus sets the stage for long-horizon tasks, learning, and advanced agent behavior – but v56 is all about stability, safety, and raw, reliable power.

    🐶 Cerberus is awake. 🐶🔥

    👉 Check the README to upgrade & explore!

    🔗 View Release

  • Heretic – v1.2.0

    🚨 Heretic v1.2.0 is live – and it’s packing serious upgrades! 🚨

    The team just dropped a massive update to Heretic, the fully automatic censorship-removal tool for LLMs – this one brings performance boosts, new features, and rock-solid stability fixes. Here’s what’s new:

    🔹 Memory & Stability Wins

    • New `max_memory` setting to cap RAM usage 🧠
    • Smarter iteration logic to avoid getting stuck in low-divergence traps ⚙️
    • Magnitude-Preserving Orthogonal Ablation – finer control over refusal suppression 🎯
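
    For intuition, the general idea behind magnitude-preserving orthogonal ablation is: remove a vector’s component along a “refusal direction”, then rescale so its norm is unchanged. The sketch below shows that generic math on a single vector – it is not Heretic’s implementation:

    ```python
    # Generic sketch of magnitude-preserving orthogonal ablation on one
    # vector: project out a "refusal direction" d, then rescale so the
    # norm matches the original. Not Heretic's code - just the math.
    import math

    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    def norm(a):
        return math.sqrt(dot(a, a))

    def ablate_preserving_magnitude(v, d):
        d_hat = [x / norm(d) for x in d]                      # unit direction
        coeff = dot(v, d_hat)                                 # component along d
        stripped = [x - coeff * y for x, y in zip(v, d_hat)]  # orthogonal part
        scale = norm(v) / norm(stripped)                      # restore magnitude
        return [x * scale for x in stripped]

    v = [3.0, 4.0]
    d = [1.0, 0.0]
    out = ablate_preserving_magnitude(v, d)
    print(round(dot(out, d), 6))  # 0.0 - no component along d remains
    print(round(norm(out), 6))    # 5.0 - original magnitude preserved
    ```

    (A real implementation would guard against vectors parallel to `d`, where the orthogonal remainder is zero.)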

    🔹 LoRA + Quantization Power-Ups

    • Brand-new LoRA-based abliteration engine 🪄
    • Full 4-bit quantization support – run decensoring on consumer GPUs 🖥️
    • Auto-GPU detection at startup (no more manual config headaches!) 🤖

    🔹 Resilience & Flexibility

    • Run extra optimization trials after the main run finishes 🔄
    • Save/resume support for long jobs – no more lost progress! 💾
    • Fixed MXFP4 loading issues 🛠️
    • Refactored save/load machinery = way more reliable 📦

    🔹 Vision-Language Models? Now Supported!

    • Broad support for VL models (e.g., LLaVA-style) 📸➡️🧠
    • Plus: full type checking, debug tools, slop-reduction configs, and bug fixes 🐞

    🌟 Shoutout to @noctrex, @accemlcc, @anrp, and @salmanmkc – welcome to the crew!

    🔗 Full changelog: v1.1.0 → v1.2.0

    🚀 Grab it now and keep decensoring smarter, faster, and safer!

    🔗 View Release

  • Ollama – v0.16.2-rc0: mlxrunner fixes (#14247)

    🚨 Ollama v0.16.2-rc0 is here – and it’s bringing serious Apple Silicon love! 🍎

    This release candidate focuses on fixing critical bugs in the `mlxrunner` backend (hello, M1/M2/M3 users 👋), making it far more reliable for cutting-edge models. Here’s what’s new:

    🔹 GLM4-MoE-Lite now loads smoothly via `mlxrunner` – no more crashes on M-series!

    🔹 Diffusion models (like Stable Diffusion) finally load without hiccups 🖼️

    🔹 Cleaner logs – less noise, more signal (goodbye, endless debug spam 🧹)

    🔹 `--imagegen` flag now actually works for image generation workflows ✅

    💡 Pro tip: This is a release candidate, so it’s the perfect time to test stability before v0.16.2 drops – especially if you’re into MoE models or local image gen on Mac.

    Ready to give it a spin? 🧪 Let us know how it goes!

    🔗 View Release

  • Home Assistant Voice PE – 26.2.1

    🚀 Home Assistant Voice PE v26.2.1 is live!

    The latest update (from v25.12.4 → v26.2.1) is all about polish and reliability – perfect for anyone relying on voice control in their smart home, especially offline. Here’s what’s improved:

    ✅ Media playback is smoother & more stable

    No more skips or dropouts – your voice-triggered music, alerts, and announcements now play cleanly.

    🛠️ TTS timeouts fixed!

    Text-to-speech responses now fully render and play – no more truncated or missing voice replies. 🎙️✨

    💡 Bonus: The project is now officially sponsored by the Open Home Foundation – a big vote of confidence in its mission of private, local-first voice control. 🏡🔒

    78 releases down, more innovation ahead! 🛠️

    Check it out if you’re building or expanding a local, privacy-first voice assistant setup. 🎯

    🔗 View Release

  • Ollama – v0.16.1

    🚨 Ollama v0.16.1 is live! 🚨

    Hey AI tinkerers & local LLM lovers – fresh update incoming! 🔥

    What’s new in v0.16.1?

    🔹 New model config added: `minimax-m2.5` 🧠

    • Looks like a fresh MiniMax model variant (internal/experimental for now – keep an eye out for docs!).
    • You can already pull it via `ollama pull minimax-m2.5` if you’re feeling adventurous 🛠️
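
    If you’d rather script it than use the CLI, here’s a minimal sketch against Ollama’s standard REST endpoint (`POST /api/generate` on `localhost:11434`). Only the request is built here, so nothing below needs a running server; the model name is taken from the notes above:

    ```python
    # Minimal sketch for scripting Ollama's REST API. Builds the request
    # for POST /api/generate; uncomment the urlopen call to actually hit
    # a running `ollama serve` instance.
    import json
    from urllib import request

    def build_generate_request(model, prompt):
        payload = {"model": model, "prompt": prompt, "stream": False}
        return request.Request(
            "http://localhost:11434/api/generate",
            data=json.dumps(payload).encode(),
            headers={"Content-Type": "application/json"},
        )

    req = build_generate_request("minimax-m2.5", "Say hello in five words.")
    print(req.full_url)
    # with request.urlopen(req) as resp:   # requires a local Ollama server
    #     print(json.load(resp)["response"])
    ```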

    🔹 Lightweight patch release – no breaking changes, just lean & mean model support upgrades.

    📦 Binaries are rolling out for macOS, Windows, and Linux – grab the latest from GitHub or update via your package manager.

    👉 v0.16.1 Release Notes

    Let us know if you get `minimax-m2.5` running – curious to hear your benchmarks and use cases! 🧪✨

    🔗 View Release

  • Lemonade – v9.3.2

    🚀 Lemonade v9.3.2 is live!

    This one’s a quick but important patch – especially if you’re rocking AMD GPUs on Linux.

    🔧 What’s new/fixed:

    • ✅ Fixed incorrect path for Stable Diffusion ROCm artifacts on Linux

    → Fixes runtime hiccups and ensures proper loading of AMD GPU binaries

    → PR: #1085 | Commit: `5a382c5` (GPG verified!)

    🎯 Why it matters:

    • ROCm users on Linux can now run SD models reliably – no more path-related crashes or config headaches.
    • No breaking changes, no flashy new features… just solid, quiet reliability 🛠️

    If you’re using Lemonade with AMD/NPU/GPU acceleration on Linux – update now! 🐧✨

    Full details: lemonade-sdk/lemonade

    🔗 View Release

  • MLX-LM – v0.30.7

    🚀 MLX-LM v0.30.7 is live – and it’s packed with model love, speed boosts, and polish!

    🔥 New Models Added:

    • ✅ GLM-5 – a powerful new contender in the LLM space 🧠
    • ✅ Qwen3.5 (text-only) – ideal for high-performance, non-vision tasks
    • ✅ DeepSeek V3.2 improvements – faster indexer & smoother weight loading 🛠️
    • ✅ Kimi Linear bugs squashed – now stable & reliable

    🛠️ Tooling Upgrades:

    • 🐍 Pythonic tool calling for LFM2 models (huge thanks to @viktike!)
    • 🧰 New Mistral tool parser – cleaner, more intuitive function/tool integration

    ⚡ Performance & Training:

    • 📈 Faster DSV3.2 generation – thanks to kernel- and op-level optimizations
    • 📏 LongCat MLA support – smarter attention for long-context generations
    • 🔍 Validation set now optional in training – faster prototyping!

    👏 Shoutout to our newest contributors: @viktike & @JJJYmmm – welcome to the crew!

    👉 Dive into the details: [v0.30.6 → v0.30.7 Changelog](link-to-changelog)

    Let’s push the limits on Apple silicon – together! 🛠️💻✨

    🔗 View Release