  • Home Assistant Voice PE – 25.12.3

    πŸš€ Home Assistant Voice PE just dropped v25.12.3 β€” and it’s a quiet hero for ESPHome tinkerers!

    Fixed the nasty `web_server` conflict where custom firmware builds would crash when the web UI and voice logic fought for resources. No more 502s. No more unresponsive dashboards.
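    For reference, the component at the center of the fix takes only a couple of lines of ESPHome YAML to enable. A minimal sketch (the node name is hypothetical):

    ```yaml
    esphome:
      name: voice-pe-dev  # hypothetical node name

    # The built-in web UI that previously contended with the voice pipeline.
    web_server:
      port: 80
    ```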

    If you’re running DIY voice control on ESP32/ESP8266 with Home Assistant β€” this patch is your new best friend.

    Sponsored by the Open Home Foundation 🌱 (78 releases and counting!)

    Upgrade now β†’ 25.12.1…25.12.3

    πŸ”— View Release

  • ComfyUI – v0.6.0

    ComfyUI v0.6.0 just landed β€” and it’s like giving your AI workflow a turbo boost πŸš€

    • Native WebSockets β†’ Real-time node updates & buttery-smooth previews (yes, even on mobile!)
    • Custom Node Manager β†’ Install & update nodes right in the UI. No more terminal chaos 🧩
    • Memory Cleanup Overhaul → Big models? Longer sessions? Fewer crashes. Goodbye, OOM errors.
    • Queue Inspector β†’ See what’s running, pause or cancel jobs mid-process β€” total control.
    • Dark Mode 2.0 β†’ Sleeker, higher contrast, optional accent colors for your aesthetic πŸ–€
    • Python 3.10+ only β†’ Faster, leaner, modern under the hood
    • Windows users: One-click installer is LIVE πŸŽ‰

    Still 100% free. Still open-source. Still built by you.

    Back up your workflows before upgrading β€” custom nodes are about to get a whole lot easier.

    Grab it β†’ https://www.comfy.org/

    πŸ”— View Release

  • Home Assistant Voice PE – 25.12.2

    Big win for offline voice control! 🎀🏠

    Home Assistant Voice PE just dropped v25.12.2 β€” and it’s got some serious upgrades.

    🎧 Sendspin multi-room audio is now in public preview β€” sync your speakers across rooms without touching the cloud. Perfect if you’re already using Music Assistant.

    πŸŽ‰ Huge shoutout to @theHacker for their first-ever contribution β€” welcome to the crew!

    Sponsored by the Open Home Foundation, with 78 releases under its belt β€” this thing is built to last.

    More polish. Smoother sync. Zero internet needed.

    Full details: [25.11.0…25.12.2]

    πŸ”— View Release

  • Home Assistant Voice PE – 25.12.1

    Home Assistant Voice PE just dropped v25.12.1 πŸš€

    Big win: Now sponsored by the Open Home Foundationβ€”more power to offline voice control! πŸ πŸŽ™οΈ

    78 releases and counting, and this one’s smooth as butter.

    New in 25.12.1:

    • Full Sendspin multi-room audio protocol support (public preview)! Sync music across speakers like a proβ€”no cloud needed.
    • Huge thanks to @theHacker for their first-ever contribution! πŸ™Œ

    Perfect for tinkerers who want voice control without the internet. Upgrade, vibe out, and command your home like a wizard.

    Changelog: [25.11.0…25.12.1]

    πŸ”— View Release

  • text-generation-webui – v3.22

    πŸš€ text-generation-webui v3.22 is live!

    llama.cpp just got a MASSIVE upgrade β€” now with portable, drop-and-run builds for EVERYONE:

    • 🖥️ Windows/Linux:
      • NVIDIA? `cuda12.4` = blistering speed
      • AMD/Intel GPU? Grab the `vulkan` build — no CUDA required!
      • Just CPU? Yep, the CPU builds work flawlessly.
    • 🍎 Mac (Apple Silicon):
      • `macos-arm64` = native performance, zero installs.

    ✨ Upgrade tip: Just unzip the new version β†’ copy your old `user_data` folder over.

    Your models, themes, settings β€” all preserved. No reconfiguring. No headaches.
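    The two-step tip above, as a shell sketch. The directory names are hypothetical, and the old install is simulated here with a placeholder settings file:

    ```shell
    # Simulate an existing portable install with some user data (hypothetical paths).
    mkdir -p textgen-portable-3.21/user_data textgen-portable-3.22
    echo "chat_style: cai-chat" > textgen-portable-3.21/user_data/settings.yaml

    # The actual upgrade: unzip the new build, then copy user_data across wholesale.
    cp -r textgen-portable-3.21/user_data textgen-portable-3.22/

    cat textgen-portable-3.22/user_data/settings.yaml
    ```

    Everything the old version knew about you rides along in that one folder.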

    Perfect for offline labs, travel, or when you just wanna run LLMs without installing 17 dependencies.

    Go grab it β€” your local AI sandbox just got way more portable. 🎯

    πŸ”— View Release

  • MLX-LM – v0.30.0

    MLX LM v0.30.0 is live πŸš€ β€” Apple Silicon LLMs just got a serious power-up!

    • Server performance fixed: No more busy-waiting β€” idle polling is now lean, quiet, and efficient. πŸ› οΈ
    • Transformers v5 fully supported: All the latest tokenizer tweaks, model updates, and Hugging Face magic? Covered. πŸ€–
    • MIMO v2 Flash enabled: Multi-input models now fly with optimized attention β€” faster inference, less latency. ⚑
    • Better error messages: Batching failed? Now you’ll know why β€” no more cryptic crashes. πŸ“’
    • Model parallel generation: Split massive models across GPUs like a pro. Scale your LLMs without rewriting code. 🧩
    • Chat template fixes: `apply_chat_template` finally wraps correctly β€” no more dict chaos in your prompts. ✨

    Thousands of Hugging Face models, quantized, fine-tuned, and served β€” all on your M-series chip. Time to upgrade and push your AI stack further. πŸš€

    πŸ”— View Release

  • Ollama – v0.13.5

    πŸš€ Ollama v0.13.5 just dropped β€” and it’s a quiet game-changer for Gemma users!

    Now you can use function calling with Gemma 2B locally β€” yes, really. Trigger webhooks, query databases, fetch weather, or call APIs directly from your tiny-but-mighty local Gemma model. No cloud needed.

    πŸ’‘ Why it’s cool:

    • Function calling was already in Llama 3 & Mistral β€” now Gemma joins the party.
    • Perfect for building private, lightweight AI agents that do stuff, not just chat.

    Under the hood: parser fixes + smoother rendering = fewer hiccups, more flow.
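    A sketch of the pattern with the Ollama Python client (`pip install ollama`). Here `get_weather` and its schema are hypothetical stand-ins for your own tools; the `tools=` argument and `tool_calls` field follow the client's tool-calling interface (client versions vary, so check yours):

    ```python
    import json

    def get_weather(city: str) -> str:
        """Hypothetical local tool the model can ask us to run."""
        return json.dumps({"city": city, "temp_c": 21, "conditions": "clear"})

    # JSON-schema description of the tool, passed to the model via `tools=`.
    TOOLS = [{
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }]

    AVAILABLE = {"get_weather": get_weather}

    def run_tool(name: str, args: dict) -> str:
        """Execute one tool call requested by the model."""
        return AVAILABLE[name](**args)

    if __name__ == "__main__":
        import ollama  # needs a running Ollama server with the model pulled
        resp = ollama.chat(
            model="gemma:2b",
            messages=[{"role": "user", "content": "What's the weather in Oslo?"}],
            tools=TOOLS,
        )
        for call in resp.message.tool_calls or []:
            print(run_tool(call.function.name, call.function.arguments))
    ```

    If the model decides no tool is needed, `tool_calls` is empty and the reply is plain text in `resp.message.content`.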

    Pull the model in one line:

    ```bash
    ollama pull gemma:2b
    ```

    Go build something that acts β€” not just responds. 🎯

    πŸ”— View Release

  • Ollama – v0.13.5-rc1

    πŸš€ Ollama v0.13.5-rc1 just dropped β€” and it’s a game-changer for Gemma users!

    Now you can use function calling with Gemma 2B 🎯

    Gemma can now dynamically invoke external tools and APIs β€” think real-time data lookup, code execution, or API calls β€” all triggered by natural language. No more clunky workarounds. Just define your tools, and let Gemma call them like a pro agent.

    Plus:

    • Smoother JSON parsing for tool definitions (no more malformed calls!)
    • Minor performance boosts and bug fixes
    • RC1 = stable enough for early adopters to test in real workflows

    Grab it: `ollama pull gemma:2b` and start building smarter, reactive agents today.

    Full release notes coming soon β€” but this feature? Totally worth the upgrade. πŸ› οΈ

    πŸ”— View Release

  • ComfyUI – v0.5.1

    ComfyUI v0.5.1 just dropped β€” and it’s a quiet hero πŸ› οΈ

    Fixed critical crashes in custom nodes & image loading (no more mid-generate soul-crushes).

    Memory management got a serious upgrade β€” smoother runs on weaker GPUs, even with massive workflows.

    Error messages now actually tell you what went wrong… no more “something broke” mysteries.

    UI feels slicker: cleaner node links, buttery drag-and-drop.

    And hey β€” partial WebGPU support for Apple Silicon users is now live 🍎⚑ (Big things coming in v0.6!)

    If you’ve been battling crashes or lag, this is your sign to update. Keep building! πŸš€

    πŸ”— View Release

  • Ollama – v0.13.5-rc0: GGML update to ec98e2002 (#13451)

    Ollama v0.13.5-rc0 just dropped β€” and it’s all about speed under the hood! πŸš€

    The GGML inference engine got a major upgrade to commit `ec98e2002`, with smarter, leaner internals:

    • βœ… MaskBatchPadding removed β€” Less padding = less overhead. KQ masking is now cleaner and faster.
    • 🚫 NVIDIA Nemotron 3 Nano support paused β€” Temporarily pulled for stability. Coming back stronger soon!
    • πŸ”§ Solar Pro tweaks β€” Under-the-hood adjustments, still being verified. If you’re using Solar, test your models!

    No flashy UI β€” just a lighter, faster engine for local LLM inference. Think of it like swapping your car’s engine for a turbocharged version that runs cooler.

    Pro tip: Custom models? Run sanity checks β€” GGML changes can ripple through quantization and attention layers.

    Stay sharp, tinkerers. The local LLM revolution keeps accelerating. πŸ› οΈ

    πŸ”— View Release