• Ollama – v0.16.3: install: prevent partial download script execution (#14311)

🚨 Ollama v0.16.3 is live – and it's all about security & stability! 🔒

This isn't a flashy feature drop, but it's super important for keeping your local LLM setup safe and reliable.

🔹 What's new?

• The install script is now wrapped in a `main` function 🧱 (see the sketch below)
• If the download gets interrupted (e.g., a network hiccup), only the complete, valid script runs
• Partial or corrupted downloads won't accidentally execute – goodbye, weird bugs & security risks!
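For the curious, this is the classic shell-installer guard: the whole script body lives inside a function that is only called on the very last line, so nothing executes until the entire file has arrived. A minimal sketch of the pattern (not the actual install.sh, which does much more):

```bash
#!/bin/sh
# Sketch of the guard pattern -- not Ollama's real install script.
# Because main is only *called* on the final line, a download that
# cuts off mid-file defines part of a function but never runs it.
main() {
    echo "detecting platform..."
    echo "downloading binary..."
    echo "installing..."
}

main "$@"
```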

✅ Why you'll love it:

✔️ Safer installs, especially on spotty connections

✔️ No more half-downloaded scripts causing chaos

✔️ Peace of mind – no hidden gotchas

📦 No new models or API changes this time – just a rock-solid upgrade under the hood.

👉 Upgrade now via `curl`, `brew`, or your preferred method and keep your local LLMs running clean & secure! 🛡️💻
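For reference, the upstream-documented one-liners:

```bash
# Linux: the official install script (now wrapped in main)
curl -fsSL https://ollama.com/install.sh | sh

# macOS: via Homebrew
brew install ollama
```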

🔗 View Release

• Home Assistant Voice PE – 26.2.2

    🚨 Home Assistant Voice PE v26.2.2 is live! 🚨

Big reliability wins in this one – perfect for those of us who love voice control without the lag or glitches.

🔥 What's new:

✅ Media playback just got smoother – fewer dropouts, better sync for audio/video responses.

🎙️ TTS timeouts fixed – no more cut-off voice replies; your commands now get full spoken responses.

💡 Bonus: this release is part of a massive 78-release journey – and now backed by the Open Home Foundation 🌐✨

    All while staying fully offline-capable with ESPHome.

Perfect time to upgrade if you're running HA Voice PE – your smart home deserves that buttery-smooth voice UX! 🛠️🔊

🔗 View Release

  • Ollama – v0.16.3-rc2: install: prevent partial download script execution (#14311)

🚨 Ollama v0.16.3-rc2 is here – and it's all about security & reliability!

This isn't a flashy feature drop… but it's exactly what we need: 🔒 hardened install safety.

✅ What's New:

• 🛡️ Installer now wrapped in a `main` function (PR #14311)

→ If your download gets interrupted (thanks, spotty Wi-Fi 😅), nothing runs until the full script has safely downloaded.

→ No more half-baked scripts executing mid-download – goodbye, silent failures & security risks!
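Want an extra belt-and-braces layer on flaky networks? A common complementary pattern is to download to a file first and execute only on success, a minimal sketch:

```bash
# Download fully, then run: with -f, curl exits non-zero on HTTP
# errors or a dropped transfer, so a partial file never executes.
tmp="$(mktemp)"
curl -fsSL -o "$tmp" https://ollama.com/install.sh && sh "$tmp"
rm -f "$tmp"
```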

🤔 Why You'll Care:

• More trustworthy installs, especially on unstable networks or in CI/CD pipelines.
• Fewer "why did that just hang/crash?" moments 🙌
• A small change with big implications for dev peace of mind.

🔗 Check out the RC on GitHub

📦 Give it a spin – feedback welcome before the final release!

Happy local LLM-ing, folks 🚀

🔗 View Release

  • Lemonade – v9.3.4

🚨 Lemonade v9.3.4 just dropped! 🍋

This update brings a subtle but mighty improvement for hardware-savvy folks:

✅ XDNA2 NPU detection is now PCI-based – no more relying on a flaky CPU-name regex!

🔧 Switched to PCI device-ID matching (PR #1154), making detection far more reliable – especially on custom, embedded, or non-standard systems. The idea is sketched below.
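On Linux, PCI-based detection boils down to reading vendor and device IDs out of sysfs instead of parsing CPU model strings. A minimal illustration of the idea (0x1022 is AMD's PCI vendor ID; the device ID below is a made-up placeholder, so see PR #1154 for the real matching logic):

```bash
# Sketch only: find a PCI device by vendor/device ID via sysfs.
XDNA2_VENDOR_ID="0x1022"   # AMD's PCI vendor ID
XDNA2_DEVICE_ID="0x17f0"   # placeholder -- not a confirmed XDNA2 ID

for dev in /sys/bus/pci/devices/*; do
    if [ "$(cat "$dev/vendor")" = "$XDNA2_VENDOR_ID" ] &&
       [ "$(cat "$dev/device")" = "$XDNA2_DEVICE_ID" ]; then
        echo "XDNA2 NPU found at ${dev##*/}"
    fi
done
```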

💡 Why it matters:

• Fewer false negatives/positives when detecting XDNA2 hardware
• Better support for future AMD XDNA-based NPUs beyond today's parts
• Cleaner, more maintainable code under the hood

Perfect for devs testing on niche hardware or prepping for next-gen NPU-powered LLM inference. 🚀

Check the repo – and let us know if you spot any quirks! 🧪

🔗 View Release

  • ComfyUI – v0.14.2: fix: use glob matching for Gemini image MIME types (#12511)

🚨 ComfyUI v0.14.2 is out – and it's fixing a sneaky Gemini image bug! 🚨

This patch resolves a critical issue where Gemini API responses in `image/jpeg` format were silently discarded, resulting in black (all-zero) images instead of the expected output. 😬

✅ What's Fixed & Improved:

• 🌐 Glob-style MIME matching added via a `_mime_matches()` helper (using `fnmatch`) – the idea is sketched below
• 🔄 `get_image_from_response()` now accepts any image format (`"image/*"`), not just `image/png`
• 📦 Supports both `image/png` and `image/jpeg`, plus future image types – no more silent failures!
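The helper itself is Python's `fnmatch`; the same glob logic is easy to see in shell terms (illustrative only, not ComfyUI's code):

```bash
# Illustrative only -- ComfyUI's _mime_matches() uses Python fnmatch.
# In a case statement, an unquoted variable is treated as a glob.
mime_matches() {
    case "$1" in
        $2) return 0 ;;
        *)  return 1 ;;
    esac
}

mime_matches "image/jpeg" "image/*" && echo "jpeg accepted"
mime_matches "image/png"  "image/*" && echo "png accepted"
mime_matches "text/html"  "image/*" || echo "html rejected"
```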

💡 Why you'll care: if your workflow leans on Gemini for image generation or processing (e.g., multimodal prompts), this update ensures JPEG responses come through reliably – no more black squares!

🔗 View PR #12511

– Tagged by @huntcsg, 18 Feb 05:07

🔗 View Release

  • Lemonade – v9.3.3

🚨 Lemonade v9.3.3 is live! 🍋

This patch drops just one critical fix – but it's a big one for server users:

🔧 Fixed `lemonade-server` status bug

No more misleading or broken status reports – the server's health and readiness checks should now behave as expected. 🛠️
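A quick way to sanity-check after upgrading (invocation assumed from the fix's description; consult the Lemonade docs for the exact CLI):

```bash
# Assumed usage of the subcommand this release fixes.
lemonade-server status
```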

Tagged on Feb 18 at 02:41 UTC by `jeremyfower`, this is a lean, targeted update – perfect for those who like their LLMs fast and reliable.

If you're self-hosting with `lemonade-server`, definitely pull this in! 🚀

Want the nitty-gritty on what the bug was? The commit history has the details. 😎

🔗 View Release

  • Ollama – v0.16.3-rc1

🚀 Ollama v0.16.3-rc1 is here!

A small but slick release candidate just dropped – and it's all about polishing the dev experience, especially for editor integrations. Here's what's new:

🔹 TUI defaults to single-select mode

→ When using Ollama with editors like VS Code or Neovim (via `ollama run`), the terminal UI now automatically switches to single-select instead of multi-select.

→ Why? Fewer accidental model swaps and cleaner workflows – especially handy when scripting or debugging.

🔍 No flashy new models or API changes this time, but this tweak makes daily use smoother and more predictable. Since it's an `rc1`, expect final polish before the stable `v0.16.3` lands.

📦 Binaries for macOS & Linux are on their way (check the releases page soon!).

🛠️ Pro tip: if you live in your terminal or use LLMs inline in your editor, this one's for you.

Curious to test it? Grab the RC and let us know how it feels! 🧪✨

🔗 View Release

  • Tater – Tater v57

🚨 Tater v57 – Cerberus Complete is LIVE! 🚨

The AI assistant that talks to any OpenAI-compatible LLM just got a massive brain upgrade – and it's ready to work smarter, not harder. 🧠⚡

🔥 Meet Cerberus – the 3-Headed Brain

Tater now runs on a true multi-agent architecture (sketched below):

🧠 Planner → figures out what needs doing

🛠️ Doer → executes the plan with precision

🔍 Checker → reviews, refines, and validates before sending anything off

✅ Why You'll Love This:

• 🎯 No more chaotic tool spam – Cerberus picks tools intentionally, not randomly
• 🛡️ Self-recovery when things go sideways (no more stuck loops!)
• 🧹 Cleaner, leaner prompts – perfect for local models & low-resource setups
• 📦 Plugins (vision, weather, HA control, etc.) now integrate consistently across all platforms
• ⏱️ Scheduled & long-running tasks? Way more reliable – no accidental re-scheduling

🗑️ Cleanup & Stability:

    • Agent Lab authoring removed (to keep the core tight & fast), but `/agent_lab` still lives as a working dir for logs, downloads, and docs.

This isn't just an update – it's the stable foundation for everything coming next: learning, refinement, long-horizon reasoning… the future is Cerberus-powered. 🐉

🔗 Grab it now: Tater v57 on GitHub

Let us know how your Cerberus brain behaves! 🤖✨

🔗 View Release

  • Ollama – v0.16.3-rc0

    🚨 Ollama v0.16.3-rc0 is here! 🚨

Big news for Apple Silicon users: Qwen3 model support has landed in `mlxrunner`! 🍏⚡

✅ Qwen3 models now run natively on M1/M2/M3 Macs via Ollama's MLX backend – no CUDA, no hassle.

🧠 Alibaba's latest Qwen3 brings stronger multilingual skills and sharper reasoning, making it a serious contender for local LLM workloads.

That's the headline – this RC is light on changes but heavy on potential 🎯

Stable drop's coming soon… in the meantime, go test those Qwen3 models! 🧪✨
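Once you're on the RC, trying one is a single command (model tag assumed here; check the Ollama model library for the exact name):

```bash
# Pull and chat with a Qwen3 model -- tag assumed, see ollama.com/library
ollama run qwen3
```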

🔗 View Release

  • ComfyUI – v0.14.1

    🚨 ComfyUI v0.14.1 is out! 🚨

The latest patch is here – and while the GitHub release page is currently having trouble loading (we're hoping it gets fixed soon 🤞), here's what we expect based on typical patch releases like this:

🔹 Bug fixes – especially addressing pesky regressions from `v0.14.0`

🔹 UI/UX polish – think smoother node dragging, cleaner error popups, maybe a layout tweak or two

🔹 Performance tweaks – smarter caching, lighter memory footprints, faster node execution

🔹 Dependency updates – safer, more compatible versions of key Python packages under the hood

🔹 Accessibility & locale improvements – better support for international users and screen readers

💡 Bonus: if you're curious about the exact changes, run:

```bash
git log v0.14.0..v0.14.1 --oneline
```

…or keep an eye on the Releases page – fingers crossed it loads soon!

    Happy prompting, everyone! 🧠✨

🔗 View Release