• Ollama – v0.17.0-rc2

    Ollama v0.17.0‑rc2 just dropped – here’s the quick rundown for your local LLM playground:

    What Ollama does

    A cross‑platform framework that lets you spin up open‑source models (Llama 3, Gemma, Mistral, etc.) locally via a simple CLI and REST API. Perfect for tinkering without cloud lock‑in.
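
    Since everything runs against a local REST endpoint (port 11434 by default), a sanity check is just an HTTP call. Here's a minimal Python sketch; the model name is an example, use whatever you've already pulled with `ollama pull`:

    ```python
    # Minimal sketch: call Ollama's local /api/generate endpoint.
    # Assumes the server is running and "llama3" has been pulled already.
    import json
    import urllib.request

    payload = json.dumps({
        "model": "llama3",
        "prompt": "Why is the sky blue?",
        "stream": False,  # one JSON object instead of a token stream
    }).encode()

    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read())["response"])
    ```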

    What’s new in v0.17.0‑rc2

    • Web‑search plugin auto‑install
      → The `cmd/config` step now automatically installs the web‑search extension into your user extensions folder, giving you instant internet‑augmented generation — no manual copy‑paste needed.
    • Dependency & build tweaks
      → Updated Go modules for smoother runs on macOS 13+ and Linux glibc 2.35+.
      → Compile‑time optimizations shave ~5% off cold‑start latency.
    • Improved error handling
      → Clearer messages when a model fails to load (e.g., missing GGUF metadata).
      → Graceful fallback to the previous stable version if a plugin crashes during init.
    • Platform polish
      → Fixed an ARM64 Windows crash on first run.
      → Added missing `README.md` links for the new web‑search plugin in release assets.
    • Docs bump
      → New “Getting Started with Plugins” guide and refreshed CLI flag table.

    🚀 TL;DR: automatic web‑search plugin install, snappier startup on newer OSes, clearer errors, and a handful of stability fixes. Happy experimenting!

    🔗 View Release

  • Tater – Tater v58

    🚨 Tater v58 — “Tater the Profiler” is LIVE! 🚨

    The AI assistant that chats and remembers has leveled up — and this update is a big one for context, consistency, and personality. Let’s break it down:

    🧠 Introducing the Memory Platform

    Tater now remembers in a smart, intentional way — no more “wait, who are you again?” moments.

    🔹 User Memory: Stores relevant personal info — demographics, habits, goals, even mental health notes (only if intentionally shared).

    🔹 Room Memory: Tracks group context — project goals, inside jokes, timezones, recurring tasks, roles, patterns… like a group brain.

    🔒 Privacy-first design: Personal vs. room info stays strictly separated — no accidental context leaks.
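
    To picture the separation, here's a hedged sketch (names and storage are illustrative, not Tater's actual internals) of keeping user and room memory in separate namespaces so room-facing lookups can never see personal facts:

    ```python
    # Illustrative sketch only -- not Tater's actual code. The point:
    # user memory and room memory live in separate namespaces, and
    # room-facing queries can only ever touch the room namespace.
    from dataclasses import dataclass, field

    @dataclass
    class MemoryStore:
        user_memory: dict[str, list[str]] = field(default_factory=dict)
        room_memory: dict[str, list[str]] = field(default_factory=dict)

        def remember_user(self, user_id: str, fact: str) -> None:
            self.user_memory.setdefault(user_id, []).append(fact)

        def remember_room(self, room_id: str, fact: str) -> None:
            self.room_memory.setdefault(room_id, []).append(fact)

        def room_context(self, room_id: str) -> list[str]:
            # Only room-scoped facts come back; personal memory never leaks in.
            return list(self.room_memory.get(room_id, []))
    ```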

    Bonus Upgrades

    ✅ Smarter tool-use (fewer “oops” moments)

    ✅ Cleaner scheduling & automation

    ✅ Improved prompt stability & UX polish

    ⚠️ Note for power users: Heavy usage (e.g., big Discord servers) may need beefier hardware or hosted models — memory reasoning scales with volume.

    🎯 Bottom line: Tater’s now in sync — thoughtful, consistent, and ready to be your most attentive AI teammate.

    🎶 “It’s not profiling… it’s pattern recognition.” 😎

    👉 Check out the README to upgrade!

    🔗 View Release

  • Ollama – v0.17.0-rc0

    🚨 Ollama v0.17.0-rc0 is here — and it’s bringing some exciting under-the-hood upgrades! 🚨

    This release candidate (RC0, commit `3445223`) is still a preview, but here’s what we know so far:

    🔹 OpenCLAW onboarding — Ollama is leveling up its hardware acceleration game! This likely means better support for OpenCL-based devices (think AMD GPUs, Intel iGPUs, etc.), opening the door to faster inference on non-NVIDIA hardware. 🖥️⚡

    💡 Why it matters: If you’ve been waiting for smoother performance on Apple Silicon (beyond Metal) or AMD/Intel GPUs, this could be a big step toward broader GPU compatibility — especially for users outside the NVIDIA ecosystem.

    ⚠️ Note: Full release notes are still pending (GitHub’s having some hiccups right now 🌐), so keep an eye on:

    Ollama Releases

    CHANGELOG.md

    Let’s test, tweak, and tinker — and help shape the final `v0.17.0`! 🛠️✨

    Who’s trying it out first? 👇

    🔗 View Release

  • Ollama – v0.16.3: install: prevent partial download script execution (#14311)

    🚨 Ollama v0.16.3 is live — and it’s all about security & stability! 🔒

    This isn’t a flashy feature drop, but it’s super important for keeping your local LLM setup safe and reliable.

    🔹 What’s new?

    • The install script is now wrapped in a `main` function 🧱
    • If the download gets interrupted (e.g., network hiccup), only the complete, valid script runs
    • Partial or corrupted downloads won’t accidentally execute — goodbye, weird bugs & security risks!
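
    The underlying trick is a classic for `curl | sh` installers: put every step inside a function and call it only on the very last line, so a truncated download has nothing to execute. Ollama's real fix lives in its shell install script; here's the same pattern sketched in Python for illustration:

    ```python
    # Illustration of the "wrap the installer in main" pattern (Ollama's
    # actual installer is a shell script, but the idea is identical).
    import sys

    def main() -> None:
        # ...download binaries, verify checksums, set up services...
        print("install steps run here")

    # If the download is cut off anywhere above, this call never arrives,
    # so no partial install logic ever runs.
    if __name__ == "__main__":
        sys.exit(main())
    ```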

    ✅ Why you’ll love it:

    ✔️ Safer installs, especially on spotty connections

    ✔️ No more half-downloaded scripts causing chaos

    ✔️ Peace of mind — no hidden gotchas

    📦 No new models or API changes this time — just a rock-solid upgrade under the hood.

    👉 Upgrade now via `curl`, `brew`, or your preferred method and keep your local LLMs running clean & secure! 🛡️💻

    🔗 View Release

  • Home Assistant Voice Pe – 26.2.2

    🚨 Home Assistant Voice PE v26.2.2 is live! 🚨

    Big reliability wins in this one — perfect for those of us who love voice control without the lag or glitches.

    🔥 What’s new:

    Media playback just got smoother — fewer dropouts, better sync for audio/video responses.

    🎙️ TTS timeouts fixed! — no more cut-off voice replies; your commands now get full spoken responses.

    💡 Bonus: This release is part of a massive 78-release journey — and now backed by the Open Home Foundation 🌐✨

    All while staying fully offline-capable with ESPHome.

    Perfect time to upgrade if you’re running HA Voice PE — your smart home deserves that buttery-smooth voice UX! 🛠️🔊

    🔗 View Release

  • Ollama – v0.16.3-rc2: install: prevent partial download script execution (#14311)

    🚨 Ollama v0.16.3-rc2 is here — and it’s all about security & reliability!

    This isn’t a flashy feature drop… but exactly what we need: 🔒 hardened install safety.

    ✅ What’s New:

    • 🛡️ Installer now wrapped in a `main` function (PR #14311)

    → If your download gets interrupted (thanks, spotty Wi-Fi 😅), nothing runs until the full script is safely downloaded.

    → No more half-baked scripts executing mid-download — goodbye, silent failures & security risks!

    🤔 Why You’ll Care:

    • More trustworthy installs, especially on unstable networks or in CI/CD pipelines.
    • Fewer “why did that just hang/crash?” moments 🙌
    • A small change with big implications for dev peace of mind.

    🔗 Check out the RC on GitHub

    📦 Give it a spin — feedback welcome before final release!

    Happy local LLM-ing, folks 🚀

    🔗 View Release

  • Lemonade – v9.3.4

    🚨 Lemonade v9.3.4 just dropped! 🍋

    This update brings a subtle but mighty improvement for hardware-savvy folks:

    XDNA2 NPU detection is now PCI-based — no more relying on flaky CPU name regex!

    🔧 Switched to PCI device ID matching (PR #1154), making detection way more reliable — especially on custom, embedded, or non-standard systems.
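
    On Linux, PCI-based detection can be as simple as scanning sysfs for vendor/device IDs. A hedged sketch (not Lemonade's actual code from PR #1154; the device ID below is a made-up placeholder, though 0x1022 really is AMD's PCI vendor ID):

    ```python
    # Sketch of PCI-ID-based NPU detection on Linux. Not Lemonade's
    # implementation; XDNA2_DEVICE_IDS is a hypothetical placeholder.
    from pathlib import Path

    AMD_VENDOR_ID = "0x1022"       # AMD's real PCI vendor ID
    XDNA2_DEVICE_IDS = {"0x17f0"}  # hypothetical example device ID

    def has_xdna2_npu() -> bool:
        for dev in Path("/sys/bus/pci/devices").iterdir():
            vendor = (dev / "vendor").read_text().strip()
            device = (dev / "device").read_text().strip()
            if vendor == AMD_VENDOR_ID and device in XDNA2_DEVICE_IDS:
                return True
        return False
    ```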

    💡 Why it matters:

    • Fewer false negatives/positives when detecting XDNA2 hardware
    • Better support for future AMD NPUs built on the XDNA architecture (XDNA2 and beyond)
    • Cleaner, more maintainable code under the hood

    Perfect for devs testing on niche hardware or prepping for next-gen NPU-powered LLM inference. 🚀

    Check the repo — and let us know if you spot any quirks! 🧪

    🔗 View Release

  • ComfyUI – v0.14.2: fix: use glob matching for Gemini image MIME types (#12511)

    🚨 ComfyUI v0.14.2 is out — and it’s fixing a sneaky Gemini image bug! 🚨

    This patch resolves a critical issue where Gemini API responses in `image/jpeg` format were silently discarded, resulting in black (all-zero) images instead of the expected output. 😬

    ✅ What’s Fixed & Improved:

    • 🌐 Glob-style MIME matching added via a `_mime_matches()` helper (using `fnmatch`; see the sketch below)
    • 🔄 `get_image_from_response()` now accepts any image format (`"image/*"`), not just `image/png`
    • 📦 Supports both `image/png` and `image/jpeg`, plus future image types — no more silent failures!
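
    Based on the PR description, the helper plausibly looks something like this (a sketch, not the exact ComfyUI code):

    ```python
    # Sketch of glob-style MIME matching with fnmatch; the real
    # _mime_matches() in ComfyUI may differ in detail.
    from fnmatch import fnmatch

    def _mime_matches(mime_type: str, pattern: str) -> bool:
        """True if e.g. 'image/jpeg' matches the glob 'image/*'."""
        return fnmatch(mime_type.lower(), pattern.lower())

    assert _mime_matches("image/jpeg", "image/*")   # now accepted
    assert _mime_matches("image/png", "image/*")
    assert not _mime_matches("text/plain", "image/*")
    ```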

    💡 Why you’ll care: If your workflow leans on Gemini for image generation or processing (e.g., multimodal prompts), this update ensures reliable JPEG output — no more black squares!

    🔗 View PR #12511

    — Tagged by @huntcsg, 18 Feb 05:07

    🔗 View Release

  • Lemonade – v9.3.3

    🚨 Lemonade v9.3.3 is live! 🍋

    This patch drops just one critical fix — but it’s a big one for server users:

    🔧 Fixed `lemonade-server` status bug

    No more misleading or broken status reports — the server’s health and readiness checks should now behave as expected. 🛠️

    Tagged on Feb 18 at 02:41 UTC by `jeremyfower`, this is a lean, targeted update — perfect for those who like their LLMs fast and reliable.

    If you’re self-hosting with `lemonade-server`, definitely pull this in! 🚀

    Want the nitty-gritty on what the bug was? The commit history on GitHub has the details. 😎

    🔗 View Release

  • Ollama – v0.16.3-rc1

    🚀 Ollama v0.16.3-rc1 is here!

    A small but slick release candidate just dropped — and it’s all about polishing the dev experience, especially for editor integrations. Here’s what’s new:

    🔹 TUI defaults to single-select mode

    → When using Ollama with editors like VS Code or Neovim (via `ollama run`), the terminal UI now automatically switches to single-select instead of multi-select.

    → Why? Fewer accidental model swaps, cleaner workflows — especially handy when scripting or debugging.

    🔍 No flashy new models or API changes this time, but this tweak makes daily use smoother and more predictable. Since it’s an `rc1`, expect final polish before the stable `v0.16.3` lands.

    📦 Binaries for macOS & Linux are still being uploaded (check the releases page soon!).

    🛠️ Pro tip: If you live in your terminal or use LLMs inline in your editor — this one’s for you.

    Curious to test it? Grab the RC and let us know how it feels! 🧪✨

    🔗 View Release