• Lemonade – v9.3.2

    🚀 Lemonade v9.3.2 is live!

    This one’s a quick but important patch—especially if you’re rocking AMD GPUs on Linux.

    🔧 What’s new/fixed:

    • ✅ Fixed incorrect path for Stable Diffusion ROCm artifacts on Linux

    → Fixes runtime hiccups and ensures proper loading of AMD GPU binaries

    → PR: #1085 | Commit: `5a382c5` (GPG verified!)

    🎯 Why it matters:

    • ROCm users on Linux can now run SD models reliably—no more path-related crashes or config headaches.
    • No breaking changes, no flashy new features… just solid, quiet reliability 🛠️

    If you’re using Lemonade with AMD/NPU/GPU acceleration on Linux—update now! 🐧✨

    Full details: lemonade-sdk/lemonade

    🔗 View Release

  • MLX-LM – v0.30.7

    🚀 MLX-LM v0.30.7 is live — and it’s packed with model love, speed boosts, and polish!

    🔥 New Models & Model Improvements:

    • GLM-5 — a powerful new contender in the LLM space 🧠
    • Qwen3.5 (text-only) — ideal for high-performance, non-vision tasks
    • DeepSeek V3.2 improvements — faster indexer & smoother weight loading 🛠️
    • Kimi Linear bugs squashed — now stable & reliable

    🛠️ Tooling Upgrades:

    • 🐍 Pythonic tool calling for LFM2 models, sketched below (huge thanks to @viktike!)
    • 🧰 New Mistral tool parser — cleaner, more intuitive function/tool integration
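
    Here’s a rough sketch of exercising tool calling from the Python API (the new parsers mostly shine when serving via `mlx_lm.server`, but the prompt side looks the same). The model ID and the `get_weather` tool schema below are purely illustrative; swap in any tool-capable model you like:

    ```python
    from mlx_lm import load, generate

    # Illustrative model ID; any tool-capable chat model supported by mlx_lm works
    model, tokenizer = load("LiquidAI/LFM2-1.2B")

    # Hypothetical tool schema, passed through the model's chat template
    tools = [{
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Look up the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }]

    messages = [{"role": "user", "content": "What's the weather in Lisbon?"}]
    prompt = tokenizer.apply_chat_template(
        messages, tools=tools, add_generation_prompt=True
    )

    # The model should emit a tool call for the new parsers to pick up
    print(generate(model, tokenizer, prompt=prompt, max_tokens=256))
    ```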

    ⚡ Performance & Training:

    • 📈 Faster DSV3.2 generation — thanks to kernel & op-level optimizations
    • 📏 LongCat MLA support — smarter attention for long-context generations
    • 🔁 Validation set now optional in training — faster prototyping!

    👏 Shoutout to our newest contributors: @viktike & @JJJYmmm — welcome to the crew!

    👉 Dive into the details in the v0.30.6 → v0.30.7 changelog.

    Let’s push the limits on Apple silicon — together! 🛠️💻✨

    🔗 View Release

  • Home Assistant Voice PE – 26.2.0

    🚨 Home Assistant Voice PE v26.2.0 is live! 🚨

    Hey AI tinkerers & smart home wizards — big shoutout to the latest update of Home Assistant Voice PE, now powered by the awesome folks at the Open Home Foundation 🙌

    🔥 What’s new in v26.2.0?

    Media playback stability improved — fewer stutters, smoother audio/video responses during voice interactions

    🎙️ TTS timeout bug squashed — no more cut-off replies! Full text now plays reliably, every time

    💡 Bonus context: This release builds on 78+ prior releases — and now with offline-first voice control (no internet needed!), it’s perfect for privacy-focused automations.

    Ready to make your smart home talk back? 🛠️✨

    Check the changelog (25.12.4 → 26.2.0) and upgrade!

    🔗 View Release

  • Ollama – v0.16.0

    🚨 Ollama v0.16.0 is live! 🚨

    The latest drop from the Ollama crew just landed — and while the release notes are light on flashy new features, this one’s a quiet but meaningful polish pass. Here’s the lowdown:

    🔹 API Docs Fixed!

    The OpenAPI schema for `/api/ps` (list models currently loaded in memory) and `/api/tags` (list local models) has been corrected — meaning better Swagger compatibility, smoother SDK generation, and fewer headaches for integrators. 🛠️
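
    For anyone hitting these endpoints directly, they’re both plain GET calls; a minimal sketch, assuming Ollama is listening on its default port (11434):

    ```python
    import requests

    BASE = "http://localhost:11434"  # Ollama's default listen address

    # Models available locally (the /api/tags schema that was corrected)
    for m in requests.get(f"{BASE}/api/tags", timeout=10).json().get("models", []):
        print("local:", m["name"])

    # Models currently loaded in memory (the /api/ps schema that was corrected)
    for m in requests.get(f"{BASE}/api/ps", timeout=10).json().get("models", []):
        print("loaded:", m["name"])
    ```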

    🔹 Stability & Under-the-Hood Tweaks

    Expect refined model loading, improved streaming behavior, and likely minor bug fixes — especially around context handling and memory usage. No breaking changes, just smoother sailing.

    🔹 Still GGUF-Friendly

    All your favorite quantized models (Llama 3, DeepSeek-R1, Phi-4, etc.) keep rolling — no format changes here.

    💡 Pro Tip: If you’re building tools or dashboards against Ollama’s REST API, this update makes your life easier. Pull the `ollama/ollama:latest` Docker image or grab the latest binary/installer from GitHub to upgrade.

    👉 Full details (when they land): v0.16.0 Release

    Happy local LLM-ing! 🤖✨

    🔗 View Release

  • Ollama – v0.16.0-rc2

    🚨 Ollama v0.16.0-rc2 is out! 🚨

    This release candidate is a light but tidy patch focused on API docs & stability—perfect for keeping your integrations humming. Here’s the lowdown:

    🔹 Fixed OpenAPI schema for two key endpoints:

    • `/api/ps` — now correctly documents the response for models currently loaded in memory
    • `/api/tags` — updated to reflect accurate model tag listing behavior

    ✅ Why it matters: If you’re using SDKs, auto-generated clients, or UI tools that rely on the OpenAPI spec (like Swagger), this ensures they’ll work exactly as expected.

    📦 Binaries for macOS, Linux & Windows are already up (2 assets).

    📅 Released by `sam18` on Feb 12 @ 01:37 UTC

    🔗 Commit: `f8dc7c9`

    No flashy new models or breaking changes—just solid polish for the upcoming v0.16.0! 🛠️

    Keep an eye on the repo for the full rundown of what’s actually new in v0.16 once the stable release lands! 😄

    🔗 View Release

  • Ollama – v0.16.0-rc1

    🚀 Ollama v0.16.0-rc1 is here!

    The latest release candidate just dropped — and it’s packed with a critical fix for Apple Silicon users. Here’s what’s new:

    🔹 Bug Fix: Non-MLX model loading restored

    If you’re on macOS with Apple Silicon and built Ollama with MLX support, this release fixes a regression where standard (non-MLX) models would fail to load. 🛠️

    → Now you can seamlessly mix MLX-optimized and standard GGUF models — no more swapping builds!

    💡 Why it matters:

    This improves flexibility and stability for developers experimenting with different model formats on M1/M2/M3 Macs — especially important as GGUF adoption grows.
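
    A quick way to smoke-test the fix after upgrading is to chat with a standard GGUF model from the official `ollama` Python client (the model name below is just an example; use whatever you already have pulled):

    ```python
    import ollama  # pip install ollama

    # A regular (non-MLX) GGUF model, i.e. the path that regressed on MLX-enabled builds
    reply = ollama.chat(
        model="llama3.2",
        messages=[{"role": "user", "content": "Say hi in five words."}],
    )
    print(reply["message"]["content"])
    ```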

    📌 Note: This is a release candidate (v0.16.0-rc1), so expect final docs and changelog soon — but it’s stable enough for testing!

    👉 Grab it on GitHub: github.com/ollama/ollama/releases

    Let us know how it runs! 🧪💻

    🔗 View Release

  • Ollama – v0.16.0-rc0

    🚨 Ollama v0.16.0-rc0 is out — and it’s packing a sneaky but super important fix! 🚨

    🔥 What’s new?

    Bug fix for mixed-model loading: If you’ve ever built Ollama with Apple’s MLX framework enabled, you might’ve hit a wall trying to load non-MLX models (standard GGUF models running on the regular CPU/GPU backends). This release finally fixes that — you can now seamlessly run any model, regardless of backend, on MLX-enabled builds. 🧩💻

    🔍 Why it matters:

    • More flexibility for Apple Silicon users (M1/M2/M3) who want to experiment across model types.
    • Keeps Ollama’s cross-platform promise strong — no more “works with one backend, breaks with another” surprises.

    📦 Still missing?

    Full release notes aren’t live yet, but head over to the official RC:

    👉 v0.16.0-rc0 Release

    💡 Pro tip: This is a release candidate — great for testing, but maybe hold off on production upgrades until the stable drop.

    Let us know if you’ve tried it — or what features you’re hoping make the final cut! 🧠✨

    🔗 View Release

  • ComfyUI – v0.13.0

    🚨 ComfyUI v0.13.0 is live! 🚨

    The latest drop brings some serious quality-of-life upgrades and new toys for your AI art workflows. Here’s the lowdown:

    🔹 `LoadImageMask` Node Added

    Now you can load masks directly into your graph — no more juggling external files or workarounds for inpainting! 🎯
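
    If you drive ComfyUI over its HTTP API instead of the canvas, the node slots straight into an API-format graph. A minimal sketch, assuming a local instance on the default port (8188) and the node’s usual `image`/`channel` inputs (the filename and graph are illustrative only):

    ```python
    import json
    from urllib import request

    COMFY = "http://127.0.0.1:8188"  # default local ComfyUI address

    # Load a mask from an image's alpha channel, convert it to an image, and save it
    # (the source file must already sit in ComfyUI's input/ folder)
    workflow = {
        "1": {"class_type": "LoadImageMask",
              "inputs": {"image": "example.png", "channel": "alpha"}},
        "2": {"class_type": "MaskToImage", "inputs": {"mask": ["1", 0]}},
        "3": {"class_type": "SaveImage",
              "inputs": {"images": ["2", 0], "filename_prefix": "mask_preview"}},
    }

    req = request.Request(
        f"{COMFY}/prompt",
        data=json.dumps({"prompt": workflow}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    print(request.urlopen(req).read().decode())
    ```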

    🔹 Custom Nodes Just Got Smarter

    Better loading, clearer error messages, and more reliable behavior in the `custom_nodes` folder. Less frustration, more building! 🧩

    🔹 Faster Image Previews

    Large images? No problem. Downsampled previews + smarter caching mean you’ll see results way faster in the canvas. ⚡

    🔹 Node Library UI Overhaul

    Searching and managing nodes in the sidebar? Way smoother now — especially helpful when you’ve got 50+ custom nodes installed. 📚

    🔹 Bug Fixes & Stability Boosts

    Crashes, memory leaks, and workflow save issues? Fixed or significantly improved across Windows/macOS/Linux. 🛠️

    🔹 Experimental WebDAV Support

    Want to mount remote storage? Try it out — but keep backups handy! 🌐

    🔗 Grab it now: ComfyUI v0.13.0

    Let us know what you build with the new mask loader! 🖼️✨

    🔗 View Release

  • Deep-Live-Cam – 2.6

    🚨 AI Enthusiasts—deepfake magic just got a serious upgrade! 🚨

    🔥 Deep-Live-Cam v2.6 “Power Update” is LIVE—and it’s packing serious firepower!

    Virtual Camera Support (OBS-ready!)

    → Stream live deepfakes directly to Twitch, YouTube, Zoom—you name it. Just select “Deep-Live-Cam” as your camera in OBS and boom, real-time magic on stream.

    Blazing-Fast Core Overhaul

    → Rendering, face swap, and video processing are significantly snappier—even the GitHub minified build got a turbo boost. Less waiting, more creating!

    🔄 New “Refresh” Button

    → No more restarting the app to grab a fresh camera feed. One click, and you’re back in action.

    🔧 Bug Fixes & Stability Tweaks

    → A cleaner, more reliable foundation—so future updates land even smoother.

    All in all: bigger, faster, and ready to stream! 🚀

    Grab it now—your next viral deepfake stream is just a click away. 😎

    🔗 View Release

  • Tater – Tater v55

    🥔 Tater v55 – Memory & Evolution

    • Sticky memory
      → Stores structured facts, past setups, and outcomes—not just raw chat logs.
      → Recalls preferences, recognizes patterns, and avoids repeating mistakes for a truly continuous convo.
    • Smarter recall system
      → Remembers important facts & prior actions
      → Looks up previous results to skip re‑explaining
      → Reduces repetition → smoother workflow
    • Agent Lab (BETA) – Self‑building plugins
      → Create, update, or repair plugins directly from the lab.
      → If a capability is missing, Tater can build it; if something breaks, it can fix itself.
    • Self‑Improving loop
      → Diagnose issues → tweak experimental tools → re‑test → iterate until stable.
    • Performance boost & plugin cleanup
      → Phone alerts merged into Camera Events → fewer vision model calls.
      → Redundant AI calls trimmed, logic streamlined, reliability up across the board.

    Bottom line: Tater v55 isn’t just a configurable bot anymore—it’s an adaptive, memory‑rich assistant that can build and refine its own tools while running faster and more reliably than ever. 🚀

    🔗 View Release