Category: AI

AI Releases

  • ComfyUI – v0.3.72

    ComfyUI v0.3.72 just dropped — and it’s the quiet hero your workflows didn’t know they needed 🎯

    No flashy new nodes, but major polish:

    • 🧠 Smarter error messages — No more cryptic crashes. Now you’ll actually know what went wrong.
    • 💾 Better memory handling for big batches — Say goodbye to OOM nightmares.
    • Tighter UI: Smoother node dragging, cleaner context menus.
    • 🔁 Custom nodes now reload on edit — No more full restarts just to test a tweak (minimal node sketch below).
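
    If you want to see that reload in action, here's a minimal, hypothetical custom node you could drop into `custom_nodes/` and tweak between queues. The class layout follows ComfyUI's standard node interface (`INPUT_TYPES`, `RETURN_TYPES`, `FUNCTION`, `NODE_CLASS_MAPPINGS`); the node name and its logic are invented purely for illustration.

    ```python
    # custom_nodes/shout_node/__init__.py (hypothetical example node)
    # Edit run() and re-queue the workflow to watch the on-edit reload pick it up.

    class ShoutNode:
        """Uppercases a string input. Purely illustrative."""

        @classmethod
        def INPUT_TYPES(cls):
            # One required string input with a default value.
            return {"required": {"text": ("STRING", {"default": "hello comfy"})}}

        RETURN_TYPES = ("STRING",)
        FUNCTION = "run"         # method ComfyUI calls when the node executes
        CATEGORY = "utils/demo"  # where the node shows up in the add-node menu

        def run(self, text):
            return (text.upper(),)

    # ComfyUI discovers nodes through these module-level mappings.
    NODE_CLASS_MAPPINGS = {"ShoutNode": ShoutNode}
    NODE_DISPLAY_NAME_MAPPINGS = {"ShoutNode": "Shout (demo)"}
    ```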

    Perfect for power users who want their complex pipelines to run smoothly, not just “kinda work.”

    Grab it → https://www.comfy.org/

    Keep generating. 🚀

    🔗 View Release

  • Lemonade – v9.0.4

    🚀 Lemonade v9.0.4 just dropped — and it’s a game-changer for local LLM folks!

    • Vulkan, ROCm & Metal are now fully updated to crush the latest llama.cpp models — faster inference, smoother performance, better hardware love.
    • New SOTA models added: Qwen3-VL (yes, multimodal!), FLM2-MoE, and Granite 4.0 MoE — all ready to load in the model manager.
    • Infinite inference timeouts? Done. No more hanging on long prompts — your GPU/NPU stays busy, not bored.
    • Cleaner installs: zstd purged from .deb, CMakeLists reorganized for sanity (no more “why is this so messy?” moments).
    • Health & models endpoints now quiet by default — less noise, more focus.
    • FAQ added: Stuck on `HF_HOME`? We’ve got your back now.
    • Fixed: RAI detection, startup glitches, test failures — and finally removed those outdated Open WebUI refs.
    • Default host address updated in README — less confusion on first launch.

    Plus: A shiny new project roadmap is live 📜 — and huge props to @VladimirVLF for their first contribution!

    Upgrade. Load up those MoE models. Break some benchmarks. 🤖💥
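
    Want to script those new models instead of clicking around? Lemonade Server speaks an OpenAI-compatible chat API, so a standard client can drive it. A minimal sketch, assuming a default local endpoint of `http://localhost:8000/api/v1` and a placeholder model ID (check the model manager for the exact names your install exposes):

    ```python
    # Hypothetical sketch: chat with a Lemonade-served model via its OpenAI-compatible API.
    # Base URL and model ID are assumptions; substitute whatever your install reports.
    from openai import OpenAI

    client = OpenAI(
        base_url="http://localhost:8000/api/v1",  # assumed default Lemonade Server endpoint
        api_key="not-needed-for-local",           # local servers typically ignore the key
    )

    resp = client.chat.completions.create(
        model="Granite-4.0-MoE",  # placeholder ID for one of the newly added models
        messages=[{"role": "user", "content": "Summarize what a MoE model is in one sentence."}],
    )
    print(resp.choices[0].message.content)
    ```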

    🔗 View Release

  • Tater – Tater v39

    🚨 Tater v39 just dropped—and your original Xbox just became the most unexpected AI assistant since Siri’s baby brother tried to run Linux.

    🎮 Native XBMC4Xbox Support

    Tater now runs straight on stock 2001 Xboxes via Python bridge—no mods, no hacks. Just power on and chat with AI in your living room.

    Cortana-Themed UI

    A pixel-perfect throwback to early-2000s Xbox menus—with Tater’s chat window styled like a lost Cortana beta. Replies pop up in that iconic 2003 dialog box. History scrolls up. Just like it should.

    🏡 Smart Home via Controller

    Say “Turn the game room lights blue” or “Lock the front door”—and Tater talks directly to Home Assistant. Your Xbox isn’t just playing Halo anymore… it’s running your house.
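
    Behind the scenes, that kind of command boils down to a service call against Home Assistant's REST API. Here's a rough sketch of the equivalent request (not Tater's actual code; host, token, and entity ID are placeholders for your own setup):

    ```python
    # Hypothetical sketch of the Home Assistant call behind "turn the game room lights blue".
    # Host, token, and entity_id are placeholders.
    import requests

    HA_URL = "http://homeassistant.local:8123"
    TOKEN = "YOUR_LONG_LIVED_ACCESS_TOKEN"

    resp = requests.post(
        f"{HA_URL}/api/services/light/turn_on",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"entity_id": "light.game_room", "rgb_color": [0, 0, 255]},
        timeout=10,
    )
    resp.raise_for_status()
    print("Lights updated:", resp.status_code)
    ```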

    🧠 Zero Dependencies, Pure Nostalgia

    No Python installs. No network clutter. Just the original firmware + AI magic. Plug in. Boot up. Talk to your console like it’s 2003.

    ❤️ Built by legends: Jezz_X, Team Blackbolt, faithvoid, and Steve Matteson. This isn’t a mod—it’s a love letter to the golden age of Xbox hacking.

    👉 Grab the Cortana skin: https://github.com/TaterTotterson/skin.cortana.tater-xbmc

    Your 23-year-old Xbox? Now the coolest AI in the room. 📺💚

    🔗 View Release

  • Ollama – v0.13.1-rc0

    🚀 Ollama v0.13.1-rc0 just dropped — and it’s a quiet win for local LLM tinkerers!

    The biggest upgrade? 📚 `ollama help` now opens the official docs instead of GitHub. No more scrolling through repos — instant access to clear, curated guides.

    Under the hood:

    • Smoother CLI flow (less friction, less typing)
    • Minor bug fixes & polish

    This is a release candidate — ready for testing, perfect if you’re running Llama 3, DeepSeek-R1, or GGUF models locally.

    Full v0.13.1 is coming soon — but this? It’s already a quality-of-life win. 🛠️✨

    🔗 View Release

  • ComfyUI – v0.3.71

    ComfyUI v0.3.71 is live — quiet release, massive quality-of-life wins! 🎨✨

    • Smarter error messages — No more cryptic crashes. Now you’ll know why that node blew up.
    • Smoother canvas — Panning and zooming feel buttery, even with 50+ node workflows.
    • Custom nodes? Fixed. — Third-party nodes won’t break on reload anymore. Keep your favorite tools alive!
    • Cleaner UI — Tiny tweaks to labels and connections — looks sharper, feels more polished.

    And hey — Python 3.11+ is now recommended. If you’re still on 3.9, it’s time to upgrade for speed + stability.

    No flashy new nodes… but everything just works better. Update, reload your workflows, and keep building. 💪

    🔗 View Release

  • Home Assistant Voice PE – 25.11.0

    Home Assistant Voice PE just dropped v25.11.0 🚀

    Big win: Wake word detection is now faster—your AI hears you before you finish saying “Hey HA.” No more awkward pauses.

    Music & announcements? Smooth as butter. HTTP timeouts are GONE—streaming stays flawless, even during late-night coffee runs. ☕🎧

    And big thanks to the Open Home Foundation for stepping in as sponsor! 78 releases and counting… this thing’s becoming a powerhouse.

    Full changelog: 25.10.0…25.11.0

    🔗 View Release

  • Deep-Live-Cam – 2.3d

    🚨 Deep-Live-Cam 2.3d just dropped — and it’s a game-changer for real-time face swaps!

    Smart Model Picker — Browse and swap top-tested models with one click. No more digging through folders.

    🤯 HyperSwap 256×256 — Face swaps now 200% sharper. Details? Crisp. Artifacts? Gone.

    Face Enhancer v2 — Up to 4x faster, zero lag. Your stream won’t stutter even with heavy swaps.

    Mouth Mask + FPS Counter — Fixed those weird mouth glitches and now you can monitor performance live.

    🚫 One-click magic — Run `deep-live-cam.bat` and it just works. No more config headaches.

    All of this? Only in QuickStart for now. Windows & Apple Silicon Mac users — update ASAP.

    Keep swapping smarter, not harder. 🎭💻

    🔗 View Release

  • Ollama – v0.13.0

    🚀 Ollama v0.13.0 is live — and it’s a game-changer for local LLM folks!

    Meet DeepSeek-V3.1 (aka Deepseek2) — now officially supported with 128K context, razor-sharp reasoning, and killer coding skills. But here’s the kicker: it’s running on Ollama’s brand-new engine with MLA (Multi-head Latent Attention) — meaning faster token generation, lower latency, and no more sluggish long-context hangs.

    What’s new?

    • ✅ DeepSeek-V3.1 support — perfect for complex prompts, multilingual tasks & code generation
    • 🚀 MLA engine = smoother, faster inference on both CPU and GPU (NVIDIA/AMD)
    • 💡 Optimized streaming — ideal for chat apps, agents, and real-time LLM workflows

    Just run `ollama pull deepseek2` and feel the difference. No more waiting. Just pure, local LLM power. 🤖💻
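
    Prefer driving Ollama from Python instead of the CLI? A streaming chat sketch with the official `ollama` client library looks roughly like this; the model tag simply mirrors the pull command above, so swap in whatever `ollama list` shows on your machine:

    ```python
    # Minimal streaming chat sketch using the ollama Python client.
    # The model tag mirrors the pull command above; adjust it to what `ollama list` reports.
    import ollama

    stream = ollama.chat(
        model="deepseek2",
        messages=[{"role": "user", "content": "Write a Python one-liner that reverses a string."}],
        stream=True,  # tokens arrive as they are generated
    )

    for chunk in stream:
        print(chunk["message"]["content"], end="", flush=True)
    print()
    ```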

    🔗 View Release

  • Lemonade – v9.0.3

    🚀 Lemonade v9.0.3 just dropped — and it’s a game-changer for local LLM folks!

    The C++ server now ships with a clean, official `.msi` installer (`lemonade-server-minimal.msi`) — goodbye clunky .exe, hello Windows stability 🎯.

    ✨ What’s new:

    • C++ system info now matches Python’s accuracy — no more mismatched specs!
    • Embedding UX got a serious polish: smoother, faster, less lag.
    • Model list now pulls from FLM + single source of truth 🗂️ (no more duplicate chaos).
    • Fixed bugs in `flm install`, `user_models.json`, and the `list` command.
    • Linux users: `unzip` is now a .deb dependency — no more “command not found” headaches.
    • Help menu cleaned up ✨, and “Version:” logs cleanly in the terminal 📋
    • Python tests now only run when code changes — faster builds, less noise.

    All wrapped in a sleek WiX-built MSI for rock-solid Windows installs.

    Switching to local LLMs just got even easier. Grab it, tweak it, own your AI. 🚀

    🔗 View Release

  • text-generation-webui – v3.18

    🔥 text-generation-webui v3.18 is live — and llama.cpp just leveled up!

    • 🖥️ `--cpu-moe` flag dropped — offload MoE experts to CPU and run massive models on low-end GPUs. VRAM? Who needs it.
    • 🐧 ROCm support is HERE! AMD GPU users on Linux — rejoice. No CUDA? No problem.
    • 🍎 macOS 13 wheels retired. Time to update your OS if you’re still on Big Sur or earlier.
    • 🚀 Backend upgrades:
      • llama.cpp → latest commit (10e9780) — smoother, faster, more stable
      • ExLlamaV3 v0.0.15 — better quant, faster attention
      • peft 0.18.* — new LoRA magic for fine-tuning lovers
      • triton-windows 3.5.1.post21 — Windows inference just got a turbo boost

    📦 Portable builds? Still the best part.

    Download → unzip → run. No pip, no install.

    • NVIDIA? `cuda12.4`
    • AMD/Intel? Use `vulkan`
    • CPU-only? `cpubuilds` is your hero
    • Mac M1/M2? `macos-arm64` — all set

    🔧 Upgrading? Just swap the binary. Your `user_data/` folder stays untouched — models, configs, themes… all safe.

    Go run a massive MoE model on your old laptop. The future isn’t just local — it’s portable. 🎒💻
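
    And you don't have to stay in the browser once a model is loaded: start the server with its API enabled (the `--api` flag) and text-generation-webui serves an OpenAI-compatible endpoint you can script against. A rough sketch, with the port and model name as assumptions for a default local setup:

    ```python
    # Hypothetical sketch: query a model loaded in text-generation-webui through its
    # OpenAI-compatible API (server started with --api; port 5000 assumed as the default).
    from openai import OpenAI

    client = OpenAI(
        base_url="http://127.0.0.1:5000/v1",  # assumed default API address
        api_key="not-needed-for-local",
    )

    resp = client.chat.completions.create(
        model="whatever-you-loaded",  # placeholder; the webui answers with the currently loaded model
        messages=[{"role": "user", "content": "Explain MoE expert offloading in two sentences."}],
    )
    print(resp.choices[0].message.content)
    ```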

    🔗 View Release