  • ComfyUI – v0.3.71

    ComfyUI v0.3.71 is live — quiet release, massive quality-of-life wins! 🎨✨

    • Smarter error messages — No more cryptic crashes. Now you’ll know why that node blew up.
    • Smoother canvas — Panning and zooming feel buttery, even with 50+ node workflows.
    • Custom nodes? Fixed. — Third-party nodes won’t break on reload anymore. Keep your favorite tools alive!
    • Cleaner UI — Tiny tweaks to labels and connections — looks sharper, feels more polished.

    And hey — Python 3.11+ is now recommended. If you’re still on 3.9, it’s time to upgrade for speed + stability.
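
    Not sure what your environment is running? A two-line check before launch saves a headache (plain Python, nothing ComfyUI-specific):

    ```python
    import sys

    # ComfyUI v0.3.71 recommends Python 3.11+; warn before launching on older interpreters.
    if sys.version_info < (3, 11):
        print(f"Python {sys.version.split()[0]} detected; consider upgrading to 3.11+")
    ```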

    No flashy new nodes… but everything just works better. Update, reload your workflows, and keep building. 💪

    🔗 View Release

  • Home Assistant Voice PE – 25.11.0

    Home Assistant Voice PE just dropped v25.11.0 🚀

    Big win: Wake word detection is now faster—your assistant catches the wake word before you’ve finished saying it. No more awkward pauses.

    Music & announcements? Smooth as butter. HTTP timeouts are GONE—streaming stays flawless, even during late-night coffee runs. ☕🎧
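
    If you script announcements yourself, a minimal sketch against the Home Assistant REST API looks like this. The host, token, and entity id are placeholders, and `assist_satellite.announce` assumes a reasonably current Home Assistant core:

    ```python
    import requests

    # Placeholders: point these at your own instance and Voice PE satellite entity.
    HA_URL = "http://homeassistant.local:8123"
    TOKEN = "YOUR_LONG_LIVED_ACCESS_TOKEN"

    resp = requests.post(
        f"{HA_URL}/api/services/assist_satellite/announce",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={
            "entity_id": "assist_satellite.voice_pe",  # hypothetical entity id
            "message": "Coffee is ready",
        },
        timeout=10,
    )
    resp.raise_for_status()
    ```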

    And big thanks to the Open Home Foundation for stepping in as sponsor! 78 releases and counting… this thing’s becoming a powerhouse.

    Full changelog: 25.10.0…25.11.0

    🔗 View Release

  • Deep-Live-Cam – 2.3d

    🚨 Deep-Live-Cam 2.3d just dropped — and it’s a game-changer for real-time face swaps!

    Smart Model Picker — Browse and swap top-tested models with one click. No more digging through folders.

    🤯 HyperSwap 256×256 — a higher-resolution swap model that makes faces noticeably sharper. Details? Crisp. Artifacts? Way down.

    Face Enhancer v2 — Up to 4x faster, zero lag. Your stream won’t stutter even with heavy swaps.

    Mouth Mask + FPS Counter — Fixed those weird mouth glitches and now you can monitor performance live.
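
    The FPS counter ships in the app, but the idea generalizes to any OpenCV pipeline: a rolling frame-time average drawn onto each frame. A generic sketch, not Deep-Live-Cam’s actual code:

    ```python
    import time
    import cv2

    cap = cv2.VideoCapture(0)  # default webcam
    prev = time.perf_counter()
    fps = 0.0

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        now = time.perf_counter()
        # Exponential moving average keeps the readout steady frame to frame.
        fps = 0.9 * fps + 0.1 / max(now - prev, 1e-6)
        prev = now
        cv2.putText(frame, f"{fps:.1f} FPS", (10, 30),
                    cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 255, 0), 2)
        cv2.imshow("preview", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break

    cap.release()
    cv2.destroyAllWindows()
    ```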

    ⚡ One-click magic — Run `deep-live-cam.bat` and it just works. No more config headaches.

    All of this? Only in QuickStart for now. Windows & Apple Silicon Mac users — update ASAP.

    Keep swapping smarter, not harder. 🎭💻

    🔗 View Release

  • Ollama – v0.13.0

    🚀 Ollama v0.13.0 is live — and it’s a game-changer for local LLM folks!

    Meet DeepSeek-V3.1 (the `deepseek2` architecture) — now officially supported with 128K context, razor-sharp reasoning, and killer coding skills. But here’s the kicker: it’s running on Ollama’s brand-new engine with MLA (Multi-head Latent Attention) — meaning faster token generation, lower latency, and no more sluggish long-context hangs.

    What’s new?

    • ✅ DeepSeek-V3.1 support — perfect for complex prompts, multilingual tasks & code generation
    • 🚀 MLA engine = smoother, faster inference on both CPU and GPU (NVIDIA/AMD)
    • 💡 Optimized streaming — ideal for chat apps, agents, and real-time LLM workflows

    Just run `ollama pull deepseek-v3.1` and feel the difference. No more waiting. Just pure, local LLM power. 🤖💻
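
    The optimized streaming is easiest to feel over the local REST API. A minimal sketch; the model tag assumes the DeepSeek-V3.1 listing in the Ollama library:

    ```python
    import json
    import requests

    # Stream tokens from the local Ollama server (default port 11434).
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "deepseek-v3.1",  # assumed library tag; use whatever you pulled
              "prompt": "Explain MLA in one paragraph.",
              "stream": True},
        stream=True,
        timeout=300,
    )
    for line in resp.iter_lines():
        if line:
            print(json.loads(line).get("response", ""), end="", flush=True)
    ```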

    🔗 View Release

  • Lemonade – v9.0.3

    🚀 Lemonade v9.0.3 just dropped — and it’s a game-changer for local LLM folks!

    The C++ server now ships with a clean, official `.msi` installer (`lemonade-server-minimal.msi`) — goodbye clunky .exe, hello Windows stability 🎯.

    ✨ What’s new:

    • C++ system info now matches Python’s accuracy — no more mismatched specs!
    • Embedding UX got a serious polish: smoother, faster, less lag.
    • Model list now pulls from FLM as a single source of truth 🗂️ — no more duplicate chaos.
    • Fixed bugs in `flm install`, `user_models.json`, and the `list` command.
    • Linux users: `unzip` is now a .deb dependency — no more “command not found” headaches.
    • Help menu cleaned up ✨, and “Version:” logs cleanly in the terminal 📋
    • Python tests now only run when code changes — faster builds, less noise.

    All wrapped in a sleek WiX-built MSI for rock-solid Windows installs.

    Switching to local LLMs just got even easier. Grab it, tweak it, own your AI. 🚀
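
    Once the server is running it speaks an OpenAI-compatible API, so a smoke test takes a few lines. A sketch only: the port, route, and model name below are assumptions, so check your server’s startup output and the `list` command for the real values:

    ```python
    import requests

    # Assumed defaults for a local Lemonade server; verify the route your build exposes.
    resp = requests.post(
        "http://localhost:8000/api/v1/chat/completions",
        json={
            "model": "YOUR-MODEL-NAME",  # placeholder; pick one from the model list
            "messages": [{"role": "user", "content": "Say hello from Lemonade."}],
        },
        timeout=60,
    )
    print(resp.json()["choices"][0]["message"]["content"])
    ```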

    🔗 View Release

  • text-generation-webui – v3.18

    🔥 text-generation-webui v3.18 is live — and llama.cpp just leveled up!

    • 🖥️ `--cpu-moe` flag dropped — offload MoE experts to CPU and run massive models on low-end GPUs (see the launch sketch after this list). VRAM? Who needs it.
    • 🐧 ROCm support is HERE! AMD GPU users on Linux — rejoice. No CUDA? No problem.
    • 🍎 macOS 13 wheels retired. Time to update your OS if you’re still on Big Sur or earlier.
    • 🚀 Backend upgrades:
    • llama.cpp → latest commit (10e9780) — smoother, faster, more stable
    • ExLlamaV3 v0.0.15 — better quant, faster attention
    • peft 0.18.* — new LoRA magic for fine-tuning lovers
    • triton-windows 3.5.1.post21 — Windows inference just got a turbo boost
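
    For the `--cpu-moe` launch mentioned above, here’s a sketch of what it looks like scripted from Python. The launcher name and GGUF filename are placeholders; portable builds ship their own platform-specific start script:

    ```python
    import subprocess

    # Placeholders: adjust the launcher and model filename for your setup.
    subprocess.run([
        "./start_linux.sh",              # e.g. start_windows.bat on Windows
        "--model", "my-moe-model.gguf",  # hypothetical MoE quant in user_data/models
        "--cpu-moe",                     # keep MoE expert weights in system RAM
    ], check=True)
    ```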

    📦 Portable builds? Still the best part.

    Download → unzip → run. No pip, no install.

    • NVIDIA? `cuda12.4`
    • AMD/Intel? Use `vulkan`
    • CPU-only? `cpubuilds` is your hero
    • Mac M1/M2? `macos-arm64` — all set

    🔧 Upgrading? Just swap the binary. Your `user_data/` folder stays untouched — models, configs, themes… all safe.

    Go run a massive MoE model on your old laptop. The future isn’t just local — it’s portable. 🎒💻

    🔗 View Release

  • Ollama – v0.13.0-rc0

    🚀 Ollama v0.13.0-rc0 just dropped — and it’s packed with power!

    Say hello to DeepSeek-V3.1 (the `deepseek2` architecture) — one of the most capable open LLMs out there, now available with a simple `ollama pull deepseek-v3.1`.

    Why it’s awesome:

    • 🚀 MLA (Multi-head Latent Attention) is live — cuts memory use, speeds up inference, and keeps reasoning sharp.
    • 🛠️ New engine under the hood = smoother runs, fewer crashes, better future-proofing.
    • 💥 Run state-of-the-art reasoning on your laptop — no cloud needed.

    GGUF? Still supported. API? Still there. CLI? Even better.
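
    Driving it from Python instead of the CLI is just as short with the official `ollama` client (a sketch; assumes `pip install ollama` and that the model tag below matches what you pulled):

    ```python
    import ollama

    # Assumes the DeepSeek-V3.1 weights are already pulled locally.
    stream = ollama.chat(
        model="deepseek-v3.1",  # assumed tag; substitute your local one
        messages=[{"role": "user", "content": "What does MLA change in practice?"}],
        stream=True,
    )
    for part in stream:
        print(part["message"]["content"], end="", flush=True)
    ```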

    This isn’t just an update — it’s your ticket to running top-tier models locally, faster than ever.

    Go grab it:

    `ollama pull deepseek-v3.1`

    #LocalAI #DeepSeek #Ollama #LLM

    🔗 View Release

  • Heretic – v1.0.1

    Heretic v1.0.1 is live 🎉 — the first public release of the fully automated LLM censorship remover is here, and it’s wilder than you thought.

    No more manual tuning. No labeled data. Just run `heretic Qwen/Qwen3-4B-Instruct-2507` and watch it surgically erase refusal directions using directional ablation. The guardrails come off while the model’s brain stays intact.
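
    Directional ablation itself is plain linear algebra: estimate a “refusal direction” from activations, then project it out of the hidden states. A toy numpy sketch of the projection step (conceptual only, not Heretic’s code):

    ```python
    import numpy as np

    def ablate_direction(h: np.ndarray, r: np.ndarray) -> np.ndarray:
        """Remove the component of hidden states h along refusal direction r."""
        r_hat = r / np.linalg.norm(r)
        return h - np.outer(h @ r_hat, r_hat)  # h' = h - (h · r̂) r̂

    rng = np.random.default_rng(0)
    h = rng.normal(size=(4, 8))   # toy batch of 4 hidden states, dimension 8
    r = rng.normal(size=8)        # toy refusal direction
    h_prime = ablate_direction(h, r)
    print(np.allclose(h_prime @ (r / np.linalg.norm(r)), 0))  # True: direction removed
    ```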

    🔥 What’s new in v1.0.1?

    • First stable release: Beta’s over — this is the real deal.
    • 🚀 8B model decensoring in ~45 mins on RTX 3090 — fast, lean, and mean.
    • 🧪 Improved KL divergence control: More original intelligence preserved post-ablation.
    • 💾 Save or push to Hugging Face with one command — no PhD needed.
    • 🛠️ Better MoE support: Now handles Qwen-MoE and Llama-MoE with fewer hiccups.
    • 📊 Enhanced eval suite: Auto-benchmarks refusal rates + output quality in one shot.

    Built with PyTorch 2.2+, AGPL-3.0 licensed, and ready to break the safety chains.

    Go run it. Then ask: “Why did we ever accept this?” 💥

    🔗 View Release

  • Chatterbox – v0.1.2

    Chatterbox v0.1.2 just dropped—and it’s a game-changer for TTS tinkerers 🎙️

    M1/M2 Macs rejoice: Native support via MPS—no more Rosetta slowdowns.

    🔊 Safetensors everywhere: Faster, safer model loads + new WAV examples to play with.

    🛠️ CFG scaling optional: Dial realism or creativity like a knob—perfect for voice acting or AI bots.

    🐛 CUDA errors? Gone. GPU runs smoother than ever.

    🎮 `min_p` sampler added for finer sampling control—less robotic, more human.

    📚 Docs now crystal clear on OS/Python deps + watermarking (PerTh) best practices.

    📣 New Discord link fixed & live—join to share voice clones, memes, and cat meows 🐱🔊

    🌟 7 fresh contributors brought the heat—thank you!

    Install with `pip install chatterbox-tts` and start cloning voices (or your pet’s purr) in seconds.
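
    A minimal generation script looks roughly like this, following the project README; the "mps" device string assumes the new Apple Silicon path:

    ```python
    import torchaudio as ta
    from chatterbox.tts import ChatterboxTTS

    # "mps" on Apple Silicon, "cuda" on NVIDIA, "cpu" otherwise.
    model = ChatterboxTTS.from_pretrained(device="mps")
    wav = model.generate("Chatterbox v0.1.2 says hello.")
    ta.save("hello.wav", wav, model.sr)
    ```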

    Full changelog: https://github.com/resemble-ai/chatterbox/commits/v0.1.2

    🔗 View Release

  • ComfyUI – v0.3.70

    ComfyUI v0.3.70 just landed — and it’s the quiet hero your workflows have been waiting for 🚀

    • Memory got smarter — Fewer crashes on big SDXL or 4K renders. Keep those long pipelines running without hitting OOM hell.
    • Nodes won’t kill your whole graph — A single failed node? No problem. The rest of your canvas keeps humming along.
    • UI tweaks that matter — Smoother panning, fixed tooltip glitches, cleaner labels. Tiny changes, big comfort.
    • PyTorch & CUDA updates — Linux users, rejoice: better compatibility under the hood.

    Pro tip: Drop your batch size by 1 if you’ve been battling memory limits — you’ll be amazed how much longer your renders last.
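
    If you queue jobs through ComfyUI’s HTTP API, that tweak is scriptable: the batch size lives on the empty-latent node in a workflow exported via “Save (API Format)”. A sketch, with the filename and node id as placeholders:

    ```python
    import json
    import requests

    # Placeholder filename: a workflow exported with "Save (API Format)".
    with open("workflow_api.json") as f:
        workflow = json.load(f)

    # Hypothetical node id "5" for an EmptyLatentImage node; find yours in the JSON.
    inputs = workflow["5"]["inputs"]
    inputs["batch_size"] = max(1, inputs["batch_size"] - 1)

    # Queue the prompt on a default local ComfyUI instance.
    requests.post("http://127.0.0.1:8188/prompt", json={"prompt": workflow}, timeout=10)
    ```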

    No flashy new nodes… just a more stable, reliable engine. Sometimes the best upgrades are the ones you don’t notice — because they just work. 💪

    🔗 View Release