• Text Generation Webui – v3.23

    ✨ Chat UI got a glow-up! Tables and dividers now look clean, crisp, and way easier to read – perfect for scrolling through long model outputs without eye strain.

    🔧 Bug fixes that actually matter:

    • Models with `eos_token` disabled? No more crashes! Huge props to @jin-eld 🙌
    • Symbolic link issues in `llama-cpp-binaries` fixed – non-portable installs breathe easier now.

    🚀 Backend power-up:

    • `llama.cpp` updated to latest commit (`55abc39`) → faster, smoother inference
    • `bitsandbytes` bumped to 0.49 → better quantization, fewer OOMs, more stable loads

    📦 PORTABLE BUILDS ARE LIVE!

    Download. Unzip. Run. No install needed.

    • NVIDIA? → `cuda12.4`
    • AMD/Intel GPU? → `vulkan`
    • CPU-only? → `cpu`
    • Mac Apple Silicon? → `macos-arm64`

    💾 Updating? Just grab the new zip, unzip, and drop your old `user_data` folder in. All your models, settings, themes – still there. Zero reconfiguring.
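
    The update flow boils down to copying one folder. A minimal sketch of that step in Python – the install paths are hypothetical examples, not names from the release:

    ```python
    import shutil
    from pathlib import Path

    def migrate_user_data(old_install: Path, new_install: Path) -> Path:
        """Copy user_data from an old portable install into a freshly unzipped one.

        Both paths are placeholders; point them at your actual folders.
        """
        src = old_install / "user_data"
        dst = new_install / "user_data"
        if not src.is_dir():
            raise FileNotFoundError(f"no user_data folder in {old_install}")
        # dirs_exist_ok merges with any user_data the new zip already ships with
        shutil.copytree(src, dst, dirs_exist_ok=True)
        return dst
    ```

    After the copy, launching the new build should pick up your models, settings, and themes unchanged.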

    Go play. No setup. Just pure LLM magic. 🚀

    🔗 View Release

  • ComfyUI – v0.8.2

    ComfyUI v0.8.2 is live – quiet updates, big improvements! 🛠️🎨

    • Fixed edge-case crashes in heavy workflows (custom nodes + memory-heavy chains, we see you).
    • Smoother node connections & improved drag-and-drop feel – tiny tweaks, huge UX win.
    • Custom node icons now load reliably after restarts (no more ghost placeholders!).
    • Security deps updated – clean, safe, no breaking changes.

    Perfect for 24/7 creators who just want their pipelines to work. No flashy new nodes… but if you’ve been battling glitches, this is your upgrade. 💻🧠

    Keep crafting – stay comfy!

    🔗 View Release

  • ComfyUI – v0.8.1

    🚨 ComfyUI v0.8.1 is live – and it’s a quiet hero update!

    • 🛠️ Fixed critical node crashes – KSampler and ImageScaleToTotalPixels now play nice, no more mid-generate meltdowns.
    • 🧠 Better memory management – Large models + batch processing? Smoother than ever, even on low-RAM rigs.
    • 🔧 Updated torch & numpy – Under-the-hood upgrades for rock-solid stability on Windows, Mac, and Linux.
    • ✨ UI polish – Node labels finally stay readable after zooming. No more text soup!

    💡 Pro tip: If you’ve been battling “Node crashed” errors – this is your sign to update. No breaking changes, just pure, stable, render-ready vibes.

    📦 Grab it: https://github.com/Comfy-Org/ComfyUI/releases/tag/v0.8.1

    Your workflows just got a lot more reliable. 🎨✨

    🔗 View Release

  • Lemonade – v9.1.2

    Lemonade v9.1.2 just dropped – and your local LLM game just leveled up 🚀

    🔥 Custom Model Recipes: Build & share your own configs with `--extra-models-dir` – tweak prompts, quantization, or hardware settings on the fly.

    🎮 Native NPU/ROCm Support: Run leaner, faster on ROG Ally X and Ryzen AI Z2 Extreme – no workarounds needed.

    📦 LM Studio GGUF Ready: Drop your GGUF files in `--extra-models-dir` and they auto-detect with an `extra.` prefix.
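
    The `extra.` naming scheme is easy to picture with a small sketch that scans a directory for GGUF files – an illustration of the behavior described above, not Lemonade's actual code:

    ```python
    from pathlib import Path

    def discover_extra_models(extra_models_dir: str) -> dict:
        """Map 'extra.'-prefixed model names to GGUF files found in a directory.

        Illustrative only: mimics the auto-detection described in the notes.
        """
        registry = {}
        for gguf in sorted(Path(extra_models_dir).glob("**/*.gguf")):
            # the file stem becomes the model name, marked with an 'extra.' prefix
            registry[f"extra.{gguf.stem}"] = gguf
        return registry
    ```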

    ⚡ FastFlowLM 0.9.24: Smoother inference, smarter memory use, fewer crashes – your CPU/GPU will thank you.

    🐳 Dockerized Build Guide: New step-by-step docs to containerize Lemonade CPP – perfect for dev environments.

    🛠️ UX Upgrades: Cleaner UI, fixed mobile layout, centered subtitles, and smarter model reloads after FLM updates.

    🔑 API Key Support: Lock down your local endpoint via `.env` – keep your models private and secure.
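
    A minimal `.env` along these lines is all it takes – note the variable name here is a guess, so check the Lemonade docs for the exact key your version expects:

    ```
    # .env – keep this file out of version control
    LEMONADE_API_KEY=replace-with-a-long-random-string
    ```

    Clients then present the key with each request, and anything without it gets rejected.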

    🧠 New Model: Meet Nemotron 3 Nano – tiny footprint, big brain power for edge devices.

    Plus: sleeker site, better docs, fewer headaches. Your local AI rig just got a serious upgrade. 🛠️💻

    🔗 View Release

  • ComfyUI – v0.8.0

    ComfyUI v0.8.0 just dropped – and it’s a game-changer 🎨⚡

    • Native WebSockets → Real-time node updates & smoother previews (even on mobile!). No more sluggish HTTP polling.
    • Built-in Node Registry → Install custom nodes with a click. No more manual folder diving – just search, install, and go.
    • Memory Boost → 50-node workflows? Still running smooth. Better GC = fewer crashes during late-night renders.
    • Smart Node Search → Fuzzy matching now live – type “upscale” and find every relevant node in seconds. 🕵️‍♂️
    • Python 3.10+ Only → Dropped 3.9 for speed & stability. Time to upgrade!
    • Dark Mode Pro → Smoother contrast, crisper icons. Your eyes will thank you at 3 AM.
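
    Fuzzy matching of this kind can be approximated in a few lines with Python's standard library – a generic sketch, not ComfyUI's implementation:

    ```python
    from difflib import SequenceMatcher

    def fuzzy_search(query, node_names, cutoff=0.4):
        """Rank node names by similarity to the query, best match first."""
        q = query.lower()

        def score(name):
            n = name.lower()
            # substring hits rank above pure edit-distance similarity
            bonus = 1.0 if q in n else 0.0
            return bonus + SequenceMatcher(None, q, n).ratio()

        ranked = sorted(node_names, key=score, reverse=True)
        return [n for n in ranked if score(n) >= cutoff]
    ```

    Typing "upscale" would surface names like `Latent Upscale` ahead of unrelated nodes.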

    Windows users: New installer auto-detects CUDA + installs the right torch version. No more “why won’t it run?!” 😅

    Plus: 12+ community nodes already live in the registry. Update now, tweak your flows, and let the AI art flow 🚀

    🔗 View Release

  • MLX-LM – v0.30.2

    mlx-lm v0.30.2 is out – quiet update, big win for Apple Silicon devs! 🍎

    This patch fixes a sneaky build/deploy issue in the release pipeline. No flashy features, but if you’ve been wrestling with install errors or import fails on M-series chips, this is the fix you’ve been waiting for.

    Upgrade now and get back to running LLMs smoothly – Hugging Face models, quantization, long-context gen… all working as intended.

    🔗 Full details: [changelog](v0.30.1…v0.30.2)

    Stay sharp, stay Apple-silicon-powered!

    🔗 View Release

  • MLX-LM – v0.30.1

    🔥 MLX LM v0.30.1 is LIVE – Apple Silicon LLMs just got a massive upgrade!

    🚀 New Models: RWKV7, Solar Open, K-EXAONE MoE, IQuest Coder V1, YoutuLLM + Minimax M2 (perfect for long-context chats)!

    💬 Chat Fixes: Custom DSV32 templates work, non-standard tokenizers behave, and `generation_config` errors? Ignored – no more crashes.

    ⚡ Performance: GIL starvation fixed in `_generate`, Phi3 (LongRoPE) batched prompts now stable, and `load_config` checks files first – smarter loading.

    ✨ New Features: `logits_processors` in `batch_generate` (fine-tune outputs like a pro), `model-path` flag for cleaner conversions, and support for mxfp8 & nvfp4 quantization – squeeze more power from your M-series chip!
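
    A logits processor is typically just a callable that takes the tokens generated so far plus the current logits and returns adjusted logits. A pure-Python sketch of that shape (banning one token id) – the exact signature mlx-lm expects may differ, so treat this as illustrative:

    ```python
    import math

    def ban_token(banned_id):
        """Build a logits processor that makes one token id unpickable."""

        def processor(tokens, logits):
            # tokens: ids generated so far; logits: one score per vocab entry
            out = list(logits)
            out[banned_id] = -math.inf
            return out

        return processor
    ```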

    📋 Bug fixes: `/v1/models` now shows local models correctly.

    Big thanks to new contributors: @cubist38, @vyaivanov, @sjugin, @jaycoolslm, @MollySophia, @cxl-git-hub, @lazarust!

    Upgrade. Tinker. Crush your next LLM project. 🛠️💻

    🔗 View Release

  • Tater – Tater v45

    Tater v45 just dropped – and it’s got potato-powered upgrades 🥔💚

    ✅ Home Assistant Control now knows which light you meant (no more turning on the fridge by accident). Smarter device detection + rock-solid commands, even when your Wi-Fi is feeling lazy.

    🔧 Plugins got a glow-up: fewer glitches, smarter intent parsing – say goodbye to “turn on the fan” when you asked for lights.

    🌐 WebUI & RSS Watcher? Cleaned up, smarter, and less spammy. Your feeds now remember your prefs + auto-dedupe. No more 17 copies of the same Hacker News post.

    Same Tater. More brains. 20% more spuddy.

    Grab it: github.com/TaterTotterson/Tater

    🔗 View Release

  • Perplexica – v1.12.1

    🚀 Perplexica v1.12.1 just dropped – your open-source, SearxNG-powered AI search engine just got way smoother!

    ✨ LM Studio integration is LIVE – now you can hook up your local LLMs (Qwen, DeepSeek, Llama, Mistral) with zero hassle. Just launch LM Studio and Perplexica picks it up automatically.

    🔧 Function calling fixed – OpenAI-compatible APIs no longer misfire. No more phantom tool calls or confused agents.

    🧩 JSON parsing? Rock solid now. Say goodbye to “why is this not an object?” errors – responses are clean, predictable, and ready for your pipelines.
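
    Robust JSON handling for model output usually means tolerating prose around the payload. A generic sketch (not Perplexica's code) that pulls the first top-level JSON object out of a response string:

    ```python
    import json

    def extract_json_object(text):
        """Return the first top-level JSON object embedded in a response string."""
        start = text.find("{")
        if start == -1:
            raise ValueError("no JSON object found")
        depth = 0
        # walk brace depth so nested objects inside the payload are handled
        for i, ch in enumerate(text[start:], start):
            if ch == "{":
                depth += 1
            elif ch == "}":
                depth -= 1
                if depth == 0:
                    return json.loads(text[start:i + 1])
        raise ValueError("unbalanced braces in response")
    ```

    Braces inside JSON strings would trip this simple depth counter; production code would lean on an incremental parser instead.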

    Perfect if you’re juggling local models + cloud APIs – less friction, more power.

    Full changelog: [v1.12.0…v1.12.1](link-if-available)

    Docker? Direct install? API access? Yep, all still there. 🛠️

    🔗 View Release

  • ComfyUI – v0.7.0

    ComfyUI v0.7.0 just landed – and it’s a full workflow upgrade! 🚀

    • Customizable Nodes 🛠️: Change labels, colors & tooltips to make complex pipelines crystal clear – perfect for team workflows.
    • Queue Manager 📋: See all jobs in one place. Pause, prioritize, or cancel generations with a click. No more guessing what’s running.
    • 30% Less VRAM 🚀: Smarter tensor caching means bigger workflows, fewer crashes. More nodes = more magic.
    • WebUI Drag & Drop 🌐: Now drop nodes directly from your browser. Mobile-friendly too – tweak your AI art on the go.
    • New Nodes! 🎨: `Latent Upscale (SwinIR)`, `CLIP Text Encode (Multi-Condition)`, and `Image Blend (Alpha Mask)` – plug ‘n’ play.
    • Python 3.10+ Only 🐍: Dropped old versions for speed & stability – time to upgrade if you’re still on 3.8!

    💡 Pro tip: Try “Node Presets” to save & share your favorite node combos. Team workflows just got a whole lot easier.

    Grab it: github.com/comfyanonymous/ComfyUI/releases/tag/v0.7.0

    Your next masterpiece is queued up and ready to roll. 🎨✨

    🔗 View Release