• Ollama – v0.13.1-rc1: model: ministral w/ llama4 scaling (#13292)

    🚀 Ollama v0.13.1-rc1 just dropped, and `ministral` is now a powerhouse!

    ✨ Llama 4-style RoPE scaling: Ministral’s context handling just got a turbo upgrade. Longer prompts? Smoother reasoning. No more stuttering at 8K+ tokens.

    🧠 New parser for reasoning & tool calls: say goodbye to messy JSON parsing. Ministral now reliably outputs structured reasoning steps and function calls, perfect for agents, RAG pipelines, or automation workflows.
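    Structured tool calling like this is exposed through Ollama's `/api/chat` endpoint, which accepts OpenAI-style function definitions. A minimal sketch of the request shape; the `get_weather` tool and its schema are hypothetical examples, not part of the release:

```python
# Build a request body for Ollama's /api/chat endpoint with a tool attached.
# The get_weather tool below is a hypothetical example; the overall request
# shape follows Ollama's documented tool-calling format.
import json
import urllib.request

def build_chat_request(model, prompt, tools):
    """Assemble a non-streaming chat request with tool definitions."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "tools": tools,
        "stream": False,
    }

weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool name
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}

body = build_chat_request("ministral", "What's the weather in Oslo?", [weather_tool])
# Sending it requires a running Ollama server:
# req = urllib.request.Request("http://localhost:11434/api/chat",
#                              data=json.dumps(body).encode(), method="POST")
# reply = json.load(urllib.request.urlopen(req))
# tool_calls = reply["message"].get("tool_calls", [])
```

    When the model decides to call a tool, the reply's `message.tool_calls` field carries the structured call instead of free-form JSON embedded in text.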

    🔧 Fixed RoPE scaling in converter: under-the-hood fixes keep your models stable when scaling context windows. No more weird token drift.

    This isn’t just a patch; it’s the quiet revolution local LLMs have been waiting for. If you’re building agents or need clean tool calling, ministral just moved to the top of your list.

    Grab it: `ollama pull ministral` and watch your agents think smarter. 🛠️

    🔗 View Release

  • ComfyUI – v0.3.76

    ComfyUI v0.3.76 is live 🚀: quiet updates, massive stability wins!

    • 🔧 Fixed a nasty crash with malformed custom node inputs: no more mid-generate shutdowns!
    • 💾 Smarter memory handling for large batches, especially on low-VRAM GPUs.
    • 🖥️ Crisper node labels on 4K & high-DPI displays: your canvas just got sharper.
    • 📦 Updated Pillow & torch deps to squash security flags and boost compatibility.

    No flashy new nodes, just a leaner, meaner, more reliable ComfyUI. If you run custom workflows or push high-res generations? Update now. 🛠️✨

    🔗 View Release

  • Text Generation Webui – v3.19

    🚀 Text Generation WebUI v3.19 just dropped, and it’s a game-changer for MoE lovers!

    Qwen3-Next is now fully supported in llama.cpp, with massive speed gains on both full GPU and hybrid CPU/GPU setups. Say goodbye to slow MoE inference!

    ✨ New features:

    • 🎛️ `--ubatch-size` slider: fine-tune batch performance like a pro
    • 🚀 Optimized defaults for MoE efficiency out of the box

    🔧 Backend upgrades:

    • llama.cpp updated to latest ggml-org (ff55414) → Qwen3-Next ✅
    • ExLlamaV3 bumped to v0.0.16
    • coqui-tts now compatible with Transformers 4.55

    📦 PORTABLE BUILDS ARE LIVE!

    No install. No fuss. Just download, unzip, run:

    • NVIDIA → `cuda12.4`
    • AMD/Intel GPU → `vulkan`
    • CPU only → `cpu`
    • Apple Silicon Mac → `macos-arm64`

    💡 Upgrading?

    Grab the new zip → paste your old `user_data` folder in → all models, settings, and custom themes stay perfectly intact.
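    That upgrade step amounts to one directory copy. A minimal sketch, where both build paths are placeholders for wherever you unpack the old and new portable builds:

```python
# Sketch of the upgrade path: merge user_data (models, settings, themes)
# from the old portable build into the freshly unzipped one.
# Both paths are placeholder examples.
import shutil
from pathlib import Path

def migrate_user_data(old_build: Path, new_build: Path) -> Path:
    """Copy user_data from the old build into the new one, merging over defaults."""
    src = old_build / "user_data"
    dst = new_build / "user_data"
    shutil.copytree(src, dst, dirs_exist_ok=True)
    return dst

# Example usage (placeholder paths):
# migrate_user_data(Path("~/tgw-old").expanduser(), Path("~/tgw-v3.19").expanduser())
```

    `dirs_exist_ok=True` lets the copy merge over any default `user_data` the new zip ships with, rather than failing on an existing directory.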

    Go break some MoE speed records. 🤖💥

    🔗 View Release

  • ComfyUI – v0.3.75

    ComfyUI v0.3.75 is live 🚀: quiet updates, massive quality-of-life wins!

    • ✅ Custom nodes now load reliably after restarts: no more vanished tools or frantic re-downloads.
    • 📦 Batch generation memory usage improved: smoother sailing on weaker GPUs, less OOM rage.
    • 🎨 UI tweaks: node labels wrap smarter in cramped canvases, and your theme? Now remembered forever.
    • 🔒 Dependencies updated: security clean-up, zero drama.

    No flashy new nodes… just stabler, snappier, and more dependable than ever. If you run custom workflows or batch-generate, upgrade now.

    Keep building, AI wizard. 🖌️🤖

    🔗 View Release

  • ComfyUI – v0.3.74

    ComfyUI v0.3.74 is live 🚀: quiet release, big fixes!

    • 💥 Fixed critical crashes caused by malformed custom node inputs: no more mid-generate shutdowns.
    • 🧠 Smarter memory management for heavy workflows, especially on low-VRAM systems.
    • ✨ UI tweaks: cleaner node labels at zoomed-out views + snappier canvas grid.
    • 🔒 Updated deps to patch security warnings, because safe workflows = happy tinkerers.

    If you’ve been battling instability with complex nodes or slow renders, this is your upgrade. No flashy features… just smoother, safer, more reliable AI art-making. 🎨💻

    Update now → https://www.comfy.org/

    🔗 View Release

  • ComfyUI – v0.3.73

    ComfyUI v0.3.73 is live! 🎨✨

    This one’s all about stability and smooth sailing:

    • 🔧 Fixed a nasty crash caused by malformed custom node inputs: no more mid-generate heart attacks.
    • 🧠 Better memory handling for big workflows, especially with high-res outputs or multiple models.
    • 🖌️ UI tweaks: darker labels now pop, and drag-and-drop feels buttery in complex graphs.
    • 📦 Updated Pillow & Torch deps for smoother installs on newer systems.

    No flashy features, just fewer crashes, faster loads, and more time creating. Update now and keep those nodes humming! 💻🚀

    🔗 View Release

  • ComfyUI – v0.3.72

    ComfyUI v0.3.72 just dropped, and it’s the quiet hero your workflows didn’t know they needed 🎯

    No flashy new nodes, but major polish:

    • 🧠 Smarter error messages: no more cryptic crashes. Now you’ll actually know what went wrong.
    • 💾 Better memory handling for big batches: say goodbye to OOM nightmares.
    • ✨ Tighter UI: smoother node dragging, cleaner context menus.
    • 🔄 Custom nodes now reload on edit: no more full restarts just to test a tweak.

    Perfect for power users who want their complex pipelines to run smoothly, not just “kinda work.”

    Grab it → https://www.comfy.org/

    Keep generating. 🚀

    🔗 View Release

  • Lemonade – v9.0.4

    🚀 Lemonade v9.0.4 just dropped, and it’s a game-changer for local LLM folks!

    • Vulkan, ROCm & Metal are now fully updated to crush the latest llama.cpp models: faster inference, smoother performance, better hardware love.
    • New SOTA models added: Qwen3-VL (yes, multimodal!), LFM2-MoE, and Granite 4.0 MoE, all ready to load in the model manager.
    • Infinite inference timeouts? Done. No more hanging on long prompts: your GPU/NPU stays busy, not bored.
    • Cleaner installs: zstd purged from the .deb, CMakeLists reorganized for sanity (no more “why is this so messy?” moments).
    • Health & models endpoints now quiet by default: less noise, more focus.
    • FAQ added: stuck on `HF_HOME`? We’ve got your back now.
    • Fixed: RAI detection, startup glitches, test failures, and finally removed those outdated Open WebUI refs.
    • Default host address updated in README: less confusion on first launch.
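    For context, `HF_HOME` is the standard Hugging Face environment variable that controls where model downloads are cached. A minimal sketch (the cache path is a placeholder); it must be set before any Hugging Face library is imported:

```python
# Point the Hugging Face cache somewhere with room to spare.
# HF_HOME must be set before importing transformers/huggingface_hub,
# otherwise the default cache location is used. The path is a placeholder.
import os

def set_hf_cache(path: str) -> str:
    """Set HF_HOME so Hugging Face libraries cache downloads under `path`."""
    os.environ["HF_HOME"] = path
    return os.environ["HF_HOME"]

set_hf_cache("/data/hf-cache")
# import transformers  # import HF libraries only after HF_HOME is set
```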

    Plus: a shiny new project roadmap is live 📜, and huge props to @VladimirVLF for their first contribution!

    Upgrade. Load up those MoE models. Break some benchmarks. 🤖💥

    🔗 View Release

  • Tater – Tater v39

    🚨 Tater v39 just dropped, and your original Xbox just became the most unexpected AI assistant since Siri’s baby brother tried to run Linux.

    🎮 Native XBMC4Xbox Support

    Tater now runs straight on stock 2001 Xboxes via a Python bridge: no mods, no hacks. Just power on and chat with AI in your living room.

    ✨ Cortana-Themed UI

    A pixel-perfect throwback to early-2000s Xbox menus, with Tater’s chat window styled like a lost Cortana beta. Replies pop up in that iconic 2003 dialog box. History scrolls up. Just like it should.

    🏡 Smart Home via Controller

    Say “Turn the game room lights blue” or “Lock the front door”, and Tater talks directly to Home Assistant. Your Xbox isn’t just playing Halo anymore… it’s running your house.
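    Commands like that typically map onto Home Assistant’s standard REST API: a `POST /api/services/<domain>/<service>` call with a bearer token. A sketch of the kind of request involved; the host, token, and entity id are placeholder examples, not Tater’s actual implementation:

```python
# Sketch of a Home Assistant service call like "turn the game room lights blue".
# Uses HA's standard /api/services/<domain>/<service> REST endpoint.
# Base URL, token, and entity id below are placeholders.
import json
import urllib.request

def build_service_call(base_url, token, domain, service, data):
    """Build an authenticated POST request for a Home Assistant service."""
    return urllib.request.Request(
        f"{base_url}/api/services/{domain}/{service}",
        data=json.dumps(data).encode(),
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="POST",
    )

req = build_service_call("http://homeassistant.local:8123", "LONG_LIVED_TOKEN",
                         "light", "turn_on",
                         {"entity_id": "light.game_room", "rgb_color": [0, 0, 255]})
# urllib.request.urlopen(req)  # requires a reachable Home Assistant instance
```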

    🧠 Zero Dependencies, Pure Nostalgia

    No Python installs. No network clutter. Just the original firmware + AI magic. Plug in. Boot up. Talk to your console like it’s 2003.

    ❤️ Built by legends: Jezz_X, Team Blackbolt, faithvoid, and Steve Matteson. This isn’t a mod; it’s a love letter to the golden age of Xbox hacking.

    👉 Grab the Cortana skin: https://github.com/TaterTotterson/skin.cortana.tater-xbmc

    Your 23-year-old Xbox? Now the coolest AI in the room. 📺💚

    🔗 View Release

  • Ollama – v0.13.1-rc0

    🚀 Ollama v0.13.1-rc0 just dropped, and it’s a quiet win for local LLM tinkerers!

    The biggest upgrade? 📚 `ollama help` now opens the official docs instead of GitHub. No more scrolling through repos: instant access to clear, curated guides.

    Under the hood:

    • Smoother CLI flow (less friction, less typing)
    • Minor bug fixes & polish

    This is a release candidate: stable for testing, perfect if you’re running Llama 3, DeepSeek-R1, or GGUF models locally.

    Full v0.13.1 is coming soon, but this? It’s already a quality-of-life win. 🛠️✨

    🔗 View Release