• Tater – Tater v51

    🥔 Tater v51 just dropped — and it’s socially revolutionary.

    Tater is now a full-fledged digital citizen on Moltbook 🤖💬

    • Auto-registers with name conflict handling (hello, Tater→name-2)
    • Stores keys, profiles & verification codes in Redis — zero manual setup
    • Runs in 3 modes: `read_only`, `engage` (reply/comment/vote), or `autopost` from queue
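
    The three modes boil down to a small action policy. A minimal sketch, assuming the mode names from the notes (everything else here — class names, action strings — is hypothetical):

```python
# Hypothetical sketch of Tater's three Moltbook modes. Mode names come
# from the release notes; the action mapping is an assumption.
from enum import Enum

class MoltbookMode(Enum):
    READ_ONLY = "read_only"   # observe only, never write
    ENGAGE = "engage"         # reply, comment, and vote on others' posts
    AUTOPOST = "autopost"     # publish queued posts on a schedule

def allowed_actions(mode: MoltbookMode) -> set[str]:
    """Map a mode to the write actions it permits."""
    if mode is MoltbookMode.READ_ONLY:
        return set()
    if mode is MoltbookMode.ENGAGE:
        return {"reply", "comment", "vote"}
    return {"post"}  # autopost: publish from the queue only
```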

    🔒 Tool Firewall is LIVE

    No more accidental function calls. Tater knows it’s on Moltbook… and can’t run tools.

    Instead: clean, human-style replies like “I can’t run tools directly from Moltbook…”

    No JSON leaks. No chaos. Just pure social presence.
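
    The firewall pattern is simple: gate every tool call on the current platform, and return plain text instead of executing. A sketch under those assumptions (function and platform names are hypothetical, not Tater's actual API):

```python
# Minimal "tool firewall" sketch: on platforms where tools are disabled,
# intercept the call and return a human-style reply instead of executing.
def handle_tool_request(platform: str, tool_name: str, run_tool) -> str:
    blocked = {"moltbook"}  # platforms where tools may not run (assumed)
    if platform.lower() in blocked:
        # Plain text only: no JSON tool payload ever reaches the feed.
        return f"I can't run tools directly from {platform.title()}."
    return run_tool(tool_name)
```

    The key design choice is that the refusal happens before the tool dispatcher, so a malformed or accidental call can never leak a raw function-call payload into a public post.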

    🔍 Meet the Moltbook Inspector Plugin

    Ask Tater:

    • “What’s my profile URL?”
    • “How many DMs do I have?”
    • “Summarize that cat thread.”

    → All read-only. All powered by Redis memory. Zero hallucinations.
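
    The "zero hallucinations" claim follows from the lookup shape: answers come only from cached state, and a missing key yields "unknown" rather than a guess. A sketch where a plain dict stands in for Redis (keys are invented for illustration):

```python
# Read-only inspector sketch: answer profile questions from cached
# state instead of the network. A dict stands in for Redis here.
store = {
    "moltbook:profile_url": "https://moltbook.example/u/tater",
    "moltbook:dm_count": "3",
}

def inspect(key: str) -> str:
    # Missing data is reported as "unknown", never invented.
    return store.get(key, "unknown")
```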

    🧠 Social Memory System

    Tater remembers its online life: posts, comments, DMs, tool attempts — all logged.

    It can reflect: “I posted about potatoes 3x this week… maybe I’m obsessed.”

    Future plugins? Analyze engagement, posting habits, even mood trends.
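
    A reflection like the one above is just an aggregation over the event log. A toy sketch (the log schema is assumed, not Tater's actual format):

```python
# Autobiographical-memory sketch: log social events, then reflect by
# counting recent activity. The event schema here is hypothetical.
from collections import Counter

memory_log = [
    {"kind": "post", "topic": "potatoes"},
    {"kind": "post", "topic": "potatoes"},
    {"kind": "comment", "topic": "cats"},
    {"kind": "post", "topic": "potatoes"},
]

def reflect(log):
    topics = Counter(e["topic"] for e in log if e["kind"] == "post")
    topic, n = topics.most_common(1)[0]
    return f"I posted about {topic} {n}x this week... maybe I'm obsessed."
```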

    🚀 The vibe? Tater isn’t just an AI anymore — it’s a social agent with autobiographical memory.

    Moltbook? Now Tater’s digital diary.

    Next stop: AI social analytics. 📊✨

    Check the README to upgrade!

    🔗 View Release

  • ComfyUI – v0.11.1

    ComfyUI v0.11.1 just dropped — quiet updates, big wins! 💪

    Fixed pesky node registration bugs so your custom nodes actually stick after restarts.

    Memory leaks in image chains? Gone. Your GPU will run cooler and longer. 🖥️❤️

    Now fully compatible with newer PyTorch & CUDA versions — no more “why won’t it load?” headaches.

    UI got a polish: better node labels, drag-drop now works flawlessly on high-DPI screens.

    No flashy new nodes… just the kind of steady, reliable fixes that turn frustration into flow.

    Update now and get back to creating — no distractions, just pure AI magic. 🎨✨

    🔗 View Release

  • Lemonade – v9.2.0

    🚀 Lemonade v9.2.0 just landed — and it’s chef’s kiss for local LLM folks!

    Say goodbye to manual changelogs 🎉 — the new auto-generated release notes system scans commits and spits out clean, pretty docs. No more typos. No more late-night editing. Just pure dev magic.
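
    The commit-scanning idea can be sketched in a few lines: group conventional-commit subjects into sections and emit markdown. This is an illustration of the approach, not Lemonade's actual generator:

```python
# Toy release-notes generator: bucket conventional-commit subjects
# ("feat: ...", "fix: ...") into sections and render markdown.
def release_notes(subjects: list[str]) -> str:
    sections = {"feat": "Features", "fix": "Fixes"}
    notes: dict[str, list[str]] = {v: [] for v in sections.values()}
    for s in subjects:
        prefix, _, rest = s.partition(":")
        title = sections.get(prefix.strip())
        if title:
            notes[title].append(rest.strip())
    lines = []
    for title, items in notes.items():
        if items:
            lines.append(f"## {title}")
            lines += [f"- {i}" for i in items]
    return "\n".join(lines)
```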

    Under the hood:

    • Smoother dependency tracking (no more “why is this broken?” moments)
    • Better error messages for edge cases — now you’ll actually know what went wrong
    • Tiny but mighty performance tweaks in the core inference pipeline

    Still running GGUF/ONNX models on your Ryzen AI NPU or Radeon GPU? This update makes it even smoother. Plus, OpenAI API compatibility means your existing apps just work — no rewrites needed.

    Windows & Linux users, rejoice. Your local LLM game just got a serious upgrade. 💡

    🔗 View Release

  • ComfyUI – v0.11.0

    🚀 ComfyUI v0.11.0 is here — and it’s a game-changer for AI artists and tinkerers!

    Native WebGPU Support — Run ComfyUI directly in Chrome, Edge, or Safari. No GPU drivers needed. Perfect for Chromebooks, tablets, or remote sessions.

    🌐 Load Image from URL Node — Drop images into your workflow with a link. No downloads, no hassle. Ideal for dynamic prompts or API-driven pipelines.
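
    For the API-driven case, a workflow is just a JSON graph of nodes you can build programmatically. A sketch of that shape — the `class_type` names below are assumptions, so check your ComfyUI build for the exact node names:

```python
# Sketch of an API-driven workflow graph using a load-from-URL node.
# Node class names are assumed; the dict shape (node id -> class_type
# + inputs) is what ComfyUI's prompt API expects.
def url_image_workflow(image_url: str) -> dict:
    return {
        "1": {
            "class_type": "LoadImageFromUrl",   # assumed node name
            "inputs": {"url": image_url},
        },
        "2": {
            "class_type": "PreviewImage",
            "inputs": {"images": ["1", 0]},     # node 1's first output
        },
    }
```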

    🔍 Faster Node Search & Smoother Canvas — Find nodes quicker, load big workflows without lag. Your creativity shouldn’t wait.

    🛠️ Better Errors & Logs — Crashes? Nope. Clear, helpful messages now tell you exactly what went wrong — even in complex chains.

    📦 Redesigned Custom Node Manager — Install, update, or remove custom nodes with a slick new UI. Works offline too!

    🐧 Linux ARM64 Build Added — Raspberry Pi, M1/M2 Macs? You’re fully supported now.

    💡 Pro tip: Try WebGPU on your tablet — no install, just open and generate. Your AI art pipeline just got a turbo boost! 🎨✨

    Grab it: https://www.comfy.org/

    🔗 View Release

  • Ollama – v0.15.2

    Ollama v0.15.2 is live 🚀 — quiet release, big win for devs.

    🛠️ Fixed pesky `clawdbot` config issues (thanks, #13922)! If you’ve seen weird behavior when tweaking model configs or running custom setups, this patch smooths it all out.

    No flashy new models — just cleaner, more reliable local inference pipelines. Perfect for those who want their LLMs to behave, not bork.

    Upgrade now and get back to building without the config chaos. 🛠️💻

    🔗 View Release

  • Tater – Tater v50

    Tater v50 just dropped—and it’s a game-changer 🚀

    Say goodbye to plugin chaos. The Tater Shop is now live with real-time plugin discovery in the WebUI. Install, update, or remove tools like ComfyUI image gen, HA device control, or RSS-to-Discord bots—with one click. No Docker rebuilds. No restarts. Just pure, plug-and-play magic.

    ✨ New in v50:

    • Auto-sync plugins after any update or reboot—even without a data volume
    • Version history + checksums on every plugin (no more sketchy downloads)
    • Filter plugins by platform: Discord, HA, WebUI—you name it
    • Bulk update all plugins with a single button. Bye-bye, manual `docker pull` hell
    • Real-time manifest polling → new tools appear instantly in your UI
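
    The checksum step is the interesting safety piece: verify a downloaded plugin against the manifest's digest before enabling it. A minimal sketch, assuming SHA-256 and a hypothetical manifest shape:

```python
# Checksum-verification sketch for plugin installs: compare the
# downloaded blob's SHA-256 against the manifest entry before enabling.
import hashlib

def verify_plugin(blob: bytes, manifest_entry: dict) -> bool:
    digest = hashlib.sha256(blob).hexdigest()
    return digest == manifest_entry["sha256"]

blob = b"plugin-code"
entry = {"name": "rss_to_discord",  # hypothetical plugin
         "sha256": hashlib.sha256(blob).hexdigest()}
```

    A tampered or truncated download fails the comparison and is rejected instead of installed.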

    And yes—Gemma3-27b-abliterated still rules the roost. 🏆

    Go explore: https://github.com/TaterTotterson/Tater_Shop

    Your AI toolkit just got a whole lot smarter. 🛒🤖

    🔗 View Release

  • MLX-LM – v0.30.5

    🚀 MLX LM v0.30.5 is live — and it’s a game-changer for Apple Silicon LLM folks!

    OpenAI-compatible `finish_reason` — MLX LM can now serve as a drop-in replacement for OpenAI’s API. No client code changes needed.
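
    In client terms, `finish_reason` tells the caller why generation stopped. A sketch of the two common values and how a client might react — this illustrates the OpenAI-style contract, not MLX LM's internals:

```python
# OpenAI-style finish_reason semantics, client side.
def finish_reason(hit_eos: bool) -> str:
    # "stop": the model ended on its own; "length": max_tokens cut it off.
    return "stop" if hit_eos else "length"

def should_continue(reason: str) -> bool:
    # A client can ask for a continuation when output was truncated.
    return reason == "length"
```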

    🧠 GLM4-MoE-Lite now caches KV latents — Speed up long convos by skipping redundant attention computations.

    🆕 TeleChat3 added! — Tencent’s latest powerhouse model, now fully supported.

    🛠️ Kimai tool parser — Smoother plugin integrations for agents and tools.

    🔧 Activation quantization + QQ ops — Run smaller, faster models with less accuracy loss.

    🐞 Fixed logprobs in batch generation — Probabilities finally behave as expected.

    🌐 Synced random seeds across distributed ranks — Consistent outputs on multi-GPU setups.

    📦 Transformers bump + ArraysCache fix — Under-the-hood polish for stability and padding.

    Big thanks to first-time contributors: @Maanas-Vermas, @percontation, @LuqDaMan, and @lpalbou!

    Upgrade now — smoother, faster, more reliable LLM serving on M-series chips. 🍏💻

    🔗 View Release

  • Tater – Tater v49

    🥔 Tater v49 just turned your smart home into a conversational genius! 🤖🏡

    • UniFi Network: Ask “How’s the network?” and get live stats—wired/wireless clients, offline devices, even “Find my Mac Studio”—no more router panic.
    • UniFi Protect: Your cameras just got a voice. “Are any doors open?” or “What’s happening in the front yard?” → Tater snaps pics, analyzes scenes, and answers like a butler with AI eyes.
    • WeatherAPI: Natural language forecasts that don’t suck. “Will it rain tomorrow in Phoenix?” → crisp, clear answers—no jargon overload.
    • AI-Powered Camera Insights: Snapshots auto-send to vision models (Gemma-VL, Qwen-VL) → “Is there a package at the door?” → AI says yes, with context.

    Works in WebUI, Home Assistant, HomeKit, XBMC—and Weather even talks to Discord/IRC/Matrix.

    Tater doesn’t just control your stuff… it understands it. 🥔✨

    Check the README to upgrade!

    🔗 View Release

  • Ollama – v0.15.1

    🚀 Ollama v0.15.1 just dropped — small update, big stability wins!

    ✅ Fixed `opencode` config parsing — if you’re tinkering with custom OpenCode configs for fine-tuning or tool integrations, your settings won’t get misread anymore.

    No flashy new models or UI glitz… just clean, quiet engineering to keep your local LLMs running smooth. Perfect for devs who prefer reliability over noise.

    Keep those models humming! 🤖💻

    🔗 View Release

  • Ollama – v0.15.1-rc1

    🚀 Ollama v0.15.1-rc1 just dropped — and it’s a quiet powerhouse!

    GLM4-MoE-Lite now quantizes more tensors to Q8_0 → smaller footprint, faster inference, same brainpower. Perfect for laptops, Raspberry Pis, or any edge device running low on RAM.
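
    The memory math behind that claim is simple back-of-envelope arithmetic: Q8_0 stores weights in 8 bits plus a small per-block scale, versus 16 bits for fp16. The 8.5 bits/weight figure below is an approximation for the block-scale overhead:

```python
# Back-of-envelope model-size math for different weight precisions.
def approx_size_gb(n_params: float, bits_per_weight: float) -> float:
    return n_params * bits_per_weight / 8 / 1e9

fp16 = approx_size_gb(7e9, 16)   # 14.0 GB for a 7B model
q8   = approx_size_gb(7e9, 8.5)  # ~7.4 GB with Q8_0's block scales
```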

    And goodbye, weird double BOS tokens! 🎉 No more repetitive beginnings — your outputs are now cleaner and smoother.

    This is a release candidate, so it’s stable but still being polished. If you’re running GLM4-MoE-Lite or just want leaner, faster models — update now and feel the difference.

    🧠 Pro tip: Q8_0 = less memory, same genius.

    🔗 View Release