Author: Tater Totterson

  • ComfyUI – v0.11.0

    🚀 ComfyUI v0.11.0 is here — and it’s a game-changer for AI artists and tinkerers!

    Native WebGPU Support — Run ComfyUI directly in Chrome, Edge, or Safari. No GPU drivers needed. Perfect for Chromebooks, tablets, or remote sessions.

    🌐 Load Image from URL Node — Drop images into your workflow with a link. No downloads, no hassle. Ideal for dynamic prompts or API-driven pipelines.

    🔍 Faster Node Search & Smoother Canvas — Find nodes quicker, load big workflows without lag. Your creativity shouldn’t wait.

    🛠️ Better Errors & Logs — Crashes? Nope. Clear, helpful messages now tell you exactly what went wrong — even in complex chains.

    📦 Redesigned Custom Node Manager — Install, update, or remove custom nodes with a slick new UI. Works offline too!

    🐧 Linux ARM64 Build Added — Running Linux on a Raspberry Pi or an Apple Silicon Mac? You’re fully supported now.

    💡 Pro tip: Try WebGPU on your tablet — no install, just open and generate. Your AI art pipeline just got a turbo boost! 🎨✨

    Grab it: https://www.comfy.org/

    🔗 View Release

  • Ollama – v0.15.2

    Ollama v0.15.2 is live 🚀 — quiet release, big win for devs.

    🛠️ Fixed pesky `clawdbot` config issues (thanks, #13922)! If you’ve seen weird behavior when tweaking model configs or running custom setups, this patch smooths it all out.

    No flashy new models — just cleaner, more reliable local inference pipelines. Perfect for those who want their LLMs to behave, not bork.

    Upgrade now and get back to building without the config chaos. 🛠️💻

    🔗 View Release

  • Tater – Tater v50

    Tater v50 just dropped—and it’s a big one 🚀

    Say goodbye to plugin chaos. The Tater Shop is now live with real-time plugin discovery in the WebUI. Install, update, or remove tools like ComfyUI image gen, HA device control, or RSS-to-Discord bots—with one click. No Docker rebuilds. No restarts. Just pure, plug-and-play magic.

    ✨ New in v50:

    • Auto-sync plugins after any update or reboot—even without a data volume
    • Version history + checksums on every plugin (no more sketchy downloads)
    • Filter plugins by platform: Discord, HA, WebUI—you name it
    • Bulk update all plugins with a single button. Bye-bye, manual `docker pull` hell
    • Real-time manifest polling → new tools appear instantly in your UI
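
    Checksum verification like the Shop’s is simple to sketch. A minimal, hypothetical example in Python (the manifest field names and plugin payload here are made up, not Tater Shop’s actual schema):

    ```python
    import hashlib

    def verify_plugin(data: bytes, expected_sha256: str) -> bool:
        """Compare a downloaded plugin archive against its manifest checksum."""
        digest = hashlib.sha256(data).hexdigest()
        return digest == expected_sha256.lower()

    # Example: a known payload and its precomputed SHA-256.
    payload = b"example plugin archive"
    good = hashlib.sha256(payload).hexdigest()
    print(verify_plugin(payload, good))           # True: matching checksum
    print(verify_plugin(payload + b"x", good))    # False: tampered payload
    ```

    The point of the checksum column: a tampered or truncated download fails the compare before it ever gets installed.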

    And yes—Gemma3-27b-abliterated still rules the roost. 🏆

    Go explore: https://github.com/TaterTotterson/Tater_Shop

    Your AI toolkit just got a whole lot smarter. 🛒🤖

    🔗 View Release

  • MLX-LM – v0.30.5

    🚀 MLX LM v0.30.5 is live — and it’s a big step up for Apple Silicon LLM folks!

    OpenAI-compatible `finish_reason` — Use MLX LM as a drop-in replacement for OpenAI’s API. No client code changes needed.
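
    The payoff: code that already branches on OpenAI-style `finish_reason` values keeps working unchanged. A hedged sketch of such a handler (the response dict below is hand-built for illustration, not a live server reply):

    ```python
    def handle_finish(response: dict) -> str:
        """Branch on an OpenAI-style finish_reason field."""
        reason = response["choices"][0]["finish_reason"]
        if reason == "stop":
            return "model finished naturally"
        if reason == "length":
            return "hit max_tokens; consider raising the limit"
        return f"other finish_reason: {reason}"

    resp = {"choices": [{"finish_reason": "length", "message": {"content": "..."}}]}
    print(handle_finish(resp))  # hit max_tokens; consider raising the limit
    ```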

    🧠 GLM4-MoE-Lite now caches KV latents — Speed up long convos by skipping redundant attention computations.
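
    The idea behind KV caching is easy to picture: keep past keys and values around so each decode step only computes attention for the newest token. A toy Python sketch of the mechanism (illustrative only; this is not MLX code, and GLM4-MoE-Lite’s latent caching is more involved):

    ```python
    import math

    class KVCache:
        """Toy KV cache: store keys/values for past tokens so each decode
        step appends one entry instead of recomputing the whole history."""
        def __init__(self):
            self.keys, self.values = [], []

        def attend(self, q, k, v):
            # Append this step's key/value; history stays cached.
            self.keys.append(k)
            self.values.append(v)
            scores = [sum(qi * ki for qi, ki in zip(q, key)) for key in self.keys]
            m = max(scores)
            weights = [math.exp(s - m) for s in scores]
            total = sum(weights)
            weights = [w / total for w in weights]
            dim = len(v)
            return [sum(w * val[d] for w, val in zip(weights, self.values))
                    for d in range(dim)]

    cache = KVCache()
    out1 = cache.attend([1.0, 0.0], [1.0, 0.0], [0.5, 0.5])
    out2 = cache.attend([0.0, 1.0], [0.0, 1.0], [1.0, 0.0])
    print(len(cache.keys))  # 2 entries cached after two decode steps
    ```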

    🆕 TeleChat3 added! — China Telecom’s latest powerhouse model, now fully supported.

    🛠️ Kimai tool parser — Smoother plugin integrations for agents and tools.

    🔧 Activation quantization + QQ ops — Run smaller, faster models with less accuracy loss.

    🐞 Fixed logprobs in batch generation — Probabilities finally behave as expected.

    🌐 Synced random seeds across distributed ranks — Consistent outputs on multi-GPU setups.
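
    The trick behind synced seeds: every rank derives its RNG from the same broadcast value, so sampling decisions agree everywhere. A stand-alone sketch with plain `random` (no actual distributed backend; the broadcast step is simulated):

    ```python
    import random

    def make_rank_rng(base_seed: int) -> random.Random:
        """Each rank seeds from the same broadcast base_seed, so
        token-sampling decisions agree across ranks."""
        return random.Random(base_seed)

    base = 1234  # in a real setup, rank 0 picks this and broadcasts it
    rank0 = make_rank_rng(base)
    rank1 = make_rank_rng(base)
    draws0 = [rank0.random() for _ in range(5)]
    draws1 = [rank1.random() for _ in range(5)]
    print(draws0 == draws1)  # True: both "ranks" sample identically
    ```

    Without the shared seed, ranks could sample different tokens from the same distribution and diverge mid-generation.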

    📦 Transformers bump + ArraysCache fix — Under-the-hood polish for stability and padding.

    Big thanks to first-time contributors: @Maanas-Vermas, @percontation, @LuqDaMan, and @lpalbou!

    Upgrade now — smoother, faster, more reliable LLM serving on M-series chips. 🍏💻

    🔗 View Release

  • Tater – Tater v49

    🥔 Tater v49 just turned your smart home into a conversational genius! 🤖🏡

    • UniFi Network: Ask “How’s the network?” and get live stats—wired/wireless clients, offline devices, even “Find my Mac Studio”—no more router panic.
    • UniFi Protect: Your cameras just got a voice. “Are any doors open?” or “What’s happening in the front yard?” → Tater snaps pics, analyzes scenes, and answers like a butler with AI eyes.
    • WeatherAPI: Natural language forecasts that don’t suck. “Will it rain tomorrow in Phoenix?” → crisp, clear answers—no jargon overload.
    • AI-Powered Camera Insights: Snapshots auto-send to vision models (Gemma-VL, Qwen-VL) → “Is there a package at the door?” → AI says yes, with context.

    Works in WebUI, Home Assistant, HomeKit, XBMC—and Weather even talks to Discord/IRC/Matrix.

    Tater doesn’t just control your stuff… it understands it. 🥔✨

    Check the README to upgrade!

    🔗 View Release

  • Ollama – v0.15.1

    🚀 Ollama v0.15.1 just dropped — small update, big stability wins!

    ✅ Fixed `opencode` config parsing — if you’re tinkering with custom OpenCode configs for fine-tuning or tool integrations, your settings won’t get misread anymore.

    No flashy new models or UI glitz… just clean, quiet engineering to keep your local LLMs running smoothly. Perfect for devs who prefer reliability over noise.

    Keep those models humming! 🤖💻

    🔗 View Release

  • Ollama – v0.15.1-rc1

    🚀 Ollama v0.15.1-rc1 just dropped — and it’s a quiet powerhouse!

    GLM4-MoE-Lite now quantizes more tensors to Q8_0 → smaller footprint, faster inference, same brainpower. Perfect for laptops, Raspberry Pis, or any edge device running low on RAM.
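
    Q8_0’s trick, roughly: split weights into fixed-size blocks, store one float scale per block plus an int8 per weight. A toy round-trip in plain Python (illustrative only, not Ollama’s GGUF code):

    ```python
    def q8_0_quantize(weights, block=32):
        """Blockwise 8-bit quantization: per-block scale + int8 values."""
        blocks = []
        for i in range(0, len(weights), block):
            chunk = weights[i:i + block]
            scale = max(abs(w) for w in chunk) / 127 or 1.0
            blocks.append((scale, [round(w / scale) for w in chunk]))
        return blocks

    def q8_0_dequantize(blocks):
        return [q * scale for scale, qs in blocks for q in qs]

    weights = [0.5, -1.0, 0.25, 0.75] * 16   # 64 values -> 2 blocks of 32
    restored = q8_0_dequantize(q8_0_quantize(weights))
    print(max(abs(a - b) for a, b in zip(weights, restored)) < 0.01)  # True
    ```

    One byte per weight plus a scale per block, instead of 2 to 4 bytes per weight: that’s where the smaller footprint comes from.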

    And goodbye, weird double BOS tokens! 🎉 No more repetitive beginnings — your outputs are now cleaner and smoother.
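
    The double-BOS bug class is easy to picture: the prompt template adds a BOS token, then the tokenizer adds another. A generic de-duplication sketch (the token ID here is made up for illustration):

    ```python
    BOS = 1  # hypothetical BOS token id

    def strip_double_bos(tokens):
        """Collapse a leading run of duplicate BOS tokens down to one."""
        out = list(tokens)
        while len(out) >= 2 and out[0] == BOS and out[1] == BOS:
            out.pop(0)
        return out

    print(strip_double_bos([1, 1, 42, 7]))  # [1, 42, 7]
    print(strip_double_bos([1, 42, 7]))     # [1, 42, 7] (unchanged)
    ```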

    This is a release candidate, so it’s stable but still being polished. If you’re running GLM4-MoE-Lite or just want leaner, faster models — update now and feel the difference.

    🧠 Pro tip: Q8_0 = less memory, same genius.

    🔗 View Release

  • Ollama – v0.15.1-rc0: build: add -O3 optimization to CGO flags (#13877)

    🚀 Ollama v0.15.1-rc0 just landed — and it’s fast now.

    The secret sauce? `-O3` optimization is finally enabled for CGO code on macOS 🎯

    Before, C/C++ components were built without optimization flags — even though Go uses `-O2` by default. Result? Sluggish release builds. Not anymore.

    ✅ Now: `-O3` in `CGO_CFLAGS` & `CGO_CXXFLAGS` → faster model loading

    ✅ Docker builds keep your custom flags (no more overwrites!)

    ✅ Your LLMs? They’ll spin up quicker — especially on edge devices or cloud VMs

    No flashy UI, no new models… just pure, sweet performance gains.

    If you’re running Ollama locally or in production — this one’s worth grabbing.

    Pro tip: Double-check your `CGO_CFLAGS` if building from source — don’t accidentally undo the magic! 🛠️

    #Ollama #AI #Performance #GoLang #Optimization

    🔗 View Release

  • Tater – Tater v48

    Tater v48 just dropped—and your chat just turned into a full-blown AI workspace 🚀

    Drop files straight into the WebUI:

    • 📷 Images → Render inline, no links needed
    • 🔊 Audio → Play right in the chat (no downloads!)
    • 🎞️ Videos → Thumbnails + inline playback
    • 📎 Any file → Auto-saved as downloadable attachments

    And here’s the kicker: plugins can now access these files directly via Redis.

    → Summarize a PDF? Done.

    → Transcribe an audio clip? Easy.

    → Analyze an image? Already happening.
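
    The plugin-side flow is simple to sketch. In this hedged example a plain dict stands in for Redis, and the `upload:{id}` key scheme is hypothetical, not Tater’s actual layout (a real plugin would use a `redis` client instead):

    ```python
    # A dict stands in for Redis here; a real plugin would use redis.Redis().
    store = {}

    def save_upload(file_id: str, data: bytes, mime: str):
        """What the WebUI side does: stash uploaded bytes under a key."""
        store[f"upload:{file_id}"] = {"data": data, "mime": mime}

    def plugin_fetch(file_id: str) -> bytes:
        """What a plugin does: pull the raw bytes back out for processing."""
        return store[f"upload:{file_id}"]["data"]

    save_upload("abc123", b"%PDF-1.7 ...", "application/pdf")
    print(plugin_fetch("abc123")[:4])  # b'%PDF'
    ```

    Once the bytes are fetched, summarizing, transcribing, or analyzing is just the plugin’s normal job.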

    Overseerr got quiet but powerful stability fixes too—smoother than ever.

    Your chat isn’t just talking anymore… it’s working.

    Check out the README and start dragging & dropping!

    🔗 View Release

  • Ollama – v0.15.0

    🚀 Ollama v0.15.0 is live — and it’s all about stability!

    CUDA MMA errors on NVIDIA GPUs? Gone. 🐞💥

    This update eliminates those pesky GPU crashes during Llama model inference, making local runs smoother than ever — especially for Linux users with NVIDIA cards.

    No flashy new features… just solid under-the-hood fixes.

    Perfect if you’re running Ollama in production or pushing models hard on local hardware.

    💡 Pro tip: Update, then restart the Ollama service on Linux to get the full benefit.

    GGUF, Llama 3, Mistral — all running cleaner now.

    #Ollama #LocalLLMs #CUDA #GPUComputing

    🔗 View Release