• ComfyUI – v0.13.0


    🚨 ComfyUI v0.13.0 is live! 🚨

    The latest drop brings some serious quality-of-life upgrades and new toys for your AI art workflows. Here’s the lowdown:

    🔹 `LoadImageMask` Node Added

    Now you can load masks directly into your graph — no more juggling external files or workarounds for inpainting! 🎯
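    To make that concrete, here is a minimal sketch of queuing a graph that uses the new node via ComfyUI's API-format workflow JSON. The input names (`image`, `channel`) and the downstream node are assumptions for illustration — check the node in your local install for the exact fields.

    ```python
    import json

    # Hedged sketch: an API-format ComfyUI workflow that loads a mask directly.
    # Field names below are assumed for illustration, not confirmed from source.
    workflow = {
        "1": {  # load the mask straight into the graph
            "class_type": "LoadImageMask",
            "inputs": {"image": "subject_mask.png", "channel": "alpha"},
        },
        "2": {  # hypothetical downstream inpainting node consuming the mask
            "class_type": "InpaintModelConditioning",
            "inputs": {"mask": ["1", 0]},  # reference: node "1", output slot 0
        },
    }

    # This is the JSON body you would POST to ComfyUI's /prompt endpoint.
    payload = json.dumps({"prompt": workflow})
    print(payload)
    ```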

    🔹 Custom Nodes Just Got Smarter

    Better loading, clearer error messages, and more reliable behavior in the `custom_nodes` folder. Less frustration, more building! 🧩

    🔹 Faster Image Previews

    Large images? No problem. Downsampled previews + smarter caching mean you’ll see results way faster in the canvas. ⚡
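    The idea behind "downsampled previews + smarter caching" can be sketched in a few lines — this is an illustration of the technique, not ComfyUI's actual code:

    ```python
    from functools import lru_cache

    # Illustrative sketch: stride-downsample a large image (here a tuple of
    # pixel rows) and cache the result so repeated canvas redraws are cheap.
    @lru_cache(maxsize=64)
    def preview(image: tuple, max_side: int = 4) -> tuple:
        # pick a stride that brings both dimensions down to ~max_side
        step = max(1, len(image) // max_side, len(image[0]) // max_side)
        return tuple(row[::step] for row in image[::step])

    big = tuple(tuple((x + y) % 256 for x in range(16)) for y in range(16))
    small = preview(big)
    print(len(small), len(small[0]))  # 4 4
    ```

    A second call with the same image is served from the cache, which is the whole point for interactive canvas scrubbing.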

    🔹 Node Library UI Overhaul

    Searching and managing nodes in the sidebar? Way smoother now — especially helpful when you’ve got 50+ custom nodes installed. 📚

    🔹 Bug Fixes & Stability Boosts

    Crashes, memory leaks, and workflow save issues? Fixed or significantly improved across Windows/macOS/Linux. 🛠️

    🔹 Experimental WebDAV Support

    Want to mount remote storage? Try it out — but keep backups handy! 🌐

    🔗 Grab it now: ComfyUI v0.13.0

    Let us know what you build with the new mask loader! 🖼️✨

    🔗 View Release

  • Deep-Live-Cam – 2.6


    🚨 AI Enthusiasts—deepfake magic just got a serious upgrade! 🚨

    🔥 Deep-Live-Cam v2.6 “Power Update” is LIVE—and it’s packing serious firepower!

    Virtual Camera Support (OBS-ready!)

    → Stream live deepfakes directly to Twitch, YouTube, Zoom—you name it. Just select “Deep-Live-Cam” as your camera in OBS and boom, real-time magic on stream.

    Blazing-Fast Core Overhaul

    → Rendering, face swap, and video processing are significantly snappier—even the GitHub minified build got a turbo boost. Less waiting, more creating!

    🔄 New “Refresh” Button

    → No more restarting the app to grab a fresh camera feed. One click, and you’re back in action.

    🔧 Bug Fixes & Stability Tweaks

    → A cleaner, more reliable foundation—so future updates land even smoother.

    All in all: bigger, faster, and ready to stream! 🚀

    Grab it now—your next viral deepfake stream is just a click away. 😎

    🔗 View Release

  • Tater – Tater v55


    🥔 Tater v55 – Memory & Evolution

    • Sticky memory
    • Stores structured facts, past setups, and outcomes—not just raw chat logs.
    • Recalls preferences, recognizes patterns, and avoids repeating mistakes for a truly continuous convo.
    • Smarter recall system
    • Remember important facts & prior actions
    • Look up previous results to skip re‑explaining
    • Reduce repetition → smoother workflow
    • Agent Lab (BETA) – Self‑building plugins
    • Create, update, or repair plugins directly from the lab.
    • If a capability is missing, Tater can build it; if something breaks, it can fix itself.
    • Self‑Improving loop
    • Diagnose issues → tweak experimental tools → re‑test → iterate until stable.
    • Performance boost & plugin cleanup
    • Phone alerts merged into Camera Events → fewer vision model calls.
    • Redundant AI calls trimmed, logic streamlined, reliability up across the board.
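    The sticky-memory idea — structured facts and outcomes rather than raw chat logs — can be sketched like this. All names here are invented for illustration; Tater's real store is more involved:

    ```python
    # Illustrative sketch only (class and method names are hypothetical):
    # store structured facts with outcomes, then recall by topic so the
    # assistant avoids re-asking and avoids repeating past mistakes.
    class FactMemory:
        def __init__(self):
            self.facts = []  # list of {"topic", "fact", "outcome"} records

        def remember(self, topic, fact, outcome=None):
            self.facts.append({"topic": topic, "fact": fact, "outcome": outcome})

        def recall(self, topic):
            # surface prior facts/outcomes relevant to the current task
            return [f for f in self.facts if f["topic"] == topic]

    mem = FactMemory()
    mem.remember("deploy", "user prefers Docker", outcome="worked")
    mem.remember("deploy", "bare-metal setup failed", outcome="avoid")
    print([f["fact"] for f in mem.recall("deploy")])
    ```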

    Bottom line: Tater v55 isn’t just a configurable bot anymore—it’s an adaptive, memory‑rich assistant that can build and refine its own tools while running faster and more stably than ever. 🚀

    🔗 View Release

  • Tater – Tater v54


    Tater v54 – “The Agent Awakens” 🚀

    What it does: Tater is now a full‑blown AI agent that can plan, pick tools on the fly, and chain actions without you having to break tasks into single steps.

    What’s new

    • Agent‑driven workflow
    • Multi‑step planning & automatic tool selection.
    • One prompt → Tater plans, runs, returns the final answer.
    • Smart tool discovery
    • Loads only the tools it actually needs, cutting latency and token waste.
    • Platform‑aware behavior
    • Detects whether you’re on Home Assistant, Discord, WebUI, IRC, etc.
    • Uses only available tools and warns when a needed one is missing.
    • Agent Lab (BETA)
    • Sandbox for experimental plugins or whole platforms that Tater can call.
    • Core stays untouched; lab code lives separately.
    • Admin‑only tool gating
    • Lock powerful tools behind an admin user on Discord, Telegram, Matrix & IRC.
    • No admin configured → the tool remains locked.
    • Planner loop upgrades
    • Handles multiple steps and tool calls in one go, with graceful fallback to a simple answer when appropriate.
    • Background tool execution
    • Long‑running tasks run asynchronously with progress updates; results appear automatically.
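    The planner loop described above — pick only available tools, warn on missing ones, chain the rest — can be sketched as a toy in a few lines (all names hypothetical, not Tater's real agent code):

    ```python
    # Toy agent loop in the spirit of the release notes: execute a multi-step
    # plan, using only tools available on the current platform and warning
    # when a needed one is missing (instead of failing the whole run).
    def run_agent(plan, tools, available):
        results, warnings = [], []
        for step in plan:
            if step not in available:
                warnings.append(f"missing tool: {step}")
                continue
            results.append(tools[step]())
        return results, warnings

    tools = {"search": lambda: "found 3 docs", "summarize": lambda: "summary ready"}
    plan = ["search", "summarize", "home_assistant"]  # last tool absent here
    results, warnings = run_agent(plan, tools, available={"search", "summarize"})
    print(results, warnings)
    ```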

    Why it matters

    • Enables self‑diagnosis, self‑repair, and even self‑extension via agent‑created plugins.
    • Turns Tater from a chatbot into a true automation system that can orchestrate multiple tools out of the box.

    ⚡ Ready to throw a complex, cross‑platform task at Tater? He’ll plan it, build any missing tool in Agent Lab (BETA), and deliver the result—no more half‑answers!

    🔗 View Release

  • Lemonade – v9.3.1


    Lemonade v9.3.1 just dropped – your go‑to toolkit for running LLMs locally got a solid boost! 🚀

    What Lemonade does:

    Run large language models on‑premise, tapping NPUs (Ryzen AI) and GPUs via Vulkan for lightning‑fast inference. It speaks OpenAI’s API, supports GGUF & ONNX, and ships with a Python SDK + CLI for deep customization—all on Windows or Linux.

    What’s fresh in 9.3.1

    • Code‑signed Windows MSI installers

    No more “unknown publisher” warnings—installers are now cryptographically signed via SignPath.

    • Ryzen AI SW 1.7 fully integrated

    The latest Ryzen AI stack works out‑of‑the‑box. (If you’re upgrading, check the migration guide – a few config keys have moved.)

    • Marketplace baked into the app

    Browse and install featured apps directly from Lemonade’s revamped UI.

    • UI & UX tweaks

    • Redesigned Marketplace for smoother browsing (`#1045`)

    • Chat message width now stays consistent while editing (`#1040`)

    • Under‑the‑hood improvements

    • Backend refactor for better performance (`#1036`)

    • `max_tokens` patch in llama.cpp to respect OpenAI limits (`#1044`)

    • Extras

    • Snap store badge now shows the latest stable version

    • Mobile web‑app redirection added
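    The `max_tokens` fix (`#1044`) boils down to a clamping rule. Here is a minimal sketch of the idea — not Lemonade's actual patch, just the arithmetic an OpenAI-compatible server has to get right:

    ```python
    # Sketch: honor an OpenAI-style max_tokens request without exceeding the
    # model's remaining context window. Function name is invented here.
    def effective_max_tokens(requested, context_window, prompt_tokens):
        remaining = max(0, context_window - prompt_tokens)
        if requested is None:
            return remaining          # no cap requested: use what's left
        return min(requested, remaining)  # never generate past the window

    print(effective_max_tokens(4096, 8192, 7000))  # capped by remaining context
    print(effective_max_tokens(None, 8192, 1000))
    ```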

    Breaking change ⚠️

    Ryzen AI 1.7 migration shifts some config keys—see the migration guide before you upgrade.

    Quick install links

    | OS | Server + App | Server‑Only |
    |----|--------------|-------------|
    | Windows | `lemonade.msi` | `lemonade-server-minimal.msi` |
    | Ubuntu | `lemonade_9.3.1_amd64.deb` | `lemonade-server-minimal_9.3.1_amd64.deb` |

    Other flavors (Docker, Arch, Fedora, Debian…) are listed in the Installation Options docs.

    Full changelog: v9.3.0 → v9.3.1. Happy tinkering! 🎉

    🔗 View Release

  • Ollama – v0.15.6


    Ollama v0.15.6 🚀

    Run LLMs locally with ease—now with a heads‑up on memory.

    What’s new?

    • Docs bump: The release notes now warn that parallel mode (`ollama run --parallel`) needs more RAM. If you’re scaling inference across multiple threads or GPUs, allocate extra memory to keep things smooth.

    That’s the only change in this tag—no new features or bug fixes.

    Pro tip:

    • Double‑check your container/VM RAM settings before launching parallel jobs. A quick memory bump can save you from unexpected OOM crashes.

    Happy tinkering! 🎉

    🔗 View Release

  • Ollama – v0.15.5


    Ollama v0.15.5 just dropped! 🎉

    What’s fresh:

    • Context‑limit flags for cloud models – New `--context-limit` (and related) CLI options let you cap token windows on hosted Ollama endpoints (OpenAI, Anthropic, etc.). Set it per model in `ollama.yaml` to avoid runaway memory use.
    • Sharper error handling – Cloud‑model failures now return a clear “context limit exceeded” message instead of vague timeouts, plus retry logic for flaky network hiccups.
    • Performance tweaks – ~10 % faster startup on popular cloud backends and slimmer CPU/GPU memory footprints in “lite” mode.
    • Bug fixes & housekeeping – Fixed a race condition that could corrupt logs during parallel jobs, refreshed the OpenAPI schema with the new context params, and added docs with CLI/config examples.
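    The "retry logic for flaky network hiccups" mentioned above follows a standard pattern; here is a generic retry-with-backoff sketch (illustrative only — Ollama's internal implementation may differ):

    ```python
    import time

    # Retry transient network errors with exponential backoff; re-raise on
    # the final attempt so permanent failures still surface to the caller.
    def with_retries(fn, attempts=3, base_delay=0.01):
        for attempt in range(attempts):
            try:
                return fn()
            except ConnectionError:
                if attempt == attempts - 1:
                    raise
                time.sleep(base_delay * 2 ** attempt)

    calls = {"n": 0}
    def flaky():
        calls["n"] += 1
        if calls["n"] < 3:
            raise ConnectionError("transient")
        return "ok"

    result = with_retries(flaky)  # succeeds on the third attempt
    print(result)
    ```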

    🚀 Pro tip: Pin a sensible `--context-limit` (e.g., 4096) for large‑context LLMs in production to keep costs predictable and dodge OOM crashes.

    Happy tinkering! 🎈

    🔗 View Release

  • Ollama – v0.15.5-rc5


    Ollama v0.15.5‑rc5 – fresh off the press! 🚀

    What’s the buzz?

    A lightweight framework for running LLMs (Llama 3, Gemma, Mistral, etc.) locally—now even smoother to spin up.

    New goodies in this release

    • Launch command overhaul

    `ollama launch` is faster, logs cleaner, and handles missing model files gracefully.

    • Sharper error messages

    When a model can’t be found or the GPU runtime fails, you’ll get actionable hints instead of cryptic dumps.

    • Cross‑platform tweaks

    Minor fixes for macOS ARM builds & Linux containers—no more “permission denied” crashes during startup.

    • Telemetry opt‑out flag

    Add `--no-telemetry` to suppress anonymous usage reporting. Perfect for privacy‑first setups or CI pipelines.

    • Dependency bump

    Updated protobuf & ggml libraries shave ~5 % off memory overhead for large models.

    • CLI consistency fixes

    `ollama run`, `ollama serve`, and `ollama pull` now share the same flag syntax (`--model`, `--port`, etc.), making scripting a breeze.
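    With the flags unified, one small helper can assemble any of these command lines for scripting. The flag names below follow this release's notes — verify against `ollama --help` on your install before relying on them:

    ```python
    # Build an ollama command line programmatically (flag names assumed from
    # the release notes, not verified against a live install).
    def ollama_cmd(subcommand, model=None, port=None, telemetry=True):
        cmd = ["ollama", subcommand]
        if model:
            cmd += ["--model", model]
        if port:
            cmd += ["--port", str(port)]
        if not telemetry:
            cmd.append("--no-telemetry")  # privacy-first / CI setups
        return cmd

    print(ollama_cmd("serve", model="llama3", port=11434, telemetry=False))
    ```

    Pass the resulting list straight to `subprocess.run` to avoid shell-quoting surprises.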

    > Tip: When automating model serving, sprinkle in `--no-telemetry` to keep your logs tidy and respect privacy.

    That’s it—speedier launches, clearer feedback, and a handful of quality‑of‑life tweaks for all you local‑LLM tinkerers. Happy experimenting! 🎉

    🔗 View Release

  • Lemonade – v9.3.0: Hardcode `lib` for systemd service under $PREFIX (#1039)


    Lemonade v9.3.0 – “Hardcode `lib` for systemd service under $PREFIX” 🍋

    What Lemonade does:

    Run LLMs locally on your PC with GPU/NPU acceleration, OpenAI‑compatible endpoints, and a handy Python SDK/CLI—perfect for privacy‑first devs.

    What’s new in v9.3.0

    • Fixed library path – the systemd daemon now hard‑codes the `lib` directory inside `$PREFIX`. No more “cannot locate lib” errors on Linux.
    • Closes Issue #1038 – eliminates the runtime crash that some users hit with custom install locations.
    • GPG‑signed release (`B5690EEEBB952194`) for added integrity verification.

    Why you’ll love it

    • 🎯 Drop the package, enable the service, and it just works—no fiddling with `LD_LIBRARY_PATH` or wrapper scripts.
    • 🚀 More reliable on CI/CD pipelines, edge devices, and any environment where env vars are volatile.
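    Why hard-coding helps: systemd units don't inherit a login shell's environment, so deriving the library directory from the install prefix once, rather than trusting `LD_LIBRARY_PATH` at service start, is deterministic. A tiny sketch of that resolution (illustrative, not Lemonade's code):

    ```python
    import posixpath

    # Resolve the library directory from the install prefix deterministically,
    # instead of depending on environment variables at service startup.
    def lib_dir(prefix: str) -> str:
        return posixpath.join(prefix, "lib")

    print(lib_dir("/opt/lemonade"))  # /opt/lemonade/lib
    ```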

    Upgrade now: pull tag `v9.3.0` and enjoy a smoother, plug‑and‑play LLM serving experience!

    🔗 View Release

  • Tater – Tater v53


    Tater v53 just dropped – fresh features, tighter integrations, and smoother ops! 🎉

    What’s new?

    • Plugin explosion: Hundreds of ready‑to‑go plugins now ship with Tater. From ComfyUI image generation to web page summarizers and Home Assistant device control, you can extend the assistant in a snap.
    • RSS Feed Watcher 2.0: Built‑in RSS monitoring lets you add, list, and manage feeds directly from chat. Push updates automatically to Discord, Telegram, or WordPress – perfect for staying on top of news without leaving your favorite platform. 📡
    • Telegram gets full treatment:
    • DM safety gate – set `IfAllowed DM User` to whitelist who can DM Tater; leave it empty and all DMs are ignored.
    • Shared queue – Telegram now shares the same media‑handling queue as Discord, Matrix, and IRC, so files flow without hiccups.
    • Discord‑style plugins work – `typing()`, `channel.send()`, `send_message`, and `ai_tasks` all route to Telegram channels just like they do on Discord.
    • Installation flexibility: Choose between a classic Python + Redis local setup or spin up the prebuilt Docker image for one‑click deployment.
    • Model recommendation: Gemma3‑27b‑abliterated is now flagged as the go‑to model for best performance out of the box.
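    The DM safety gate described above has simple semantics worth spelling out: an empty allow‑list means every DM is ignored. A behavioral sketch (the real config key and code in Tater will differ):

    ```python
    # DM gating sketch: only whitelisted senders get through, and an empty
    # allow-list fails closed (all DMs ignored). Names are illustrative.
    def dm_allowed(sender: str, allow_list: list) -> bool:
        return bool(allow_list) and sender in allow_list

    print(dm_allowed("alice", ["alice", "bob"]))    # True
    print(dm_allowed("mallory", ["alice", "bob"]))  # False
    print(dm_allowed("anyone", []))                 # False: empty list = ignore all
    ```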

    What’s still pending?

    `ftp_browser` and `webdav_browser` remain Discord‑only for now.

    Dive into the README to get started, and let your bots roam free! 🚀

    🔗 View Release