Category: AI

AI Releases

  • Ollama – v0.16.0

    🚨 Ollama v0.16.0 is live! 🚨

    The latest drop from the Ollama crew just landed — and while the release notes are light on flashy new features, this one’s a quiet but meaningful polish pass. Here’s the lowdown:

    🔹 API Docs Fixed!

    The OpenAPI schema for `/api/ps` (list running processes) and `/api/tags` (list local models) has been corrected — meaning better Swagger compatibility, smoother SDK generation, and fewer headaches for integrators. 🛠️
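
    For integrators, the corrected schema means the JSON shape below is what generated clients will expect. A minimal sketch (stdlib only; the sample payload is abridged and hypothetical) for reading model names out of an `/api/tags` or `/api/ps` response:

```python
import json
from urllib.request import urlopen

OLLAMA = "http://localhost:11434"  # default Ollama port

def model_names(payload: dict) -> list:
    """Extract model names from an /api/tags or /api/ps response body."""
    return [m["name"] for m in payload.get("models", [])]

def list_local_models() -> list:
    """GET /api/tags and return locally available model names."""
    with urlopen(OLLAMA + "/api/tags") as resp:
        return model_names(json.load(resp))

# Abridged, hypothetical sample of the documented response shape:
sample = {"models": [{"name": "llama3:latest"}, {"name": "phi4:latest"}]}
print(model_names(sample))  # ['llama3:latest', 'phi4:latest']
```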

    🔹 Stability & Under-the-Hood Tweaks

    Expect refined model loading, improved streaming behavior, and likely minor bug fixes — especially around context handling and memory usage. No breaking changes, just smoother sailing.

    🔹 Still GGUF-Friendly

    All your favorite quantized models (Llama 3, DeepSeek-R1, Phi-4, etc.) keep rolling — no format changes here.

    💡 Pro Tip: If you’re building tools or dashboards against Ollama’s REST API, this update makes your life easier. Run `docker pull ollama/ollama:latest` or grab the latest binary from GitHub.

    👉 Full details (when they land): v0.16.0 Release

    Happy local LLM-ing! 🤖✨

    🔗 View Release

  • Ollama – v0.16.0-rc2

    🚨 Ollama v0.16.0-rc2 is out! 🚨

    This release candidate is a light but tidy patch focused on API docs & stability—perfect for keeping your integrations humming. Here’s the lowdown:

    🔹 Fixed OpenAPI schema for two key endpoints:

    • `/api/ps` — now correctly documents the list running processes response
    • `/api/tags` — updated to reflect accurate model tag listing behavior

    ✅ Why it matters: If you’re using SDKs, auto-generated clients, or UI tools that rely on the OpenAPI spec (like Swagger), this ensures they’ll work exactly as expected.

    📦 Binaries for macOS, Linux & Windows are already up (2 assets).

    📅 Released by `sam18` on Feb 12 @ 01:37 UTC

    🔗 Commit: `f8dc7c9`

    No flashy new models or breaking changes—just solid polish for the upcoming v0.16.0! 🛠️

    🔗 View Release

  • Ollama – v0.16.0-rc1

    🚀 Ollama v0.16.0-rc1 is here!

    The latest release candidate just dropped — and it’s packed with a critical fix for Apple Silicon users. Here’s what’s new:

    🔹 Bug Fix: Non-MLX model loading restored

    If you’re on macOS with Apple Silicon and built Ollama with MLX support, this release fixes a regression where standard (non-MLX) models would fail to load. 🛠️

    → Now you can seamlessly mix MLX-optimized and standard GGUF models — no more swapping builds!

    💡 Why it matters:

    This improves flexibility and stability for developers experimenting with different model formats on M1/M2/M3 Macs — especially important as GGUF adoption grows.

    📌 Note: This is a release candidate (v0.16.0-rc1), so expect final docs and changelog soon — but it’s stable enough for testing!

    👉 Grab it on GitHub: github.com/ollama/ollama/releases

    Let us know how it runs! 🧪💻

    🔗 View Release

  • Ollama – v0.16.0-rc0

    🚨 Ollama v0.16.0-rc0 is out — and it’s packing a sneaky but super important fix! 🚨

    🔥 What’s new?

    ✅ Bug fix for mixed-model loading: If you’ve ever built Ollama with Apple’s MLX (Apple’s machine-learning framework for Apple Silicon) support, you might’ve hit a wall trying to load non-MLX models (like CUDA or CPU-only ones). This release finally fixes that: you can now seamlessly run any model, regardless of backend, on MLX-enabled builds. 🧩💻

    🔍 Why it matters:

    • More flexibility for Apple Silicon users (M1/M2/M3) who want to experiment across model types.
    • Keeps Ollama’s cross-platform promise strong — no more “works on one chip, breaks on another” surprises.

    📦 Still missing?

    Full release notes aren’t live yet, but head over to the official RC:

    👉 v0.16.0-rc0 Release

    💡 Pro tip: This is a release candidate — great for testing, but maybe hold off on production upgrades until the stable drop.

    Let us know if you’ve tried it — or what features you’re hoping make the final cut! 🧠✨

    🔗 View Release

  • ComfyUI – v0.13.0

    🚨 ComfyUI v0.13.0 is live! 🚨

    The latest drop brings some serious quality-of-life upgrades and new toys for your AI art workflows. Here’s the lowdown:

    🔹 `LoadImageMask` Node Added

    Now you can load masks directly into your graph — no more juggling external files or workarounds for inpainting! 🎯

    🔹 Custom Nodes Just Got Smarter

    Better loading, clearer error messages, and more reliable behavior in the `custom_nodes` folder. Less frustration, more building! 🧩

    🔹 Faster Image Previews

    Large images? No problem. Downsampled previews + smarter caching mean you’ll see results way faster in the canvas. ⚡
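
    The idea behind downsampled previews can be sketched in a few lines. This is not ComfyUI’s actual code, just a nearest-neighbour illustration on a toy pixel grid:

```python
def downsample(pixels, factor):
    """Nearest-neighbour preview: keep every `factor`-th row and column."""
    return [row[::factor] for row in pixels[::factor]]

# Toy 4x4 "image" of pixel values 0..15
img = [[r * 4 + c for c in range(4)] for r in range(4)]
preview = downsample(img, 2)
print(preview)  # [[0, 2], [8, 10]]
```

    A real implementation would also cache the preview keyed by image and zoom level, so the full-resolution data is only touched once.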

    🔹 Node Library UI Overhaul

    Searching and managing nodes in the sidebar? Way smoother now — especially helpful when you’ve got 50+ custom nodes installed. 📚

    🔹 Bug Fixes & Stability Boosts

    Crashes, memory leaks, and workflow save issues? Fixed or significantly improved across Windows/macOS/Linux. 🛠️

    🔹 Experimental WebDAV Support

    Want to mount remote storage? Try it out — but keep backups handy! 🌐

    🔗 Grab it now: ComfyUI v0.13.0

    Let us know what you build with the new mask loader! 🖼️✨

    🔗 View Release

  • Deep-Live-Cam – 2.6

    🚨 AI Enthusiasts—deepfake magic just got a serious upgrade! 🚨

    🔥 Deep-Live-Cam v2.6 “Power Update” is LIVE—and it’s packing serious firepower!

    ✅ Virtual Camera Support (OBS-ready!)

    → Stream live deepfakes directly to Twitch, YouTube, Zoom—you name it. Just select “Deep-Live-Cam” as your camera in OBS and boom, real-time magic on stream.

    ⚡ Blazing-Fast Core Overhaul

    → Rendering, face swap, and video processing are significantly snappier—even the GitHub minified build got a turbo boost. Less waiting, more creating!

    🔄 New “Refresh” Button

    → No more restarting the app to grab a fresh camera feed. One click, and you’re back in action.

    🔧 Bug Fixes & Stability Tweaks

    → A cleaner, more reliable foundation—so future updates land even smoother.

    All in all: bigger, faster, and ready to stream! 🚀

    Grab it now—your next viral deepfake stream is just a click away. 😎

    🔗 View Release

  • Tater – Tater v55

    🥔 Tater v55 – Memory & Evolution

    • Sticky memory
      • Stores structured facts, past setups, and outcomes—not just raw chat logs.
      • Recalls preferences, recognizes patterns, and avoids repeating mistakes for a truly continuous convo.
    • Smarter recall system
      • Remember important facts & prior actions
      • Look up previous results to skip re‑explaining
      • Reduce repetition → smoother workflow
    • Agent Lab (BETA) – Self‑building plugins
      • Create, update, or repair plugins directly from the lab.
      • If a capability is missing, Tater can build it; if something breaks, it can fix itself.
    • Self‑Improving loop
      • Diagnose issues → tweak experimental tools → re‑test → iterate until stable.
    • Performance boost & plugin cleanup
      • Phone alerts merged into Camera Events → fewer vision model calls.
      • Redundant AI calls trimmed, logic streamlined, reliability up across the board.
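
    A structured fact store of this kind might look like the hypothetical sketch below (all names are invented; Tater’s actual storage layer isn’t described in these notes):

```python
class FactMemory:
    """Minimal structured fact store: facts keyed by topic, latest wins."""

    def __init__(self):
        self._facts = {}  # topic -> list of facts, oldest first

    def remember(self, topic, fact):
        self._facts.setdefault(topic, []).append(fact)

    def recall(self, topic):
        """Return the most recent fact for a topic, or None if unknown."""
        facts = self._facts.get(topic)
        return facts[-1] if facts else None

mem = FactMemory()
mem.remember("user.editor", "vim")
mem.remember("user.editor", "helix")  # preference updated later
print(mem.recall("user.editor"))      # helix
```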

    Bottom line: Tater v55 isn’t just a configurable bot anymore—it’s an adaptive, memory‑rich assistant that can build and refine its own tools while running faster and more stable than ever. 🚀

    🔗 View Release

  • Tater – Tater v54

    Tater v54 – “The Agent Awakens” 🚀

    What it does: Tater is now a full‑blown AI agent that can plan, pick tools on the fly, and chain actions without you having to break tasks into single steps.

    What’s new

    • Agent‑driven workflow
      • Multi‑step planning & automatic tool selection.
      • One prompt → Tater plans, runs, returns the final answer.
    • Smart tool discovery
      • Loads only the tools it actually needs, cutting latency and token waste.
    • Platform‑aware behavior
      • Detects whether you’re on Home Assistant, Discord, WebUI, IRC, etc.
      • Uses only available tools and warns when a needed one is missing.
    • Agent Lab (BETA)
      • Sandbox for experimental plugins or whole platforms that Tater can call.
      • Core stays untouched; lab code lives separately.
    • Admin‑only tool gating
      • Lock powerful tools behind an admin user on Discord, Telegram, Matrix & IRC.
      • No admin configured → the tool remains locked.
    • Planner loop upgrades
      • Handles multiple steps and tool calls in one go, with graceful fallback to a simple answer when appropriate.
    • Background tool execution
      • Long‑running tasks run asynchronously with progress updates; results appear automatically.
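
    The plan-then-execute loop described above can be sketched as follows (the tool registry and tool names are invented for illustration, with the same graceful behavior when a tool is missing):

```python
# Hypothetical tool registry: name -> callable
TOOLS = {
    "search": lambda q: f"results for {q!r}",
    "summarize": lambda text: text[:20] + "...",
}

def run_plan(steps):
    """Execute a list of (tool_name, arg) steps, feeding each result forward.

    If arg is None, the previous step's result is passed instead, so tools
    can be chained; a missing tool short-circuits with a warning string.
    """
    result = None
    for name, arg in steps:
        tool = TOOLS.get(name)
        if tool is None:
            return f"missing tool: {name}"
        result = tool(arg if arg is not None else result)
    return result

plan = [("search", "local LLMs"), ("summarize", None)]
print(run_plan(plan))
```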

    Why it matters

    • Enables self‑diagnosis, self‑repair, and even self‑extension via agent‑created plugins.
    • Turns Tater from a chatbot into a true automation system that can orchestrate multiple tools out of the box.

    ⚡ Ready to throw a complex, cross‑platform task at Tater? He’ll plan it, build any missing tool in Agent Lab (BETA), and deliver the result—no more half‑answers!

    🔗 View Release

  • Lemonade – v9.3.1

    Lemonade v9.3.1 just dropped – your go‑to toolkit for running LLMs locally got a solid boost! 🚀

    What Lemonade does:

    Run large language models on‑premise, tapping NPUs (Ryzen AI) and GPUs via Vulkan for lightning‑fast inference. It speaks OpenAI’s API, supports GGUF & ONNX, and ships with a Python SDK + CLI for deep customization—all on Windows or Linux.
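
    Because Lemonade speaks OpenAI’s API, a client only has to build a standard chat-completions request. In this sketch the base URL, port, path, and model name are assumptions (check your own server’s settings), not values from the release notes:

```python
import json

def chat_request(base_url, model, prompt, max_tokens=256):
    """Build an OpenAI-style chat-completions request as (url, JSON body)."""
    url = base_url + "/chat/completions"
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }
    return url, json.dumps(body)

# Base URL and model name below are placeholders, not Lemonade defaults.
url, body = chat_request("http://localhost:8000/api/v1", "Qwen2.5-0.5B", "hi")
print(url)
```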

    What’s fresh in 9.3.1

    • Code‑signed Windows MSI installers

    No more “unknown publisher” warnings—installers are now cryptographically signed via SignPath.

    • Ryzen AI SW 1.7 fully integrated

    The latest Ryzen AI stack works out‑of‑the‑box. (If you’re upgrading, check the migration guide – a few config keys have moved.)

    • Marketplace baked into the app

    Browse and install featured apps directly from Lemonade’s revamped UI.

    • UI & UX tweaks

    • Redesigned Marketplace for smoother browsing (`#1045`)

    • Chat message width now stays consistent while editing (`#1040`)

    • Under‑the‑hood improvements

    • Backend refactor for better performance (`#1036`)

    • `max_tokens` patch in llama.cpp to respect OpenAI limits (`#1044`)

    • Extras

    • Snap store badge now shows the latest stable version

    • Mobile web‑app redirection added

    Breaking change ⚠️

    Ryzen AI 1.7 migration shifts some config keys—see the migration guide before you upgrade.

    Quick install links

    | OS | Server + App | Server‑Only |
    |----|--------------|-------------|
    | Windows | `lemonade.msi` | `lemonade-server-minimal.msi` |
    | Ubuntu | `lemonade_9.3.1_amd64.deb` | `lemonade-server-minimal_9.3.1_amd64.deb` |

    Other flavors (Docker, Arch, Fedora, Debian…) are listed in the Installation Options docs.

    Full changelog: v9.3.0 → v9.3.1. Happy tinkering! 🎉

    🔗 View Release

  • Ollama – v0.15.6

    Ollama v0.15.6 🚀

    Run LLMs locally with ease—now with a heads‑up on memory.

    What’s new?

    • Docs bump: The release notes now warn that parallel serving (configured via `OLLAMA_NUM_PARALLEL`) needs more RAM. If you’re scaling inference across multiple concurrent requests, allocate extra memory to keep things smooth.

    That’s the only change in this tag—no new features or bug fixes.

    Pro tip:

    • Double‑check your container/VM RAM settings before launching parallel jobs. A quick memory bump can save you from unexpected OOM crashes.
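
    A back-of-envelope way to size that extra memory (the formula is a rough assumption, not from the release notes): weights are loaded once, but each parallel slot carries its own context (KV cache):

```python
def est_ram_gb(model_gb, kv_gb_per_slot, n_parallel):
    """Rough sizing: weights are shared once; each parallel slot
    adds its own context (KV cache) on top."""
    return model_gb + n_parallel * kv_gb_per_slot

# e.g. a ~5 GB quantized model with ~1 GB of context per slot, 4 slots:
print(est_ram_gb(5.0, 1.0, 4))  # 9.0
```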

    Happy tinkering! 🎉

    🔗 View Release