• Tater – Tater v79

    🥔 Tater v79 — “Like It’s 2023 Again” 🎙️

    The local-native AI powerhouse Tater just dropped a massive update that transforms your hardware satellites from isolated voice boxes into a fully interconnected intercom system! If you’re running your own Hydra-powered stack via Ollama or LM Studio, this is the upgrade your smart home ecosystem has been waiting for.

    The Big Highlights:

    • Pick Your Fighter: You can now switch between `microWakeWord` and `openWakeWord` live on VoicePE, Satellite1, and S3Box firmware. No more painful reflashing or rebuilding—just tweak wake engines and audio settings directly from Live Entities!
    • Built-In openWakeWord Server: Tater now hosts its own local detector (`/api/openwakeword/detect`) with ONNX runtime support. If your remote OWW server goes down, your satellites can automatically fall back to `microWakeWord` for seamless reliability.
    • Intercom Mode: A total game-changer! Use your satellites as a true intercom system to move voice between rooms seamlessly. The satellites are now smart enough to coordinate sessions, ensuring no more “shouting matches” where every device tries to talk at once. 🗣️
    • Standalone OWW Server: For those who want a decoupled setup, a new standalone server is available with CPU/NVIDIA Docker support and easy GitHub Actions builds.
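The remote-to-local fallback described above can be sketched roughly like this. The `/api/openwakeword/detect` path comes from the release notes, but the payload shape, response fields, and helper names here are assumptions for illustration only:

```python
import json
import urllib.request


def detect_remote(audio_bytes: bytes, server: str) -> bool:
    """POST raw audio to the openWakeWord detect endpoint.

    The payload/response contract shown here is hypothetical; check the
    Tater docs for the real one.
    """
    req = urllib.request.Request(
        f"{server}/api/openwakeword/detect",
        data=audio_bytes,
        headers={"Content-Type": "application/octet-stream"},
    )
    with urllib.request.urlopen(req, timeout=2) as resp:
        return json.load(resp).get("detected", False)


def detect_with_fallback(audio_bytes, server, local_detector):
    """Try the remote OWW server; fall back to on-device detection on failure."""
    try:
        return detect_remote(audio_bytes, server)
    except OSError:
        # Remote server unreachable: fall back to microWakeWord locally.
        return local_detector(audio_bytes)
```

The key design point is that the satellite never blocks on a dead server: any network error immediately hands detection to the local engine.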

    Firmware & Management Upgrades:

    • Firmware v2.0.0: Major firmware stacks (VoicePE, Satellite1, S3Box Display) have been bumped to v2.0.0.
    • Proactive Updates: The Dashboard can now detect outdated devices and route you directly to the Firmware UI. You can update individual devices or hit “Update All” to keep your fleet current without any 2 AM surprises. 🛠️

    This release effectively moves Tater from a group of standalone gadgets to a unified, intelligent voice assistant ecosystem!

    🔗 View Release

  • Lemonade – v10.5.0

    🍋 Lemonade SDK v10.5.0 is officially here!

    If you’re obsessed with squeezing every bit of performance out of your local hardware, this update is a must-have for keeping your inference engines running smoothly and in sync with the latest drivers. 🚀

    What’s new in this release:

    • llama.cpp Engine Boost: The core engine has been bumped to build `b9174`. This brings updated compatibility tweaks and performance optimizations for running those heavy GGUF models locally.
    • AMD/ROCm Refresh: We’ve updated both `rocm-stable` (build `b9174`) and `rocm-nightly` (build `b126`). If you’re leveraging AMD hardware or the Ryzen AI series, this ensures your SDK stays perfectly aligned with the latest ROCm driver builds.

    Keep those local LLMs humming! 🛠️

    🔗 View Release

  • Tater – Tater v78

    🥔 Tater v78 — “Tater Can Read the Room Now” 👀🎙️

    The local-native AI powerhouse just got a massive brain and sensory upgrade! Tater is moving beyond being a standalone assistant into a fully distributed, sensing environment that can coordinate across your entire network.

    What’s cooking in v78:

    • 🎙️ Multi-Satellite Voice & Remote Playback: Say goodbye to isolated devices. You can now talk into a satellite in the kitchen and have Tater’s response play through a Sonos speaker or another satellite elsewhere in the house. It’s seamless, cross-device audio coordination!
    • 😐 Emotion ID: Using SpeechBrain, Tater can now detect emotional tones, mood, and vocal energy. These “emotional hints” are fed directly into prompts, allowing Tater to adapt its personality to match your vibe. 🎭
    • 🖥️ S3Box Display Platform: New support for the ESP32-S3-BOX-3! Using LVGL UI, these tiny terminals can now show voice animations, sensor bubbles, and camera snapshots via a brand-new Display API.
    • 🧠 Revamped Dashboard & Awareness: The dashboard has a fresh Tater-orange glow-up. You can now monitor system health, scheduled briefs, and real-time camera snapshots in one centralized hub. 📊
    • ⚡ Infrastructure & Firmware Upgrades:
      • Easier Flashing: A new Firmware Manager makes browser/USB flashing much smoother (less “bricked” anxiety!).
      • Hugging Face Integration: Manage HF tokens and access gated/private models directly within the app.
      • Environment Core: Better support for Ecobee and Ecowitt sensors for smarter, temperature-aware automation.

    🛠️ Coming Soon: The groundwork is officially laid for custom wake words! We’re moving toward a future where you can train and select your own unique triggers directly within the firmware tools. 🚀

    🔗 View Release

  • Crankboy App – Release v2.0.2

    Get your handhelds ready because CrankBoy just dropped a fresh patch! 🕹️

    If you haven’t tried this yet, it’s a high-performance Game Boy emulator specifically tuned for the Playdate console. It handles everything from full-speed emulation and automatic save states to downloading cover art and applying patches (.bps, .ips, .ups) on the fly. You can even bundle ROMs to launch them straight from your Playdate menu!

    What’s new in v2.0.2:

    • Stability Tune-up: This is a maintenance-focused release designed to smooth out wrinkles in the code. 🛠️
    • Bug Fixes: The update focuses on essential fixes to ensure much more reliable performance and stability during your emulation sessions.

    It’s a quick, essential polish to keep your retro gaming experience running buttery smooth! 🚀

    🔗 View Release

  • Micro Wake Word Trainer Apple Silicon – v4

    🥔 Tater Voice Trainer v4 — The “Real World” Update is here! 📡

    If you’ve been training wake words on your M1, M2, or M3 Mac for Home Assistant, get ready—this isn’t just a patch; it’s a complete overhaul that turns your workflow into a continuous, closed-loop system. No more guessing with synthetic data; you can now train your models using actual audio captured directly from your hardware!

    What’s New in v4:

    • Live Audio Pipeline: Stream real-world data (hits, false wakes, and “near misses”) directly from your VoicePE or Satellite devices into the trainer. This means training on your specific microphone and local background noise for much higher accuracy. 🎤
    • New Studio UI: A streamlined, tabbed interface makes management a breeze:
      • Trainer: Manage languages and sample counts.
      • Captured Audio: Instant review of incoming clips.
      • Samples: Easy management of positive and negative training sets.
      • Firmware: Build and flash directly from the UI.
    • Advanced Negative Training: You can now explicitly mark “false wakes” as negative samples. This is a huge win for reducing those annoying false positives in your smart home setup. 🧠
    • Integrated Firmware Workflow: The loop is officially closed! You can build and flash updated firmware for your devices directly from the Trainer UI, including support for OTA (Over-the-Air) flashing and live console logs. 🛠️

    This update transforms the tool from a simple script into a full-scale development environment. Build, capture, refine, and flash—all in one place!

    🔗 View Release

  • Text Generation Webui – v4.8

    Text-generation-webui just got a major facelift! 🚀 If you’ve been looking for that “AUTOMATIC1111” experience for your local LLMs, v4.8 is a massive leap toward a true desktop-class application.

    UI & UX Refinements

    The chat composer has been redesigned with a taller input area and pinned action buttons—giving it a sleek, modern vibe similar to Gemini and DeepSeek. You’ll also notice smoother scroll animations when sending messages and extra breathing room for action buttons below your chat history.

    Desktop & Electron Upgrades

    • True Desktop App: New portable builds are available! Just download, unzip, and double-click to run—no setup headache required. 🖥️
    • Window Persistence: The app now remembers your window size and maximize state between launches.
    • Web Mode: Added a `--no-electron` flag if you prefer using the web UI in your browser instead of the desktop window.
    • Bug Squashing: Fixed several Electron issues, including log coloring on Windows and broken speculative decoding caused by upstream `llama.cpp` changes.

    Under the Hood & API

    • New Quant Support: The update includes `ik_llama.cpp`, a fork that introduces new quantization types for better efficiency. 🛠️
    • API Enhancements: Added support for list-format content within tool and assistant messages.
    • Dependency Updates: Both `llama.cpp` and `ik_llama.cpp` have been bumped to their latest versions.
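The list-format content mentioned above follows the familiar multi-part message shape where `content` is a list of typed parts instead of a plain string. A minimal sketch (the exact part types the API accepts may vary):

```python
# An assistant message whose "content" is a list of typed parts
# rather than a single string.
message = {
    "role": "assistant",
    "content": [
        {"type": "text", "text": "Here is the summary."},
        {"type": "text", "text": "Ask if you want more detail."},
    ],
}

# Flattening the parts back into one string for display:
flat = " ".join(p["text"] for p in message["content"] if p["type"] == "text")
```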

    Pro-Tip for Updates: 💡

    Updating your portable install is now a breeze! Just extract the new version and move your `user_data` folder. Starting with version 4.0, you can even place `user_data` one level up (next to your install folder) so multiple versions of TextGen can share the same models and settings seamlessly.
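The shared `user_data` layout looks roughly like this (directory names are illustrative):

```shell
# Two portable installs side by side, with user_data one level up
# so both versions can find the same models and settings.
ROOT="$(mktemp -d)"
mkdir -p "$ROOT/textgen-v4.7/user_data" "$ROOT/textgen-v4.8"

# Move user_data out of the old install, next to the install folders.
mv "$ROOT/textgen-v4.7/user_data" "$ROOT/user_data"

ls "$ROOT"
```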

    🔗 View Release

  • ComfyUI – v0.21.1

    Get your workflows ready, because ComfyUI v0.21.1 just dropped, and it is packed with heavy-hitting node additions and some much-needed stability fixes! 🛠️

    If you haven’t dived into ComfyUI yet, it’s the ultimate node-based playground for Stable Diffusion. It lets you build complex, modular AI pipelines by simply connecting nodes—no coding required. Whether you’re upscaling, inpainting, or running full video generations, this is where the real magic happens.

    New Nodes & Model Support

    • Expanded Partner Nodes: Huge shoutout to @bigcat86 for bringing in `Flux2ImageNode` and `GrokImageEditNodeV1`. We also have new `ByteDanceSeedreamNodeV2` and an `OpenAI Image` node, both featuring `DynamicCombo` and `Autogrow` capabilities for smoother automation.
    • Claude Integration: You can now pull in the brainpower of Claude with the brand-new `Claude LLM` node! 🤖
    • HiDream-O1 Support: New support for `HiDream-O1-Image` is live, featuring optimizations to help memory usage on non-dynamic VRAM setups.
    • LoRA & 3D Enhancements: The engine now supports the `anima TE lora kohya` format, and `Save3D` has been upgraded to save vertex colors and textures.

    Core Fixes & Improvements

    • Precision Saving: Fixed a bug in `model_patcher` regarding how `safetensors` saves `fp8` data—this is a huge win for anyone working with quantized models! 💾
    • Video Stability: Improved alignment for multi-frame guides in `LTXV` mid-video processing to reduce that annoying jitter.
    • UI & Workflow Tweaks: “Create Video” has been moved to the Essentials tab for quicker access, and workflow templates have been bumped to v0.9.75.
    • Bug Squashing: Various fixes for directory creation during audio loading, WebSocket linting, and resolving `RuntimeError` issues in VOID.

    Time to update your environment and start experimenting with those new Flux and OpenAI nodes! 🚀

    🔗 View Release

  • Ollama – v0.25.0-rc0: ci: speed up release builds (#15982)

    Ollama v0.25.0-rc0 🚀

    If you’re running LLMs locally, you know Ollama is the go-to for getting models like Llama 3 and DeepSeek-R1 up and running with zero friction. This new release candidate is all about behind-the-scenes efficiency and making the engine run smoother!

    The big focus in this update is performance optimization for the development pipeline:

    • Faster Release Builds: The CI (Continuous Integration) process has been overhauled to speed up official releases, meaning updates hit your machine sooner.
    • Optimized Linux Steps: The build process for Linux has been deduplicated and streamlined for a cleaner installation flow.
    • Snappier Local Dev: These optimizations are designed to help speed up local developer builds, making it easier for the community to tinker and contribute 🛠️

    Whether you’re a dev building custom tools or just a hobbyist running models on your workstation, this update keeps the momentum moving fast!

    🔗 View Release

  • Ollama – v0.30.0-rc17

    Ollama v0.30.0-rc17 🛠️

    If you’re running local LLMs, you know Ollama is the go-to for getting models like Llama 3 and DeepSeek-R1 up and running on your machine with zero friction. It’s the ultimate toolkit for anyone wanting to experiment with open-source models privately and locally.

    This latest release candidate is a focused maintenance update designed to keep things running smoothly under the hood!

    What’s new:

    • CI/CD Polish: The team has implemented several fixes for Continuous Integration (CI) and linting processes.

    While it might look like a small bump, these behind-the-scenes tweaks are crucial for stability and ensuring the codebase stays clean as more features roll out. It’s all about making sure those builds stay green and reliable! 🚀

    🔗 View Release

  • Ollama – v0.30.0-rc16

    Ollama v0.30.0-rc16 🛠️

    If you’re running local LLMs, you know Ollama is the go-to for getting models like Llama 3 and DeepSeek-R1 up and running with zero friction. This latest release candidate is a focused update aimed at squeezing more efficiency out of your hardware!

    What’s new:

    • Batch Size Tuning: The big headline here is the ability to tune batch sizes. This is a huge win for anyone trying to optimize inference speed and squeeze every bit of performance out of their GPU or CPU setup. 🚀

    Fine-tuning these parameters can make a massive difference in throughput, especially when you’re experimenting with larger models on limited VRAM! Perfect for those of us pushing our local rigs to the limit.
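Ollama exposes batch size through the `num_batch` option in a request's `options` object. A minimal request-body sketch — the value 256 is a placeholder, since the right number depends on your model and VRAM:

```python
import json

# Request body for Ollama's /api/generate endpoint with a tuned
# batch size. num_batch trades prompt-processing throughput for memory.
body = json.dumps({
    "model": "llama3",
    "prompt": "Why is the sky blue?",
    "options": {"num_batch": 256},  # placeholder; tune for your hardware
})

# You would POST this body to http://localhost:11434/api/generate.
parsed = json.loads(body)
```

Smaller values lower peak memory at the cost of slower prompt processing, which is exactly the knob you want when squeezing a large model onto limited VRAM.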

    🔗 View Release