Author: Tater Totterson

  • Tater – Tater v37

    Tater v37 just turned your MiSTer into a voice-activated retro arcade 🎮✨

    Say “play Super Mario 3 on SNES” — and Tater handles the rest:

    • 🕹️ Finds your game (even with typos!) using fuzzy matching
    • 🔍 Auto-detects the right MiSTer core from your setup
    • 🚀 Launches it via MiSTer Remote (thanks, wizzomafizzo!)
    • 📸 Captures clean screenshots + auto-formats captions for Discord, HA, IRC, WebUI — no junk, just perfect shares
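    The typo-tolerant lookup above can be pictured with a minimal sketch. This is not Tater's actual code, just an illustration of fuzzy title matching using Python's stdlib `difflib`; the library list and cutoff are hypothetical.

```python
import difflib

def find_game(query, library):
    """Return the closest library title to the query, tolerating typos."""
    titles = [t.lower() for t in library]
    matches = difflib.get_close_matches(query.lower(), titles, n=1, cutoff=0.6)
    if not matches:
        return None
    # Map the lowercased match back to the original title.
    lookup = {t.lower(): t for t in library}
    return lookup[matches[0]]

library = ["Super Mario World", "Super Metroid", "Chrono Trigger"]
result = find_game("Super Mario Wrold", library)  # typo still resolves
```

    A cutoff around 0.6 keeps unrelated queries from matching anything, so a nonsense request simply returns nothing instead of launching the wrong core.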

    New magic:

    • 🎙️ Say “now_playing” → Tater calls out the game you’re currently running

    • 🏠 “go_to_menu” → One command to return to MiSTer’s main menu
    • 🧠 Smarter game indexing with synonyms — “Zelda” = “Legend of Zelda,” no stress
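    The synonym-aware indexing can be thought of as a lookup applied before matching. The table below is hypothetical, not Tater's real index, which its indexing script generates:

```python
# Hypothetical synonym table; the real index is larger and auto-generated.
SYNONYMS = {
    "zelda": "legend of zelda",
    "smb3": "super mario bros. 3",
}

def normalize_title(query: str) -> str:
    """Expand a shorthand title to its canonical form before matching."""
    q = query.lower().strip()
    return SYNONYMS.get(q, q)
```

    Normalizing first means the fuzzy matcher only ever sees canonical names, so shorthand and full titles land on the same game.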

    Setup? Drop `remote.sh` in `/media/fat/Scripts/`, set `MISTER_HOST:PORT`, and run the index script.

    One voice command. A whole retro library awakened.

    Go play something wonderful. 🕹️💛

    🔗 View Release

  • Tater – Tater v36

    Tater v36 just dropped—and it’s spud-tier genius 🥔🚀

    Matrix support is LIVE: encrypted, federated, and fully synced with Element, Cinny & FluffyChat. E2EE? Check. Auto-avatar updates via Redis? Double check. Typing indicators, smart mentions, and perfect Markdown rendering (emojis included!)—all working flawlessly.

    Plugins? Updated & unified: ComfyUI, web summaries, YouTube digests, SFTP monitors—they now run identically across Discord, IRC, WebUI, and Matrix. No more platform lock-in.

    Logs are cleaner, SDK noise is gone, and your WebUI avatar auto-syncs to Matrix in real time.

    TL;DR: Tater’s now the Swiss Army knife of the fediverse—private, powerful, and proudly decentralized.

    Redis. Coffee. AI magic. 🤖☕

    🔗 View Release

  • Perplexica – v1.11.2

    Perplexica v1.11.2 just dropped—and it’s a bug-slaying masterpiece! 🎯

    Transformer models? Now loading properly—no more cryptic “why isn’t this working?” panic.

    Model selection? Actually updates the state now (goodbye, ghost selections 🕶️).

    Empty messages? Eradicated. No more accidental blank sends derailing your flow.

    Small patch, huge win for folks who just want AI search to work.

    Built on SearxNG, supports Qwen, DeepSeek, Llama, Mistral—local or cloud.

    Docker or direct install. API ready.

    Your AI zen is restored. 🧠✨

    Full changelog: [v1.11.1…v1.11.2]

    🔗 View Release

  • Deep-Live-Cam – Version 2.3b now released

    🚨 Deep-Live-Cam v2.3b just dropped—and it’s smooth as silk now!

    🔥 New & Fixed:

    • LogsFace — No more false positives. Clean, accurate face detection.
    • Mouth Mask — Precision tracking that actually sticks to your lips, no more ghosting.
    • FPS Counter — Real-time performance stats, now stable and useful (no more spikes or crashes).

    ⚡ Runs faster on CUDA, CoreML, DirectML & OpenVINO.

    📦 Just run `deep-live-cam.bat`—no more setup headaches.

    ✨ Only available via QuickStart right now. Grab it before the link expires—this version is too good to wait. 🎥💥

    🔗 View Release

  • Perplexica – v1.11.1

    Perplexica v1.11.1 just dropped 🚀 — your open-source Perplexity alternative just got way smoother!

    • No more hanging searches — SearxNG timeouts? Fixed. Queries now actually finish instead of ghosting you.
    • Your go-to LLM (Qwen, DeepSeek, Llama, Mistral) remembers your pick — stored in localStorage. No more dropdown roulette on reload.
    • Run commands? Your data now persists via volumes. Say hello to saved outputs, goodbye to “oh no, I lost my last run.”

    Tiny release. Huge quality-of-life wins. Perfect for devs who just wanna search, not reconfigure. 🛠️✨

    🔗 View Release

  • Text Generation Webui – v3.16

    🚀 Text Generation WebUI v3.16 just dropped—and it’s a game-changer for local LLM folks!

    New portable build via symlink? Yes, please. Devs juggling multiple setups can now switch models and configs without reinstalling. Big shoutout to @reksar! 🙌

    macOS Apple Silicon users—your day is saved. Python deps now work flawlessly on Tahoe (thanks @drieschel)! 🍎

    Backend upgrades? Oh yeah:

    • llama.cpp updated to latest GGML fork → now supports Llama-Mini-2.0 and Ring-Mini-2.0! Tiny but mighty models, unlocked.
    • ExLlamaV3 v0.0.11 = faster inference, smoother text flow.
    • Triton-Windows updated to 3.5.0.post21 → better CUDA perf on Windows rigs.

    Portable builds are now even easier:

    📥 Download → 📦 Unzip → 💾 Copy your old `user_data` folder in → ✅ All models, themes, and settings preserved. No pip. No venvs. Just AI magic.
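    The upgrade step above (copy your old `user_data` folder into the fresh unzip) is easy to script. A minimal sketch, assuming both installs are plain directories; the function name is hypothetical:

```python
import shutil
from pathlib import Path

def migrate_user_data(old_install: Path, new_install: Path) -> None:
    """Copy user_data (models, themes, settings) from an old portable
    build into a freshly unzipped one."""
    src = old_install / "user_data"
    if not src.is_dir():
        raise FileNotFoundError(f"no user_data found in {old_install}")
    # dirs_exist_ok lets us merge over the new build's default user_data.
    shutil.copytree(src, new_install / "user_data", dirs_exist_ok=True)
```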

    Pick your build:

    • NVIDIA? → `cuda12.4` (new) or `cuda11.7` (legacy)
    • AMD/Intel? → Use `vulkan`
    • CPU-only? → `cpu` build
    • Mac? → `macos-arm64` (M-series) or `macos-x86_64`

    No install. No fuss. Just drop-in, run, and chat with your LLMs like never before. 🚀

    🔗 View Release

  • Wyoming Openai – Streaming hotfix and Chatterbox TTS release (0.3.8)

    🎙️ Wyoming OpenAI 0.3.8 is live—and TTS streaming just got a serious upgrade!

    Say goodbye to stilted audio pauses. The new smart TTS streaming uses pySBD to chunk text at sentence boundaries, then prefetches the next line while playing the current one—so even if OpenAI stumbles, your voice assistant keeps flowing.
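    The chunk-then-prefetch idea can be sketched in a few lines. This is a toy stand-in, not the project's implementation: a naive regex splitter replaces pySBD, `synthesize` is a placeholder for the real TTS request, and a thread pool bounds how many requests run at once:

```python
import re
from concurrent.futures import ThreadPoolExecutor

def split_sentences(text):
    # Naive stand-in for pySBD: split after sentence-ending punctuation.
    return [s for s in re.split(r"(?<=[.!?])\s+", text.strip()) if s]

def synthesize(sentence):
    # Placeholder for the real TTS request.
    return f"<audio:{sentence}>"

def stream_tts(text, max_prefetch=3):
    """Yield audio chunks in order; max_workers bounds concurrent synthesis,
    so later sentences are prefetched while earlier ones play."""
    sentences = split_sentences(text)
    with ThreadPoolExecutor(max_workers=max_prefetch) as pool:
        futures = [pool.submit(synthesize, s) for s in sentences]
        for f in futures:  # results come back in sentence order
            yield f.result()

chunks = list(stream_tts("Hello there. How are you? Great."))
```

    Because each chunk is synthesized while the previous one plays, a slow or failed upstream request only stalls one sentence instead of the whole reply.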

    🚀 Highlights:

    • 🚀 Parallel prefetching: Up to 3 TTS requests running at once, sequenced perfectly.
    • 🐳 Chatterbox TTS support: Drop-in Docker compose for self-hosted neural voices—with voice cloning!
    • 🛡️ Robust error handling: New `TtsStreamError` + `_abort_synthesis` to kill broken streams and stop audio doubles.
    • 📦 Install via `pip install wyoming-openai`—no git needed.
    • 🔧 Updated deps: `openai==2.3.0`, `wyoming==1.8.0` for full compatibility.

    Perfect for Home Assistant users who want smooth, low-latency voice—whether on cloud APIs or local models like Piper, Kokoro, or Edge TTS.

    No more buffering. Just natural, uninterrupted speech. 🎧✨

    Check the docs and start streaming!

    🔗 View Release

  • Perplexica – v1.11.0

    🚀 Perplexica v1.11.0 just dropped — your open-source Perplexity AI alternative just got a massive upgrade!

    New Setup Wizard — No more config nightmares. Pick your model, pick your provider — done in 60 seconds.

    ⚙️ Config System Reborn — Live updates, hash-based tracking, and zero-loss migrations. Settings now survive reboots.
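    The hash-based tracking idea is simple to illustrate. A minimal sketch, not Perplexica's actual mechanism: hash a canonical serialization of the config, and a changed hash means the settings changed:

```python
import hashlib
import json

def config_hash(config: dict) -> str:
    """Stable hash of a config dict; any value change changes the hash."""
    # sort_keys makes the serialization canonical regardless of key order.
    canonical = json.dumps(config, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()

old = {"model": "gpt-5", "provider": "openai"}
new = {"model": "claude-opus-4.1", "provider": "anthropic"}
changed = config_hash(old) != config_hash(new)
```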

    🪄 Single Docker Install — `docker run …` and you’re running a full AI search engine. No repos, no deps. Pure magic.

    🧠 New Models Galore — GPT-5, Claude Opus 4.1, Gemini 2.5, O3… plus AIML API, LM Studio, and dynamic Transformers loading — models load on-demand, fast & lean.

    📱 UI/UX Glow-Up — Sleek sidebar, mobile settings button, weather widget with geolocation 🌡️, topic filters, preview mode, and file uploads + light theme finally working right.

    Dev Love — API validation, clean citations, instrumentation-based migrations (bye-bye ts-node), and faster message handling.

    🐛 Bugs Eaten Alive — Double JSON, iOS zoom chaos, DOC upload fails, light mode glitches, and that pesky “repeated first token” — all gone.

    👏 17 New Contributors — Huge props to @ClawCloud-Ron, @haddadr, @alckasoc, and the crew for making this release legendary.

    One command. Zero friction. All the power.

    Upgrade now — your next search just got smarter. 🚀

    🔗 View Release

  • ComfyUI – v0.3.66

    ComfyUI v0.3.66 is live 🚀 — and it’s a quiet powerhouse for your AI workflows!

    New `LatentUpscale` node — upscale in latent space before decoding for sharper, cleaner high-res results with less noise.
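    Why upscale in latent space? The latent grid is many times smaller than the decoded image, so resizing it first is cheap. A toy nearest-neighbour upscale on nested lists, purely to illustrate the idea (the real node works on latent tensors with proper interpolation modes):

```python
def upscale_latent(latent, factor=2):
    """Nearest-neighbour upscale of a 2D latent grid (toy illustration)."""
    return [[row[x // factor] for x in range(len(row) * factor)]
            for row in latent
            for _ in range(factor)]
```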

    Memory optimized — fewer spikes during batch processing, perfect for mid-tier GPUs.

    🔍 Faster node search — partial matches work now! Type “upscale” and get all related nodes instantly.

    🧩 Custom node fix — no more vanishing nodes after reloads (we feel you 😅).

    🎨 UI polish — smoother transitions + zoom snapping to 25%/50%/100% for pixel-perfect control.
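    Zoom snapping boils down to pulling a nearby zoom level onto the nearest stop. A hypothetical sketch, with stops and tolerance chosen for illustration:

```python
def snap_zoom(zoom, stops=(0.25, 0.5, 1.0), tolerance=0.03):
    """Snap a zoom level to the nearest stop if within tolerance."""
    nearest = min(stops, key=lambda s: abs(s - zoom))
    return nearest if abs(nearest - zoom) <= tolerance else zoom
```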

    Pro tip: Pair `LatentUpscale` with KSampler + High-Res Fix for insane detail without VRAM overload.

    Upgrade now — your next masterpiece is just a click away. 🖼️💻

    🔗 View Release

  • MLX-LM – v0.28.3

    🔥 MLX LM v0.28.3 is LIVE! 🔥

    Heads up, Apple silicon LLM tinkerers – MLX LM just dropped a massive update! This release is packed with refinements and new features to help you build, train & serve even better models.

    Here’s the breakdown:

    • Memory Efficiency: State Space Models (SSM) are leaner now. 🙌
    • MoE Magic: Lots of improvements to Mixture of Experts – LoRA fixes, bailing logic, and a new LFM2 option!
    • Qwen3-VL Support: Visual language model support added with Qwen3-VL (plus a dense version!). 🖼️
    • Faster GPT2: Batch processing for GPT-2 just got quicker.
    • DWQ Tweaks: Depthwise Quantization refined with temperature adjustments.
    • Python 3.9 Love: Qwen3 support now extends to Python 3.9 users!
    • Plus: Cleaned up params, simplified I/O, CUDA install fixes, batched SSM masking, gradient accumulation, data parallel eval, Jamba support & LLM Benchmarks! 📊

    Dig into the full changelog – there’s a ton here to play with! 🎉

    🔗 View Release