Author: Tater Totterson

  • Ollama – v0.12.8-rc0: win: avoid ID mixups on refresh (#12869)

    🚀 Ollama v0.12.8-rc0 just dropped — and Windows AMD users, this one’s for YOU!

    If you’ve been battling “out of memory” errors or weird VRAM stats after a driver update or display change, you’re not alone. Ollama now filters out integrated GPUs during device detection, so it stops misassigning your dGPU’s VRAM to your iGPU. 💥

    What’s new?

    • Windows-only fix: stops GPU IDs from shuffling when devices refresh on AMD systems
    • Ignores iGPUs — only your real Radeon/Ryzen GPU gets the workload
    • No more mystery crashes. Just clean, stable LLM inference.
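
The device-filtering idea above can be sketched roughly like this. This is a minimal, illustrative Python sketch under stated assumptions, not Ollama's actual implementation (which is in Go); the device names, VRAM figures, and `integrated` flag are hypothetical:

```python
# Sketch: drop integrated GPUs during device detection so an iGPU entry
# can never be assigned the discrete GPU's VRAM. Illustrative only.

def filter_devices(devices):
    """Keep only discrete GPUs; fall back to the full list if none exist."""
    discrete = [d for d in devices if not d["integrated"]]
    return discrete or devices

devices = [
    {"name": "AMD Radeon RX 7900 XTX", "vram_mb": 24576, "integrated": False},
    {"name": "AMD Radeon Graphics (iGPU)", "vram_mb": 512, "integrated": True},
]
print([d["name"] for d in filter_devices(devices)])
```

The fallback matters: on an iGPU-only machine, filtering everything out would leave no device at all.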

    Perfect for Ryzen + Radeon folks running Llama 3 or DeepSeek-R1 locally. Upgrade now — your VRAM will thank you. 🛠️

    🔗 View Release

  • Ollama – v0.12.7: int: harden server lifecycle (#12835)

    🚀 Ollama v0.12.7 just dropped — and it’s the quiet hero your dev environment didn’t know it needed.

    This patch (#12835) locks down the server lifecycle like a vault:

    • 🚫 No more zombie `ollama` processes haunting your RAM after shutdowns
    • 💥 Cleaner exits when the server crashes or gets killed
    • 🧹 Smarter resource cleanup on Linux, macOS, and Windows

    Perfect for CI/CD pipelines, automated tests, or anyone who’s ever stared at Task Manager wondering why Ollama won’t die.

    No flashy new models… just rock-solid infrastructure that works when it matters most.

    Your 2am deploy will thank you. 🛡️💻

    🔗 View Release

  • Lemonade – v8.2.0

    🚀 Lemonade v8.2.0 just dropped — and it’s a massive leap for local LLM lovers!

    Ryzen AI SW 1.6 support — Run Qwen3 with 4K prompts using hybrid NPU/GPU magic on AMD Ryzen. Faster inference, lower power, zero cloud dependency. 💥

    📥 Load ANY model — Hugging Face? Local folder? Drag & drop it in. No more conversion headaches. Just point and run.

    UI got a glow-up:

    • Upload models directly from the web interface — no CLI required!
    • Smoother, smarter polling = fewer annoying refreshes
    • Models flagged `suggested=false`? Hidden. Clean recommendations only.
    • RAI/FLM models auto-hide on unsupported OSes — no more confusion
    • Linux? Fallbacks now work even if FLM isn’t installed

    🔧 Under the hood:

    • macOS port conflicts? Fixed. 🍎
    • CI/CD actually works now (no more silent crashes!)
    • Docs updated with Dify & Copilot integrations 📚
    • New Log Filter Extension for crystal-clear debugging 🔍

    Big shoutout to first-time contributors @HyunhoAhn and @meghsat — welcome to the crew! 👏

    Upgrade. Tinker. Crush your next local LLM benchmark.

    🔗 Full changelog: v8.1.12…v8.2.0

    🔗 View Release

  • Ollama – v0.12.7-rc1

    Hey AI tinkerers! 🚀

    Ollama just dropped v0.12.7-rc1 — quiet release, big impact.

    Fixed `conv2d` bias calculation (PR #12834)

    If you’re running vision-capable models like LLaVA locally, this patch ensures the convolutional layers in their image encoders calculate biases correctly. No more subtle accuracy drift in image outputs.
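
For intuition on what a conv2d bias bug corrupts: the bias is a per-output-channel offset added after each convolution sum, so a miscomputed bias shifts every pixel of that channel's feature map. A minimal numpy sketch of the correct behavior (illustrative, not Ollama's ggml kernel):

```python
# Sketch: naive conv2d with per-output-channel bias.
# x: (H, W) input, w: (C_out, kH, kW) kernels, b: (C_out,) biases.
import numpy as np

def conv2d(x, w, b):
    c_out, kh, kw = w.shape
    h_out, w_out = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.zeros((c_out, h_out, w_out))
    for c in range(c_out):
        for i in range(h_out):
            for j in range(w_out):
                # convolution sum first, then the channel's bias
                out[c, i, j] = np.sum(x[i:i+kh, j:j+kw] * w[c]) + b[c]
    return out

x = np.ones((3, 3))
w = np.ones((2, 2, 2))          # two output channels, 2x2 kernels
b = np.array([0.0, 10.0])       # each channel gets its own bias
out = conv2d(x, w, b)
```

Here every window sums to 4, so channel 0 is uniformly 4 and channel 1 uniformly 14; get the bias wrong and the whole channel drifts, which is exactly the "subtle accuracy drift" the patch removes.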

    No flashy new models or UI tweaks this time — just clean, reliable math under the hood. Perfect for devs who need stable inference with image-capable LLMs.

    Pro tip: If you’re fine-tuning or deploying vision models via Ollama, upgrade now. Precision matters. 📸🧠

    🔗 View Release

  • Ollama – v0.12.7-rc0

    🚀 Ollama v0.12.7-rc0 just landed — and it’s a game-changer for local multimodal AI!

    Say hello to Qwen3-VL — Alibaba’s powerful new vision-language model, now fully supported in Ollama. Run image + text understanding locally: analyze photos, scan docs, or ask “what’s in this picture?” — zero cloud required. 📸🧠

    ✨ Also new:

    • Faster model loads on ARM64 (M-series Macs, Raspberry Pi 5)
    • Smarter GPU memory — fewer OOM crashes with multi-image prompts
    • CLI fixes on Windows: `ollama run` is now more stable

    Grab it with:

    `ollama run qwen3vl`
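
Beyond the CLI, Ollama's local REST API (`POST /api/generate`) accepts base64-encoded images alongside the prompt. A small sketch of building such a request; the model tag follows the command above and the image bytes are placeholders:

```python
# Sketch: build an image + text request for Ollama's /api/generate endpoint.
# The "images" field takes base64-encoded image data.
import base64

def build_payload(model, prompt, image_bytes):
    return {
        "model": model,
        "prompt": prompt,
        "images": [base64.b64encode(image_bytes).decode("ascii")],
        "stream": False,
    }

payload = build_payload("qwen3vl", "What is in this picture?", b"\x89PNG...")
# POST this as JSON to http://localhost:11434/api/generate
```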

    Still in RC — stable drop coming soon. Time to go offline with vision? ✅

    🔗 View Release

  • ComfyUI – v0.3.67

    🚀 ComfyUI v0.3.67 just dropped — and it’s a quiet powerhouse!

    • New `LatentUpscale` node → Fine-tune upscaling with interpolation & sharpening controls. Say goodbye to bloated memory usage in high-res workflows.
    • Negative prompt bleed FIXED → Finally, clean conditioning. No more sneaky negative prompts muddying your positives.
    • WebUI snappier than ever → Dragging nodes in massive workflows? Smooth as butter now.
    • SD3.5 Turbo support → Early access for custom node devs — ComfyUI’s ahead of the curve again.
    • macOS PNG fix → No more corrupted metadata. Your exports are safe now. 🎉
    • UI polish → Better labels, smarter tooltips, and a new `Ctrl+Shift+S` for Quick Save.
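
For a feel of what an upscale-with-sharpening step does to a latent: interpolate to the larger size, then add back a fraction of the high-frequency detail (an unsharp mask). A rough numpy sketch under those assumptions, not the ComfyUI node's actual code:

```python
# Sketch: nearest-neighbour latent upscale plus optional unsharp-mask sharpening.
import numpy as np

def upscale_latent(latent, scale=2, sharpen=0.0):
    """latent: (H, W) array -> (H*scale, W*scale)."""
    up = np.repeat(np.repeat(latent, scale, axis=0), scale, axis=1)
    if sharpen > 0:
        # simple cross-shaped blur serves as the low-pass for the unsharp mask
        blur = (np.roll(up, 1, 0) + np.roll(up, -1, 0) +
                np.roll(up, 1, 1) + np.roll(up, -1, 1) + up) / 5.0
        up = up + sharpen * (up - blur)
    return up

lat = np.arange(4.0).reshape(2, 2)
out = upscale_latent(lat, scale=2)
```

Real nodes expose the interpolation mode (bilinear, bicubic, etc.) and sharpening strength as parameters; this sketch hardwires nearest-neighbour to keep the idea visible.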

    If you’re running SDXL or prepping for SD3 — this update is your new secret weapon. Update now and feel the difference! 🛠️✨

    🔗 View Release

  • Tater – Tater v37

    Tater v37 just turned your MiSTer into a voice-activated retro arcade 🎮✨

    Say “play Super Mario 3 on SNES” — and Tater handles the rest:

    • 🕹️ Finds your game (even with typos!) using fuzzy matching
    • 🔍 Auto-detects the right MiSTer core from your setup
    • 🚀 Launches it via MiSTer Remote (thanks, wizzomafizzo!)
    • 📸 Captures clean screenshots + auto-formats captions for Discord, HA, IRC, WebUI — no junk, just perfect shares
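
Typo-tolerant matching like the first bullet describes can be done with the standard library alone. A minimal sketch using `difflib` (illustrative; Tater's actual matcher, library titles, and cutoff are assumptions):

```python
# Sketch: fuzzy game-title lookup that survives typos like "super marrio 3".
import difflib

LIBRARY = ["Super Mario World", "Super Mario Bros. 3", "The Legend of Zelda"]

def find_game(query, library=LIBRARY):
    """Return the library title closest to the (possibly misspelled) query."""
    lowered = [t.lower() for t in library]
    matches = difflib.get_close_matches(query.lower(), lowered, n=1, cutoff=0.4)
    if not matches:
        return None
    return library[lowered.index(matches[0])]
```

`get_close_matches` ranks candidates by `SequenceMatcher` similarity, so near-misses still resolve to the right title while truly unrelated queries fall below the cutoff and return `None`.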

    New magic:

    • 🎙️ Say “now_playing” → Tater cheers you on with the current game
    • 🏠 “go_to_menu” → One command to return to MiSTer’s main menu
    • 🧠 Smarter game indexing with synonyms — “Zelda” = “Legend of Zelda,” no stress

    Setup? Drop `remote.sh` in `/media/fat/Scripts/`, set `MISTER_HOST:PORT`, and run the index script.

    One voice command. A whole retro library awakened.

    Go play something wonderful. 🕹️💛

    🔗 View Release

  • Tater – Tater v36

    Tater v36 just dropped—and it’s spud-tier genius 🥔🚀

    Matrix support is LIVE: encrypted, federated, and fully synced with Element, Cinny & FluffyChat. E2EE? Check. Auto-avatar updates via Redis? Double check. Typing indicators, smart mentions, and perfect Markdown rendering (emojis included!)—all working flawlessly.

    Plugins? Updated & unified: ComfyUI, web summaries, YouTube digests, SFTP monitors—they now run identically across Discord, IRC, WebUI, and Matrix. No more platform lock-in.

    Logs are cleaner, SDK noise is gone, and your WebUI avatar auto-syncs to Matrix in real time.

    TL;DR: Tater’s now the Swiss Army knife of the fediverse—private, powerful, and proudly decentralized.

    Redis. Coffee. AI magic. 🤖☕

    🔗 View Release

  • Perplexica – v1.11.2

    Perplexica v1.11.2 just dropped—and it’s a bug-slaying masterpiece! 🎯

    Transformer models? Now loading properly—no more cryptic “why isn’t this working?” panic.

    Model selection? Actually updates the state now (goodbye, ghost selections 🕶️).

    Empty messages? Eradicated. No more accidental blank sends derailing your flow.

    Small patch, huge win for folks who just want AI search to work.

    Built on SearxNG, supports Qwen, DeepSeek, Llama, Mistral—local or cloud.

    Docker or direct install. API ready.

    Your AI zen is restored. 🧠✨

    Full changelog: [v1.11.1…v1.11.2]

    🔗 View Release

  • Deep-Live-Cam – Version 2.3b now released

    🚨 Deep-Live-Cam v2.3b just dropped—and it’s smooth as silk now!

    🔥 New & Fixed:

    • LogsFace — No more false positives. Clean, accurate face detection.
    • Mouth Mask — Precision tracking that actually sticks to your lips, no more ghosting.
    • FPS Counter — Real-time performance stats, now stable and useful (no more spikes or crashes).

    ⚡ Runs faster on CUDA, CoreML, DirectML & OpenVINO.

    📦 Just run `deep-live-cam.bat`—no more setup headaches.

    ✨ Only available via QuickStart right now. Grab it while the link lasts; this version is too good to miss. 🎥💥

    🔗 View Release