Category: AI

AI Releases

  • Tater – Tater v40

    Tater v40 just dropped—and it’s not just an update, it’s a personality transplant 🤖💖

    🔥 Home Assistant Gets Smarter:

    • New Events Query Brief — ask “Any motion overnight?” and get a clean, JSON-ready summary.
    • Weather Query Brief — crisp 255-char snapshots perfect for dashboards, no more truncation hacks.
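
    The "Any motion overnight?" question boils down to filtering and counting sensor events. A minimal sketch of that idea (the logbook-style input dicts are an assumption for illustration, not Tater's actual code):

    ```python
    from datetime import datetime

    def summarize_motion(entries, since):
        """Reduce logbook-style entries to a brief, JSON-ready summary.

        `entries` mimics Home Assistant's logbook payload: dicts with
        "entity_id", "state", and ISO-8601 "when" keys (an assumed
        shape, not Tater's internal format).
        """
        hits = [
            e for e in entries
            if e["entity_id"].startswith("binary_sensor.")
            and e["state"] == "on"
            and datetime.fromisoformat(e["when"]) >= since
        ]
        return {
            "motion_events": len(hits),
            "sensors": sorted({e["entity_id"] for e in hits}),
        }
    ```

    The returned dict serializes straight to JSON, which is what makes the brief dashboard-friendly.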

    🎛️ HA Integration Now Flawless:

    Auto-updating sensors, time-aware summaries for hourly polls, and zero clunky workarounds. Your smart home just got a brain upgrade.

    🎭 One Personality, Everywhere:

    Set `tater:personality` once—and it sticks across Discord, IRC, Matrix, WebUI, HomeKit… even XBMC on your Original Xbox. (Yes, really. Cortana mode optional.)

    💡 The magic? Tater now feels like it’s always been there—consistent, intuitive, and weirdly charming.

    Go ahead. Ask “Hey Tater, what’s the vibe today?”

    …your fridge might answer first. 🥔✨

    Check it out: https://github.com/TaterTotterson/Tater

    🔗 View Release

  • ComfyUI – ComfyUI version v0.4.0

    ComfyUI v0.4.0 just landed—and it’s a game-changer for workflow stability 🚀

    No more “why did my pipeline break?!” nightmares.

    Now:

    • Minor versions (v0.4.x) = rock-solid, tested releases from `master`
    • Patch versions (v0.4.1, v0.4.2) = critical bug fixes backported without forcing a full upgrade
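
    The compatibility rule above can be sketched as a simple version check (the parsing helper is an illustration of the scheme, not ComfyUI code):

    ```python
    def parse_version(tag):
        """Split a tag like 'v0.4.1' into a (major, minor, patch) tuple,
        padding missing components with zeros."""
        parts = tag.lstrip("v").split(".")
        return tuple(int(p) for p in parts) + (0,) * (3 - len(parts))

    def is_safe_update(installed, candidate):
        """A candidate is a drop-in patch if major.minor match and the
        patch number only moves forward."""
        cur, new = parse_version(installed), parse_version(candidate)
        return cur[:2] == new[:2] and new[2] >= cur[2]
    ```

    Under this rule, moving from v0.4.0 to v0.4.2 is a safe patch, while v0.5.0 is a new minor release you opt into.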

    Think of it like Docker tags for AI workflows—clean, predictable, and dev-friendly.

    Your v0.4.x install? Safe to trust. Updates won’t wreck your nodes. Patches land fast.

    Perfect for artists, producers, and devs who just wanna render—not debug version chaos. 🎨✨

    Full details: https://www.comfy.org/

    🔗 View Release

  • Ollama – v0.13.3-rc0

    🚀 Ollama v0.13.3-rc0 just dropped — and Mac users, this one’s for you!

    Fixed a nasty Metal backend crash with Qwen2.5-VL during `argsort` ops — multimodal inference is now stable on Apple Silicon. 🍎🧠 No more mid-inference bailouts when describing images!

    Also tucked in:

    • Smoother vision-language pipeline performance
    • Tiny tensor handling optimizations under the hood

    No breaking changes — just cleaner, more reliable multimodal runs.

Pro tip: Try `ollama run qwen2.5-vl "Describe this image"` and watch it actually finish this time 😉
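
    For scripting the same thing, Ollama's REST API (`POST /api/generate` on the default `localhost:11434`) accepts base64-encoded images. A minimal sketch of building that request body (the model tag follows the name used in this note):

    ```python
    import base64
    import json

    def build_generate_request(model, prompt, image_bytes):
        """Build the JSON body for Ollama's POST /api/generate endpoint;
        images travel as base64 strings in the "images" list."""
        return {
            "model": model,
            "prompt": prompt,
            "images": [base64.b64encode(image_bytes).decode("ascii")],
            "stream": False,
        }

    # This body would be POSTed to http://localhost:11434/api/generate
    payload = build_generate_request("qwen2.5-vl", "Describe this image", b"\x89PNG...")
    body = json.dumps(payload)
    ```

    With `"stream": False` the server returns one complete JSON response instead of a token stream.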

    🔗 View Release

  • MLX-LM – test_data

    🚀 MLX LM just dropped 8 new optimized LLMs—fully tuned for Apple Silicon!

    Say hello to:

    • Qwen1.5-0.5B-Chat
    • Mistral-7B-v0.2 & v0.3
    • DeepSeek-Coder-V2-Lite-Instruct (MLX-native 🎯)
    • Phi-3.5-mini-instruct
    • Llama-3.2-1B-Instruct
    • Falcon3-7B-Instruct
    • Qwen3-4B

    ✅ All 4-bit quantized. ✅ Only `.safetensors`, tokenizer, and Jinja templates—zero bloat.

    ✅ New lean download for Qwen1.5-0.5B: just the model weights.

    ✅ Zipped and ready to drop into your MLX pipeline.
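
    You can check the "zero bloat" promise yourself by inspecting the archive with Python's stdlib `zipfile` (the allowed suffixes here are an assumption based on the list above):

    ```python
    import io
    import zipfile

    # Assumed artifact types: weights, tokenizer/config JSON, Jinja templates
    ALLOWED_SUFFIXES = (".safetensors", ".json", ".jinja")

    def bundle_is_lean(zip_bytes):
        """True if every file in the archive is model weights, tokenizer
        JSON, or a Jinja chat template."""
        with zipfile.ZipFile(io.BytesIO(zip_bytes)) as zf:
            return all(
                name.endswith(ALLOWED_SUFFIXES)
                for name in zf.namelist()
                if not name.endswith("/")  # skip directory entries
            )
    ```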

No discrete GPU? No problem. M-series chips are now LLM powerhouses.

    Grab `test_data.zip` and start whispering to LLMs at near-native speed. 🍏⚡

    🔗 View Release

  • Ollama – v0.13.2

    Ollama v0.13.2 just landed — tiny patch, big win for docs! 🛠️

    ✅ Fixed a broken link in the README’s “Community Integrations” section — that sneaky “Swollama” typo is finally gone.

Now you can click through to Swollama (the Swift client library for Ollama) without hitting a 404. Perfect for folks building local LLM interfaces on Apple platforms.

    Clean docs = smoother tinkering. Keep those models rolling! 💡🧠

    🔗 View Release

  • Crankboy App – v1.1.0

    🚀 CrankBoy v1.1.0 just landed on Playdate — and it’s not just an update, it’s a full GB nostalgia upgrade!

    Unified file system: All ROMs & covers now auto-migrate to `/Shared/Emulation/gb` — no more folder chaos.
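
    Conceptually, the migration is just "move known file types into the shared tree once." An illustrative Python sketch (CrankBoy does this on-device, so the paths and extensions here are assumptions, not its actual code):

    ```python
    from pathlib import Path
    import shutil

    def migrate_roms(old_dir, shared_root):
        """Move ROMs and cover art from a legacy folder into a unified
        <shared_root>/Emulation/gb layout, skipping already-migrated files."""
        target = Path(shared_root) / "Emulation" / "gb"
        target.mkdir(parents=True, exist_ok=True)
        moved = []
        for src in Path(old_dir).glob("*"):
            if src.suffix.lower() in (".gb", ".gbc", ".png"):
                dest = target / src.name
                if not dest.exists():
                    shutil.move(str(src), dest)
                    moved.append(src.name)
        return sorted(moved)
    ```

    Skipping files that already exist at the destination is what makes re-running the migration safe.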

    🔊 Audio overhaul: Smoother, more accurate sound — that iconic Game Boy chime finally sounds right.

    🎨 Visual polish: Ghost frames + frame blending = buttery-smooth pixel motion. Your 8-bit dreams are now HD-ready.

    📥 In-app downloads: Fetch ROM hacks and patches directly from the emulator. No PC needed.

    💾 Save states that actually work: Now emulates cartridge memory like Pokémon — save anywhere, no more gym panic.

    🕹️ Crank customization: Tweak sensitivity and behavior to match your playstyle.

    🧩 Scripting support: Alleyway (beta) and Link’s Awakening fishing? Scriptable. Castlevania 2: Belmont’s Revenge? Fully playable now.

    ⚠️ Smart save warnings: No more accidental overwrites — we’ve got your back (and your 1998 save files).

    Full changelog on Patreon — but you’ll feel the magic the second you crank it up. Go play. Then come back and scream about how good it feels. 🕹️💙

    🔗 View Release

  • Crankboy App – v1.1.1

    CrankBoy v1.1.1 just landed — and it’s the quiet hero your Playdate’s been waiting for 🎮💙

    Fixed a nasty startup crash on older Linux distros (Ubuntu 20.04, we see you). No more “why won’t it launch?!” — just instant GB/GBC nostalgia.

    Under the hood:

    • Smoother button responses with subtle UI polish
    • Security deps updated (your ROMs stay safe, no funny business 😉)
    • Better error logs = faster fixes thanks to sharp-eyed community reports

    No flashy features — just a rock-solid, buttery-smooth emulator so you can focus on the real magic: pixel-perfect gameplay and chiptune battles.

    Grab it. Boot up your favorite game. And let the retro vibes roll. 🕹️

    🔗 View Release

  • Text Generation Webui – v3.20

    🎨 Image Generation is LIVE in Text-Generation-WebUI v3.20!

Now generate images right inside your LLM UI with `diffusers`: Z-Image-Turbo supported, 4-bit/8-bit quantization, `torch.compile` optimization, and PNGs that auto-stash your generation params. Gallery? Check. Live progress bar? Yep. OpenAI-compatible image API? Absolutely 🤖✨
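
    Because the image API is OpenAI-compatible, any OpenAI-style client can hit the `/v1/images/generations` route. A minimal stdlib sketch (the local port is an assumption; check your server settings):

    ```python
    import json
    import urllib.request

    def image_request(base_url, prompt, size="1024x1024"):
        """Build an OpenAI-style image generation request targeting the
        standard /v1/images/generations path."""
        body = json.dumps({"prompt": prompt, "n": 1, "size": size}).encode()
        return urllib.request.Request(
            f"{base_url}/v1/images/generations",
            data=body,
            headers={"Content-Type": "application/json"},
            method="POST",
        )

    # Port 5000 is assumed here; point this at wherever your API server listens.
    req = image_request("http://127.0.0.1:5000", "a neon potato mascot")
    ```

    Sending `req` with `urllib.request.urlopen` (or swapping in an OpenAI SDK with a custom base URL) returns the generated image in the usual OpenAI response shape.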

    Faster text gen too!

    `flash_attention_2` is now ON by default for Transformers models — smoother, quicker responses.

    📦 Smaller Linux CUDA builds — download faster, run just as hard.

    🔧 llama.cpp updated to latest (0a540f9) + ExLlamaV3 v0.0.17 for better inference stability and speed.

    🖼️ Prompt magic upgrade!

    Pass `bos_token` and `eos_token` directly into Jinja2 templates — perfect for Seed-OSS-36B-Instruct and similar models.

    🚀 Portable builds now include:

    • NVIDIA: `cuda12.4`
    • AMD/Intel: `vulkan`
    • CPU only: `cpu`
    • Mac (Apple Silicon): `macos-arm64`

    💾 Updating? Just replace the app — keep your `user_data/` folder and all your models, LoRAs, and settings intact.

    Go make art. Or let the AI do it for you. 😎🖼️

    🔗 View Release

  • Ollama – v0.13.2-rc2: ggml: handle all streams (#13350)

    🚀 Ollama v0.13.2-rc2 just dropped — and it’s a quiet win for stability!

    The big fix? ggml now handles all GPU/CPU streams properly. No more leaked buffers or misaligned memory. Think of it as finally tidying up your AI workshop so every tensor has its place.

    ✨ Why you’ll care:

    • Smoother inference on multi-GPU setups
    • Fewer crashes during heavy async loads
    • Better memory cleanup = longer, happier sessions

    If you’ve been battling weird memory hiccups with Llama 3 or DeepSeek-R1 on Linux/macOS/Windows — this is your upgrade. Quiet change, huge impact. 💨

    Upgrade now and run like a champ.

    🔗 View Release

  • Lemonade – v9.0.8

    🚀 Lemonade v9.0.8 just dropped — and it’s a game-changer for local LLM folks!

    • FLM server hostname? Now configurable. No more fighting hardcoded defaults — deploy how you want. 🎯
    • Override `llama-server` path via env vars — perfect for custom builds, containers, or weird dev setups. 🛠️
    • CPU backend is LIVE! Run LLMs on CPU without GPU — ideal for dev, testing, or low-power machines. 🖥️
    • Debate Arena v2 is here! Smarter, smoother multi-model debates with better eval — test personalities like a pro. 💬🧠
    • Huge props to @bitgamm for their first contribution — welcome to the crew! 👏
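
    The env-var override pattern looks roughly like this (the variable name and default path are illustrative, not Lemonade's documented ones):

    ```python
    import os

    # Hypothetical bundled default; not Lemonade's real install path
    DEFAULT_LLAMA_SERVER = "/opt/lemonade/bin/llama-server"

    def resolve_llama_server(env=None):
        """Pick the llama-server binary: an env-var override wins,
        otherwise fall back to the bundled default. LEMONADE_LLAMA_SERVER
        is an assumed name; check Lemonade's docs for the real one."""
        env = os.environ if env is None else env
        return env.get("LEMONADE_LLAMA_SERVER", DEFAULT_LLAMA_SERVER)
    ```

    Passing the environment in as a mapping keeps the lookup testable; in production you would just call it with no arguments.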

    GGUF + ONNX? Check. OpenAI API compat? Check. Windows & Linux? Double check.

    Time to spin up your next local LLM experiment — faster, freer, and more flexible than ever. 🚀

    🔗 View Release