• ComfyUI – v0.9.1

    ComfyUI v0.9.1 is live — quiet update, massive quality-of-life wins! 🛠️🎨

    • Fixed pesky node crashes (image loading & custom nodes — no more mid-generate hangs!)
    • Smoother memory use on low-end GPUs — big win for folks running heavy workflows
    • Error messages now actually tell you why something failed (RIP “it broke”)
    • UI got a polish: cleaner labels + snappier canvas panning

    And quietly… Apple Silicon M-series support is now partially live in the standalone Mac build! 🍏 Try it out and drop feedback — they’re listening.

    No flashy features… but if you’ve been battling instability, this is your upgrade. Update and get back to creating! 🚀

    🔗 View Release

  • Ollama – v0.14.0-rc8

    🚀 Ollama v0.14.0-rc8 just dropped — and it’s all about speed!

    The image generation CLI now skips local model checks on startup. No more waiting for file validations if you’re running models remotely, in containers, or in CI/CD pipelines (quick sketch below).

    💡 Why you’ll love it:

    • Faster launches in cloud & headless environments
    • Less overhead, more generating
    • Perfect for devs who just want to run models — not debug file systems
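
    For a feel of the remote workflow this speeds up, here’s a minimal sketch (not the official client) that drives a remote Ollama server over its REST API; the `ollama-box` hostname and `llama3` model are placeholders:

    ```python
    import requests

    # Minimal sketch: call a remote Ollama server from a container or CI job.
    # "ollama-box" is a placeholder hostname; 11434 is Ollama's default port.
    resp = requests.post(
        "http://ollama-box:11434/api/generate",
        json={"model": "llama3", "prompt": "Say hello.", "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    print(resp.json()["response"])
    ```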

    Still in RC, but this quiet polish is a game-changer for automation and remote workflows.

    More speed, more models, more magic coming soon… 🤖✨

    🔗 View Release

  • ComfyUI – v0.9.0

    ComfyUI v0.9.0 just landed — and it’s like giving your workflow a turbo boost 🚀

    • Native WebSockets → Real-time node updates & buttery-smooth previews, even on mobile 📱 (see the sketch after this list)
    • Load Image from URL → Drop any public image link in and start generating instantly — no downloads needed
    • Faster Node Search → Fuzzy matching + category filters = find your node in 2 seconds, not 20
    • Python 3.11+ Default → Speedier execution, cleaner installs, fewer dependency headaches
    • GPU Memory Optimized → Longer workflows? Fewer OOM crashes. More renders, less waiting
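
    Curious what that WebSocket feed looks like? Here’s a minimal sketch (assuming a local server on the default port 8188 and the `websocket-client` package) that prints sampler progress as it streams in; exact message fields may vary by version:

    ```python
    import json
    import uuid

    import websocket  # pip install websocket-client

    # Minimal sketch: subscribe to a local ComfyUI server's live event feed.
    # Assumes the default address 127.0.0.1:8188. Ctrl+C to stop.
    client_id = str(uuid.uuid4())
    ws = websocket.WebSocket()
    ws.connect(f"ws://127.0.0.1:8188/ws?clientId={client_id}")

    while True:
        msg = ws.recv()
        if isinstance(msg, str):  # binary frames carry preview image data
            event = json.loads(msg)
            if event.get("type") == "progress":
                data = event["data"]
                print(f"step {data['value']}/{data['max']}")
    ```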

    Plus:

    ✅ Dark Mode toggle (finally!) 🌙

    ✅ 40+ bugs squashed (including the infamous “node disconnects after 5 mins” glitch)

    ✅ New community templates in the launcher — steal, tweak, and share!

    If you’re deep into custom pipelines, this isn’t just an update — it’s a workflow revolution.

    Grab it now → https://www.comfy.org/

    🔗 View Release

  • Ollama – v0.14.0-rc7: scripts: increase notarization timeout to 20m (#13697)

    Ollama v0.14.0-rc7 just dropped—and it’s a quiet win for M-series Mac users 🍏💻

    The big change? The notarization timeout was bumped from 10 to 20 minutes. Why? Because the massive ~100 MB `mlx.metallib` file (used for Apple Silicon ML acceleration) kept pushing notarization past the old limit mid-check. Now? Smooth sailing. No more cryptic timeouts, just seamless installs for your M1/M2/M3 rigs.

    Still notarizing? You’re welcome. 😎

    Perfect for devs who want to run Llama 3, DeepSeek-R1, or Mistral locally without wrestling with macOS security hoops.

    #Ollama #LLMs #AppleSilicon #LocalAI

    🔗 View Release

  • Ollama – v0.14.0-rc6

    Ollama v0.14.0-rc6 is here — quiet update, big win for devs! 🛠️

    The CMake build now keys off `CMAKE_SYSTEM_PROCESSOR` instead of `CMAKE_OSX_ARCHITECTURES`, so building from source on M1/M2 Macs just got way smoother. No more arch-detection headaches — clean, future-proof, and cross-platform ready.

    Still a release candidate, but this is the kind of solid under-the-hood polish that makes local LLM running even more reliable. Model optimizations and API tweaks are rumored to drop in the final v0.14.0 soon.

    Keep your Ollama installs fresh — local AI just got a little more stable. 🚀

    #Ollama #LLM #DevTools

    🔗 View Release

  • Ollama – v0.14.0-rc5

    🚀 Ollama v0.14.0-rc5 just dropped — and macOS users, your LLM game just got a serious upgrade!

    MLX Metal library bundled — Native GPU acceleration on Apple Silicon is now smooth and stable.

    🛠️ rpath fixes — No more “library not found” crashes. Ollama finally feels at home on Mac.

    📦 MLX support added — The foundation’s laid for blazing-fast, native Metal-powered inference on M-series chips.

    This isn’t just a patch — it’s the missing piece Mac users have been waiting for. Cleaner installs. Faster inference. Zero headaches.

    RC5 is likely the final step before v0.14.0 drops… time to update and feel the difference! 🍏💻

    🔗 View Release

  • Ollama – v0.14.0-rc4

    🚀 Ollama v0.14.0-rc4 just dropped — and it’s fixing the annoying MLX build hiccups on macOS & Docker! 🖼️💻

    Been trying to run LLaVA or other vision models on Apple Silicon, only to keep hitting “MLX not found” errors? Say goodbye to that frustration. This patch nails the build scripts so MLX builds reliably — no more wrestling with toolchains.

    ✅ What’s fixed:

    • MLX build scripts now work smoothly on macOS (M-series chips, rejoice!)
    • Dockerfile updated to bundle MLX deps properly for image gen in containers

    No flashy new features — just stable, reliable local image generation. Perfect for devs prepping for v0.14’s full launch. Keep those M-chips humming and start generating again! 🚀

    🔗 View Release

  • Ollama – v0.14.0-rc3

    Ollama v0.14.0-rc3 just landed — and it’s got web smarts! 🌐

    Say goodbye to outdated answers. Now you can:

    • 🔍 Use `--web-search` to let your model hunt down live info on the fly
    • 📄 Use `--web-fetch` to pull content from any URL and feed it straight into your LLM

    Ask “What’s the latest on Mars rover discoveries?” — and Ollama actually checks. No more 2023 brain fog.
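
    From a script, that might look something like this (the exact flag placement and the `llama3` model name are assumptions; double-check `ollama run --help` on rc3):

    ```python
    import subprocess

    # Hedged sketch: flag placement may differ; "llama3" is a placeholder model.
    result = subprocess.run(
        ["ollama", "run", "llama3", "--web-search",
         "What's the latest on Mars rover discoveries?"],
        capture_output=True, text=True, check=True,
    )
    print(result.stdout)
    ```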

    Perfect for RAG pipelines, research bots, or just keeping your AI in the loop.

    Works on macOS, Windows, Linux — same slick CLI you already love.

    Still a release candidate, but this feels like the start of something wild.

    Keep your models curious. 🧠✨

    🔗 View Release

  • Tater – Tater v47

    🥔 Tater v47 just dropped—and it’s alive with smarter voice convos! 🎤

    🎙️ Continued Conversations

    Tater now senses when you speak and automatically reopens the mic after its reply ends—no more “Hey Tater” spam. It waits for silence, avoids cut-offs, and keeps the flow natural.

    🏠 Smart Room Awareness

    Say “turn the lights on” and Tater knows which room you’re in—no device prefixes needed. Works with any Voice PE naming style. Pure magic.

    🧠 Natural Flow, No Repetition

    Your context sticks around during a session. Conversations feel human—no robotic loops, just smooth back-and-forth.

    ⚙️ Under the Hood

    • Tighter idle detection
    • Per-session follow-up limits
    • Polished stability, fewer glitches

    This isn’t just an update—it’s your voice assistant finally getting you.

    Check the README to upgrade!

    🔗 View Release

  • Tater – Tater v46

    🎙️ Tater v46 just dropped — and it’s finally listening like a human.

    No more “in the kitchen, please.” Say “turn on the lights” — and Tater knows you’re in the kitchen. 🏠

    Room-aware voice control? Check. Timers that follow your mic? Check. Audio auto-playing where you spoke? Double check.

    🔥 New in v46:

    • Room-aware voice control — Your device knows where you are. No config needed.
    • Voice PE timers = device-bound — Start a timer in the bathroom? It stays there.
    • Smart media routing — ComfyUI Audio Ace plays on the mic that triggered it.
    • Home Assistant upgrade — Now sends rich device + area context (update your HA agent to unlock it!).
    • Plugins? Still work. Backward-compatible, no drama.
    • Cleaner, faster, more natural — Voice feels less like a bot… and more like your roommate.

    Plug in, speak up, and let Tater handle the rest. 🐔✨

    Check it out: https://github.com/TaterTotterson/Tater

    🔗 View Release