Category: AI

AI Releases

  • Ollama – v0.14.1: scripts: fix macOS auto-update signature verification failure (#13713)

    Ollama v0.14.1 just dropped — no flashy new models, but a silent hero fix for Mac users 🍎✨

    Turns out, those sneaky `._mlx.metallib` companions (the hidden AppleDouble files macOS uses to stash resource-fork metadata) were wrecking auto-updates by breaking code signature validation.

    The fix? Simple but slick: `ditto --norsrc` now strips out all those `._*` junk files before zipping the macOS release.
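
    For the curious, here's a hedged sketch of what that packaging step amounts to; the paths and archive name are placeholders, not the actual release script:

    ```bash
    # Hypothetical packaging step: copy the .app without AppleDouble ("._*")
    # resource-fork companions, then zip the clean copy for the updater.
    # All paths here are placeholders, not Ollama's real build layout.
    ditto --norsrc ./build/Ollama.app ./dist/Ollama.app

    # Sanity check: no "._*" files should survive the copy.
    find ./dist/Ollama.app -name '._*' -print

    # Create the distributable zip, again skipping resource forks.
    ditto -c -k --norsrc ./dist/Ollama.app Ollama-darwin.zip
    ```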

    Result? Smoother, trustworthy auto-updates — no more “invalid signature” headaches.

    If you’re on Mac and Ollama’s been acting up during updates… this is your quiet win. 🚀

    No new features. Just a cleaner, more reliable experience. Perfect for devs who’d rather code than debug installer ghosts.

    🔗 View Release

  • Ollama – v0.14.0

    Ollama v0.14.0 is live 🚀 and Mac users, this one’s for you!

    Apple Silicon just got a whole lot smoother — OpenBLAS is now bundled with the MLX backend. No more `brew install openblas` headaches. Just install, pull your favorite model (Llama 3? Mistral?), and go — faster inference, zero config.

    ✨ New in v0.14.0:

    • ✅ OpenBLAS built right into MLX — seamless setup on M-series chips
    • 🚀 Speed boost for local inference (yes, really)
    • 🔧 Cleaner dev experience for MLX-powered models

    Whether you’re running fine-tuned LLMs or just tinkering, Ollama keeps getting better at making local AI feel like magic.

    `ollama pull llama3` — and let the local AI party begin 🤖💻
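
    Spelled out, the zero-config flow is roughly this; the model name is just an example and any model from the library works:

    ```bash
    # Pull a model and chat with it locally; with v0.14.0 the MLX backend
    # already bundles OpenBLAS, so no separate `brew install openblas` step.
    ollama pull llama3
    ollama run llama3 "Give me a one-line summary of what OpenBLAS does."

    # See which models are installed locally.
    ollama list
    ```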

    🔗 View Release

  • Ollama – v0.14.0-rc11

    🚀 Ollama v0.14.0-rc11 just dropped—and Apple Silicon users, this one’s for you! 🍏⚡

    MLX now ships with OpenBLAS built-in, so inference on M-series Macs is smoother, faster, and actually plug-and-play. No more dependency hell—just `ollama run llama3` and go.

    Also in this build:

    • Smaller, leaner macOS packages
    • Fewer “why isn’t this working?” crashes
    • Stability creeping toward final release

    Perfect for devs running local LLMs on Mac—quietly powerful, seriously convenient. 🛠️💻

    Final v0.14 is coming… and it’s gonna be good.

    🔗 View Release

  • Ollama – v0.14.0-rc10

    🚀 Ollama v0.14.0-rc10 just dropped — and it’s a quiet powerhouse for GPU users!

    CUDA library deduplication is now live 🎯

    No more bloated binaries. No more waiting for massive .tar.gz files to unpack.

    NVIDIA GPU folks on Linux/Windows: your SSDs will thank you.

    Clean, fast, efficient — this is the kind of under-the-hood polish that makes local LLMs feel seamless.
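
    Want to see the slimming for yourself? A rough sketch; the install path is an assumption and depends on how you installed Ollama:

    ```bash
    # Check the on-disk footprint of a Linux install; /usr/local/lib/ollama
    # is an assumed location, so adjust to wherever your tarball unpacked.
    du -sh /usr/local/lib/ollama

    # Look for repeated copies of the same CUDA runtime libraries; after the
    # deduplication there should be far less repetition across backend dirs.
    find /usr/local/lib/ollama \( -name 'libcudart*' -o -name 'libcublas*' \) | sort
    ```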

    No flashy new models this round… but the foundation just got stronger.

    v0.14.0 is almost here — keep those GPUs warm! 🔥

    #Ollama #AI #LLM #GPU #DevTools

    🔗 View Release

  • Ollama – v0.14.0-rc9

    🚀 Ollama v0.14.0-rc9 just dropped — and it’s all about silent power for Apple Silicon users! 🍏💻

    The big fix? MLX components are now actually included in the macOS build. No more missing pieces — if you’re running Llama 3, Gemma, or Mistral on your M-series Mac, inference is smoother, faster, and fully optimized.
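
    Quick way to confirm the pieces actually landed, assuming the default /Applications install (the `*mlx*` pattern is a guess at how the files are named):

    ```bash
    # List anything MLX-related inside the app bundle; /Applications/Ollama.app
    # is the default install location and '*mlx*' is a guessed name pattern.
    find /Applications/Ollama.app -iname '*mlx*' -print
    ```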

    No flashy new features this round — just clean, reliable polish.

    This is the quiet before the storm: v0.14 is so close, and RC9 is your green light to update and test.

    Perfect for devs who want rock-solid local LLMs before the big launch.

    Update now — your M-chip will thank you. 🛠️

    🔗 View Release

  • ComfyUI – v0.9.1

    ComfyUI v0.9.1 is live — quiet update, massive quality-of-life wins! 🛠️🎨

    • Fixed pesky node crashes (image loading & custom nodes — no more mid-generate hangs!)
    • Smoother memory use on low-end GPUs — big win for folks running heavy workflows
    • Error messages now actually tell you why something failed (RIP “it broke”)
    • UI got a polish: cleaner labels + snappier canvas panning

    And quietly… Apple Silicon M-series support is now partially live in the standalone Mac build! 🍏 Try it out and drop feedback — they’re listening.

    No flashy features… but if you’ve been battling instability, this is your upgrade. Update and get back to creating! 🚀

    🔗 View Release

  • Ollama – v0.14.0-rc8

    🚀 Ollama v0.14.0-rc8 just dropped — and it’s all about speed!

    The image generation CLI now skips local model checks on startup. No more waiting for file validations if you’re running models remotely, in containers, or in CI/CD pipelines.

    💡 Why you’ll love it:

    • Faster launches in cloud & headless environments
    • Less overhead, more generating
    • Perfect for devs who just want to run models — not debug file systems
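
    A tiny sketch of the remote/CI angle; the host URL is a placeholder, and `OLLAMA_HOST` is the usual way to point the CLI at a remote server:

    ```bash
    # Point the CLI at a remote Ollama server instead of a local daemon;
    # the URL is a placeholder for wherever your server actually runs.
    export OLLAMA_HOST=http://ollama.internal.example:11434

    # The client just talks to the remote API, so there's no local model
    # directory to scan at startup; handy in containers and CI jobs.
    ollama run llama3 "smoke test: reply with OK"
    ```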

    Still in RC, but this quiet polish is a game-changer for automation and remote workflows.

    More speed, more models, more magic coming soon… 🤖✨

    🔗 View Release

  • ComfyUI – v0.9.0

    ComfyUI v0.9.0 just landed — and it’s like giving your workflow a turbo boost 🚀

    • Native WebSockets → Real-time node updates & buttery-smooth previews, even on mobile 📱
    • Load Image from URL → Drop any public image link in and start generating instantly — no downloads needed
    • Faster Node Search → Fuzzy matching + category filters = find your node in 2 seconds, not 20
    • Python 3.11+ Default → Speedier execution, cleaner installs, fewer dependency headaches
    • GPU Memory Optimized → Longer workflows? Fewer OOM crashes. More renders, less waiting

    Plus:

    ✅ Dark Mode toggle (finally!) 🌙

    ✅ 40+ bugs squashed (including the infamous “node disconnects after 5 mins” glitch)

    ✅ New community templates in the launcher — steal, tweak, and share!

    If you’re deep into custom pipelines, this isn’t just an update — it’s a workflow revolution.

    Grab it now → https://www.comfy.org/
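
    Want to poke at the new plumbing? A small hedged sketch against a locally running instance; 127.0.0.1:8188 is the default address, adjust if you changed it:

    ```bash
    # Confirm the server is up and see which Python it runs on
    # (v0.9.0 defaults to Python 3.11+); 8188 is ComfyUI's default port.
    curl -s http://127.0.0.1:8188/system_stats

    # Real-time updates ride over the WebSocket endpoint at /ws; any
    # WebSocket client can watch execution and progress messages, e.g.:
    # websocat "ws://127.0.0.1:8188/ws?clientId=demo"
    ```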

    🔗 View Release

  • Ollama – v0.14.0-rc7: scripts: increase notarization timeout to 20m (#13697)

    Ollama v0.14.0-rc7 just dropped—and it’s a quiet win for M-series Mac users 🍏💻

    The big change? Notarization timeout bumped from 10 to 20 minutes. Why? Because that hefty ~100 MB `mlx.metallib` file (used for Apple Silicon ML acceleration) was pushing the check past the old limit. Now? Smooth sailing. No more cryptic timeouts, just seamless installs for your M1/M2/M3 rigs.
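
    For context, the knob being turned is roughly the notarization wait below; a hedged sketch rather than the actual release script, with the zip name and keychain profile as placeholders:

    ```bash
    # Submit the zipped app for notarization and wait for Apple's verdict,
    # allowing up to 20 minutes; "AC_PROFILE" and the zip name are placeholders.
    xcrun notarytool submit Ollama-darwin.zip \
      --keychain-profile "AC_PROFILE" \
      --wait --timeout 20m
    ```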

    Still notarizing? You’re welcome. 😎

    Perfect for devs who want to run Llama 3, DeepSeek-R1, or Mistral locally without wrestling with macOS security hoops.

    #Ollama #LLMs #AppleSilicon #LocalAI

    🔗 View Release

  • Ollama – v0.14.0-rc6

    Ollama v0.14.0-rc6 is here — quiet update, big win for devs! 🛠️

    CMake now uses `CMAKE_SYSTEM_PROCESSOR` instead of the deprecated `CMAKE_OSX_ARCHITECTURES`, so building from source on M1/M2 Macs just got way smoother. No more arch detection headaches — clean, future-proof, and cross-platform ready.
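
    If you build from source, the flow is roughly the usual CMake-plus-Go dance; a hedged sketch based on the repo's development docs as I understand them, so double-check against the current README:

    ```bash
    # Configure and build the native backends; on an M-series Mac the target
    # arch is now derived from CMAKE_SYSTEM_PROCESSOR instead of being forced
    # through CMAKE_OSX_ARCHITECTURES.
    git clone https://github.com/ollama/ollama.git && cd ollama
    cmake -B build
    cmake --build build

    # Then run the Go side against the freshly built backends.
    go run . serve
    ```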

    Still a release candidate, but this is the kind of solid under-the-hood polish that makes local LLM running even more reliable. Model optimizations and API tweaks are rumored to drop in the final v0.14.0 soon.

    Keep your Ollama installs fresh — local AI just got a little more stable. 🚀

    #Ollama #LLM #DevTools

    🔗 View Release