• Ollama – v0.14.2

    🚀 Ollama v0.14.2 just dropped — tiny update, huge impact for AI agents!

    Fixed a sneaky bug in `ToolCallFunctionArguments` so nested JSON function calls no longer crash mid-execution. 🛠️

    Now your LangChain agents, custom tools, and multi-step workflows run smoother than ever.
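
    For instance, a tool whose arguments carry nested JSON (the shape that could previously crash mid-execution) can be exercised like this. A minimal sketch against the local `/api/chat` endpoint; the `get_weather` tool, its schema, and the model name are all illustrative:

    ```python
    import json
    import requests

    # A function tool whose arguments contain a nested object - the case
    # the ToolCallFunctionArguments fix addresses. Names are illustrative.
    tools = [{
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Look up the weather for a location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {  # nested JSON argument
                        "type": "object",
                        "properties": {
                            "city": {"type": "string"},
                            "country": {"type": "string"},
                        },
                    },
                },
                "required": ["location"],
            },
        },
    }]

    resp = requests.post("http://localhost:11434/api/chat", json={
        "model": "llama3.1",
        "messages": [{"role": "user", "content": "Weather in Paris, France?"}],
        "tools": tools,
        "stream": False,
    })
    resp.raise_for_status()

    # With the fix, nested arguments come back as clean structured JSON.
    for call in resp.json()["message"].get("tool_calls", []):
        print(call["function"]["name"], json.dumps(call["function"]["arguments"]))
    ```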

    No breaking changes — just quiet, reliable stability for builders who rely on function calling.

    If you’re chaining tools or automating LLM workflows, this is the update that keeps your agents from falling apart.

    Upgrade and keep building! 🤖✨

    🔗 View Release

  • Ollama – v0.14.2-rc1: openai: tweak v1/responses to conform better (#13736)

    🚀 Ollama v0.14.2-rc1 just dropped — and it’s making your OpenAI API integrations smoother than ever!

    ✅ `/v1/responses` now conforms more closely to OpenAI’s structure — no more weird response quirks. Your existing code? Just works.

    🖼️ Bad image URLs? Say goodbye to cryptic errors — now you’ll get clear, helpful feedback.

    🧹 Under-the-hood linting fixes = cleaner code, fewer headaches.

    Perfect for devs using Ollama as a drop-in OpenAI replacement — whether you’re running Llama 3, Mistral, or Phi-4 locally.
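
    Want to poke at it? A minimal sketch: point the official OpenAI Python client at a local Ollama server and hit the Responses API (assumes an openai SDK version that ships `client.responses`; the model name is illustrative):

    ```python
    from openai import OpenAI

    # Point the official OpenAI client at a local Ollama server.
    # The api_key is required by the client but ignored by Ollama.
    client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

    # Exercise the /v1/responses endpoint this RC brings closer to
    # OpenAI's structure.
    response = client.responses.create(
        model="llama3",
        input="Say hello in one short sentence.",
    )
    print(response.output_text)
    ```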

    Keep those models humming and your APIs clean. 🛠️✨

    🔗 View Release

  • Ollama – v0.14.2-rc0

    Hey AI tinkerers! 🚀 Ollama v0.14.2-rc0 just landed — and Mac users with Apple Silicon are in for a treat! 🍏

    MLX build instructions added to the README — now you can compile Ollama natively on M1/M2/M3 chips, bypassing Docker and getting faster, leaner local LLM inference.

    MLX = Apple’s ML framework (think PyTorch, but built for M-series chips and their unified memory). No discrete GPU? Still rockin’ Llama 3, DeepSeek-R1, or Mistral — just smoother and snappier.

    ⚠️ Still a release candidate, so keep an eye out for final tweaks — but if you’re tinkering on Mac? This is your golden ticket. 🎯

    Linux & Windows folks — your Ollama magic stays untouched, no worries! 💻🛠️

    🔗 View Release

  • ComfyUI – v0.9.2

    ComfyUI v0.9.2 just dropped — and it’s a quiet powerhouse 🚀

    • New Node: `ImageScaleToTotalPixels` — Scale images by total pixel count, not just W/H. Say goodbye to inconsistent outputs across models! (Quick sketch after this list.)
    • Latent Upscale Boost — Sharper, faster upscaling. No more mushy details.
    • Custom Node Fixes — Your favorite third-party nodes? They work again. No more “why’s it broken?!” headaches.
    • Memory Optimized — Smoother runs on low-end GPUs. Less crashing, more generating.
    • UI Polish — Cleaner labels + better drag-and-drop feel. Tiny change, huge workflow win.
    • Missing Model Crash Fixed — No more “where’d my model go?!” panic when loading old workflows.
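
    The first item is easiest to see in numbers. A rough Python sketch of the idea (not the node’s actual code; treating one “megapixel” as 1024×1024 pixels is an assumption here):

    ```python
    import math

    def scale_to_total_pixels(w: int, h: int, megapixels: float) -> tuple[int, int]:
        # Keep the aspect ratio while hitting a target total-pixel budget.
        target = megapixels * 1024 * 1024  # assumed definition of "megapixel"
        scale = math.sqrt(target / (w * h))
        return round(w * scale), round(h * scale)

    # Differently shaped inputs land on the same pixel budget:
    print(scale_to_total_pixels(512, 768, 1.0))    # portrait   -> (836, 1254)
    print(scale_to_total_pixels(1920, 1080, 1.0))  # widescreen -> (1365, 768)
    ```

    Both calls land within a hair of 1,048,576 total pixels, which is exactly why outputs stay consistent no matter the input shape.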

    If you’re upscaling, using custom nodes, or just want a more stable ride — update now. 🎨✨

    https://www.comfy.org/

    🔗 View Release

  • Ollama – v0.14.1: scripts: fix macOS auto-update signature verification failure (#13713)

    Ollama v0.14.1 just dropped — no flashy new models, but a silent hero fix for Mac users 🍎✨

    Turns out, those sneaky `._mlx.metallib` files (AppleDouble sidecars macOS creates to stash resource-fork metadata) were wrecking auto-updates by breaking code signature validation.

    The fix? Simple but slick: `ditto --norsrc` now strips out all those `._*` junk files before zipping the macOS release.

    Result? Smoother, trustworthy auto-updates — no more “invalid signature” headaches.
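
    Curious whether an archive is clean? A quick illustrative check in Python (the archive name is hypothetical):

    ```python
    import zipfile

    def appledouble_entries(path: str) -> list[str]:
        # List AppleDouble "._*" sidecar entries - the files that were
        # breaking signature validation before ditto --norsrc started
        # stripping them from the macOS release zip.
        with zipfile.ZipFile(path) as zf:
            return [n for n in zf.namelist()
                    if n.rsplit("/", 1)[-1].startswith("._")]

    print(appledouble_entries("Ollama-darwin.zip"))  # hypothetical name
    ```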

    If you’re on Mac and Ollama’s been acting up during updates… this is your quiet win. 🚀

    No new features. Just a cleaner, more reliable experience. Perfect for devs who’d rather code than debug installer ghosts.

    🔗 View Release

  • Ollama – v0.14.0

    Ollama v0.14.0 is live 🚀 and Mac users, this one’s for you!

    Apple Silicon just got a whole lot smoother — OpenBLAS is now bundled with the MLX backend. No more `brew install openblas` headaches. Just install, pull your favorite model (Llama 3? Mistral?), and go — faster inference, zero config.

    ✨ New in v0.14.0:

    • ✅ OpenBLAS built right into MLX — seamless setup on M-series chips
    • 🚀 Speed boost for local inference (yes, really)
    • 🔧 Cleaner dev experience for MLX-powered models

    Whether you’re running fine-tuned LLMs or just tinkering, Ollama keeps getting better at making local AI feel like magic.

    `ollama pull llama3` — and let the local AI party begin 🤖💻
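
    And once the model is pulled, a one-call smoke test from Python (a sketch assuming the official `ollama` client package, installed via `pip install ollama`):

    ```python
    import ollama  # assumes the official Python client

    # Zero-config local inference once the model is pulled.
    reply = ollama.chat(
        model="llama3",
        messages=[{"role": "user", "content": "In one sentence: why is the sky blue?"}],
    )
    print(reply["message"]["content"])
    ```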

    🔗 View Release

  • Ollama – v0.14.0-rc11

    🚀 Ollama v0.14.0-rc11 just dropped—and Apple Silicon users, this one’s for you! 🍏⚡

    MLX now ships with OpenBLAS built-in, so inference on M-series Macs is smoother, faster, and actually plug-and-play. No more dependency hell—just `ollama run llama3` and go.

    Also in this build:

    • Smaller, leaner macOS packages
    • Fewer “why isn’t this working?” crashes
    • Stability creeping toward final release

    Perfect for devs running local LLMs on Mac—quietly powerful, seriously convenient. 🛠️💻

    Final v0.14 is coming… and it’s gonna be good.

    🔗 View Release

  • Ollama – v0.14.0-rc10

    🚀 Ollama v0.14.0-rc10 just dropped — and it’s a quiet powerhouse for GPU users!

    CUDA library deduplication is now live 🎯

    No more bloated binaries. No more waiting for massive .tar.gz files to unpack.

    NVIDIA GPU folks on Linux/Windows: your SSDs will thank you.

    Clean, fast, efficient — this is the kind of under-the-hood polish that makes local LLMs feel seamless.

    No flashy new models this round… but the foundation just got stronger.

    v0.14.0 is almost here — keep those GPUs warm! 🔥

    🔗 View Release

  • Ollama – v0.14.0-rc9

    🚀 Ollama v0.14.0-rc9 just dropped — and it’s all about silent power for Apple Silicon users! 🍏💻

    The big fix? MLX components are now actually included in the macOS build. No more missing pieces — if you’re running Llama 3, Gemma, or Mistral on your M-series Mac, inference is smoother, faster, and fully optimized.

    No flashy new features this round — just clean, reliable polish.

    This is the quiet before the storm: v0.14 is so close, and RC9 is your green light to update and test.

    Perfect for devs who want rock-solid local LLMs before the big launch.

    Update now — your M-chip will thank you. 🛠️

    🔗 View Release

  • ComfyUI – v0.9.1

    ComfyUI v0.9.1 is live — quiet update, massive quality-of-life wins! 🛠️🎨

    • Fixed pesky node crashes (image loading & custom nodes — no more mid-generate hangs!)
    • Smoother memory use on low-end GPUs — big win for folks running heavy workflows
    • Error messages now actually tell you why something failed (RIP “it broke”)
    • UI got a polish: cleaner labels + snappier canvas panning

    And quietly… Apple Silicon M-series support is now partially live in the standalone Mac build! 🍏 Try it out and drop feedback — they’re listening.

    No flashy features… but if you’ve been battling instability, this is your upgrade. Update and get back to creating! 🚀

    🔗 View Release