Author: Tater Totterson

  • ComfyUI – v0.3.68

    ComfyUI v0.3.68 is live - quiet updates, massive stability wins! 🛠️✨

    • Fixed a nasty crash when custom nodes had missing deps - no more sudden workflow deaths.
    • Node loading is now more resilient: one misbehaving node won't bring down your whole pipeline.
    • Smoother UI: tooltips land in the right place, and the canvas renders faster even on massive setups.
    • Under-the-hood dependency updates to squash security alerts - clean, quiet, and secure.

    If you run custom nodes or heavy workflows, update now. The best releases are the ones that just… work. 😌
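    The resilience idea above - loading each custom node in isolation so one failure can't take down the rest - can be sketched like this (a minimal illustration with hypothetical names, not ComfyUI's actual loader):

```python
import importlib
import traceback

def load_custom_nodes(module_names):
    """Import each custom node module independently so one failure
    (missing deps, syntax errors) doesn't abort the whole pipeline.
    Hypothetical helper - ComfyUI's real loader differs."""
    loaded, failed = {}, {}
    for name in module_names:
        try:
            loaded[name] = importlib.import_module(name)
        except Exception as exc:
            failed[name] = exc          # record and keep going
            traceback.print_exc()
    return loaded, failed

# "json" imports fine; the second name simulates a node with missing deps
loaded, failed = load_custom_nodes(["json", "definitely_missing_dep_xyz"])
```

    The key design choice is catching per-module, not around the whole loop, so every healthy node still loads.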

    🔗 View Release

  • Ollama – v0.12.9

    💥 Ollama v0.12.9 just dropped - and it's a game changer for CPU-only users!

    No more sluggish LLM inference on your old laptop or cloud instances. This update slays the performance regression that's been holding back CPU-based runs.

    ✅ Snappier responses

    ✅ Smoother local workflows

    ✅ Full GGUF + Llama 3, DeepSeek-R1, Phi-4, Mistral support intact

    Perfect for devs prototyping on bare metal or running lightweight models without a GPU. No flashy features - just pure, quiet speed gains. 🚀

    Check the changelog - this one's a hero update you'll feel in every token.

    🔗 View Release

  • Ollama – v0.12.9-rc0: ggml: Avoid cudaMemsetAsync during memory fitting

    🚀 Ollama v0.12.9-rc0 just dropped - and it's a quiet hero for GPU warriors!

    The secret sauce? `ggml` now skips `cudaMemsetAsync` during memory fitting when it hits invalid pointers.

    💡 Why it rocks:

    • No more crashes when checking if your 70B model fits on a 24GB GPU
    • Smoother `op_offload` workflows - no more CUDA tantrums during sizing checks
    • Faster, more stable memory estimation under pressure

    Think of it like silencing a false alarm before you pack your suitcase - no noise, just better packing.
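    The "packing" idea boils down to pure arithmetic: decide whether the model fits without ever writing to (or memsetting) candidate GPU memory. A minimal sketch - hypothetical names and a made-up reserve value, not ggml's real fitting logic:

```python
def fits_in_vram(layer_bytes, vram_bytes, reserve_bytes=512 * 1024**2):
    """Sizing check done with arithmetic only: it never touches the
    candidate allocation, so an invalid pointer can't crash the probe.
    The fixed reserve for scratch buffers is illustrative."""
    needed = sum(layer_bytes) + reserve_bytes
    return needed <= vram_bytes

# Roughly a 70B model in 4-bit (~35 GiB of weights) vs. a 24 GiB card
print(fits_in_vram([35 * 1024**3], 24 * 1024**3))  # prints False: offload instead
```

    Because the check is side-effect free, it can be run repeatedly while searching for the best layer split.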

    Perfect for folks running Llama 3, DeepSeek-R1, or Mistral on edge GPUs. No reinstall needed - just update and let Ollama handle the heavy lifting. 🤖⚡

    🔗 View Release

  • Ollama – v0.12.8: win: avoid ID mixups on refresh (#12869)

    Ollama v0.12.8 just dropped - and Windows AMD GPU users, this is your win! 🎯

    The fix? No more sneaky GPU ID mixups. On Windows, AMD's device IDs would shuffle during refreshes, causing Ollama to accidentally pick your integrated iGPU instead of your powerful discrete RX card. 😱

    Now? Ollama respects GPU filters, ignores unsupported iGPUs entirely, and gives your discrete GPU the spotlight it deserves.

    ✅ No more misattributed VRAM

    ✅ Clean, accurate GPU detection on AMD Windows rigs

    ✅ Inference finally runs where it should - on your real GPU

    Update now and stop letting your laptop's integrated graphics do all the work. 🚀

    🔗 View Release

  • Ollama – v0.12.8-rc0: win: avoid ID mixups on refresh (#12869)

    🚀 Ollama v0.12.8-rc0 just dropped - and Windows AMD users, this one's for YOU!

    If you've been battling “out of memory” errors or weird VRAM stats after a driver update or display change, you're not alone. Ollama now filters out integrated GPUs during device detection, so it stops misassigning your dGPU's VRAM to your iGPU. 💥

    ✅ What's new?

    • Windows-only fix: Stops GPU ID shuffle chaos on AMD systems
    • Ignores iGPUs - only your real Radeon/Ryzen GPU gets the workload
    • No more mystery crashes. Just clean, stable LLM inference.

    Perfect for Ryzen + Radeon folks running Llama 3 or DeepSeek-R1 locally. Upgrade now - your VRAM will thank you. 🛠️
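    The core of the fix - keying GPUs by a stable identifier and filtering out integrated parts - can be sketched like this (illustrative Python with hypothetical names and example devices, not Ollama's actual Go detection code):

```python
from dataclasses import dataclass

@dataclass
class GpuInfo:
    stable_id: str      # e.g. a PCI bus address, which survives refreshes
    name: str
    integrated: bool
    vram_bytes: int

def pick_discrete_gpus(devices):
    """Keep only discrete GPUs, keyed by a stable ID instead of the
    enumeration index (which Windows may shuffle on refresh)."""
    return {d.stable_id: d for d in devices if not d.integrated}

devices = [
    GpuInfo("pci:03:00.0", "Radeon RX 7900 XTX", False, 24 * 1024**3),
    GpuInfo("pci:00:02.0", "Radeon 780M (iGPU)", True, 512 * 1024**2),
]
selected = pick_discrete_gpus(devices)   # only the dGPU survives
```

    Keying on a stable ID is what prevents VRAM stats from being attributed to the wrong device after a refresh.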

    🔗 View Release

  • Ollama – v0.12.7: int: harden server lifecycle (#12835)

    🚀 Ollama v0.12.7 just dropped - and it's the quiet hero your dev environment didn't know it needed.

    This patch (#12835) locks down the server lifecycle like a vault:

    • 🚫 No more zombie `ollama` processes haunting your RAM after shutdowns
    • 💥 Cleaner exits when the server crashes or gets killed
    • 🧹 Smarter resource cleanup on Linux, macOS, and Windows

    Perfect for CI/CD pipelines, automated tests, or anyone who's ever stared at Task Manager wondering why Ollama won't die.

    No flashy new models… just rock-solid infrastructure that works when it matters most.

    Your 2am deploy will thank you. 🛡️💻
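    The general shape of lifecycle hardening - always reap the child process, escalating from a polite terminate to a hard kill - looks roughly like this (a generic pattern in Python, not Ollama's actual Go implementation):

```python
import subprocess
import sys

def run_with_cleanup(cmd, timeout=5.0):
    """Start a child process and guarantee it is reaped before we
    return: terminate, wait, and escalate to kill if it ignores the
    signal. This is what prevents zombie processes after shutdown."""
    proc = subprocess.Popen(cmd)
    try:
        proc.terminate()                  # polite: SIGTERM / TerminateProcess
        proc.wait(timeout=timeout)        # reap, so no zombie is left behind
    except subprocess.TimeoutExpired:
        proc.kill()                       # escalate: SIGKILL
        proc.wait()
    return proc.returncode

# A child that would linger for a minute if nobody cleaned it up
rc = run_with_cleanup([sys.executable, "-c", "import time; time.sleep(60)"])
```

    The `wait()` after every signal is the important part: signalling without reaping is exactly how zombies are made.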

    🔗 View Release

  • Lemonade – v8.2.0

    🚀 Lemonade v8.2.0 just dropped - and it's a massive leap for local LLM lovers!

    ✅ Ryzen AI SW 1.6 support - Run Qwen3 with 4K prompts using hybrid NPU/GPU magic on AMD Ryzen. Faster inference, lower power, zero cloud dependency. 💥

    📥 Load ANY model - Hugging Face? Local folder? Drag & drop it in. No more conversion headaches. Just point and run.

    ✨ UI got a glow-up:

    • Upload models directly from the web interface - no CLI required!
    • Smoother, smarter polling = fewer annoying refreshes
    • Suggested=false models? Gone. Clean recommendations only.
    • RAI/FLM models auto-hide on unsupported OSes - no more confusion
    • Linux? Fallbacks now work even if FLM isn't installed

    🔧 Under the hood:

    • macOS port conflicts? Fixed. 🍎
    • CI/CD actually works now (no more silent crashes!)
    • Docs updated with Dify & Copilot integrations 📚
    • New Log Filter Extension for crystal-clear debugging 🔍

    Big shoutout to first-time contributors @HyunhoAhn and @meghsat - welcome to the crew! 👏

    Upgrade. Tinker. Crush your next local LLM benchmark.

    🔗 Full changelog: v8.1.12…v8.2.0

    🔗 View Release

  • Ollama – v0.12.7-rc1

    Hey AI tinkerers! 🚀

    Ollama just dropped v0.12.7-rc1 - quiet release, big impact.

    ✅ Fixed `conv2d` bias calculation (PR #12834)

    If you're running vision models like LLaVA or Phi-Vision locally, this patch ensures their convolutional layers calculate biases correctly. No more subtle accuracy drifts in image outputs.
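    For intuition, here is what a correct conv2d bias looks like: one bias value per output channel, added exactly once per output position. A tiny NumPy reference sketch of that invariant, not ggml's implementation:

```python
import numpy as np

def conv2d_with_bias(x, w, b):
    """Minimal valid-padding conv2d.
    Shapes: x (H, W, Cin), w (kH, kW, Cin, Cout), b (Cout,).
    The bias is per OUTPUT channel, added once per output pixel."""
    kh, kw, cin, cout = w.shape
    H, W = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.zeros((H, W, cout))
    for i in range(H):
        for j in range(W):
            patch = x[i:i+kh, j:j+kw, :]                 # (kH, kW, Cin)
            out[i, j] = np.tensordot(patch, w, axes=3) + b
    return out

# With zero weights, every output position should be exactly the bias
out = conv2d_with_bias(np.zeros((4, 4, 2)), np.zeros((3, 3, 2, 5)), np.arange(5.0))
```

    A drift bug typically shows up as the bias being scaled, skipped, or applied per input channel; the zero-weight check above catches all three.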

    No flashy new models or UI tweaks this time - just clean, reliable math under the hood. Perfect for devs who need stable inference with image-capable LLMs.

    Pro tip: If you're fine-tuning or deploying vision models via Ollama, upgrade now. Precision matters. 📸🧠

    🔗 View Release

  • Ollama – v0.12.7-rc0

    🚀 Ollama v0.12.7-rc0 just landed - and it's a game-changer for local multimodal AI!

    Say hello to Qwen3-VL - Alibaba's powerful new vision-language model, now fully supported in Ollama. Run image + text understanding locally: analyze photos, scan docs, or ask “what's in this picture?” - zero cloud required. 📸🧠

    ✨ Also new:

    • Faster model loads on ARM64 (M-series Macs, Raspberry Pi 5)
    • Smarter GPU memory - fewer OOM crashes with multi-image prompts
    • CLI fixes on Windows: `ollama run` is now more stable

    Grab it with:

    `ollama run qwen3vl`

    Still in RC - stable drop coming soon. Time to go offline with vision? ✅
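    If you drive Ollama over its HTTP API instead of the CLI, image prompts go in the documented `images` field of `/api/generate` as base64-encoded strings. A small sketch (model tag taken from the release notes above; the endpoint assumes a default local install):

```python
import base64

def vision_payload(model, prompt, image_path):
    """Build the JSON body for Ollama's /api/generate endpoint.
    The API accepts base64-encoded images in an "images" list."""
    with open(image_path, "rb") as f:
        img = base64.b64encode(f.read()).decode("ascii")
    return {"model": model, "prompt": prompt, "images": [img], "stream": False}

# Hypothetical usage once the server is running locally:
# body = vision_payload("qwen3vl", "What's in this picture?", "photo.png")
# requests.post("http://localhost:11434/api/generate", json=body)
```

    Setting `"stream": False` returns one JSON object instead of a line-delimited stream, which is simpler for scripting.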

    🔗 View Release

  • ComfyUI – v0.3.67

    🚀 ComfyUI v0.3.67 just dropped - and it's a quiet powerhouse!

    • New `LatentUpscale` node → Fine-tune upscaling with interpolation & sharpening controls. Say goodbye to bloated memory usage in high-res workflows.
    • Negative prompt bleed FIXED → Finally, clean conditioning. No more sneaky negative prompts muddying your positives.
    • WebUI snappier than ever → Dragging nodes in massive workflows? Smooth as butter now.
    • SD3.5 Turbo support → Early access for custom node devs - ComfyUI's ahead of the curve again.
    • macOS PNG fix → No more corrupted metadata. Your exports are safe now. 🎉
    • UI polish → Better labels, smarter tooltips, and a new `Ctrl+Shift+S` shortcut for Quick Save.

    If you're running SDXL or prepping for SD3, this update is your new secret weapon. Update now and feel the difference! 🛠️✨
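    For intuition on what a latent upscale node does: a latent is just a (channels, height, width) tensor, and upscaling it is resampling. A toy NumPy sketch with nearest-neighbour (the simplest mode) - illustrative only, not ComfyUI's actual `LatentUpscale` implementation:

```python
import numpy as np

def upscale_latent(latent, scale, mode="nearest"):
    """Upscale a (C, H, W) latent by an integer factor.
    Nearest-neighbour shown for brevity; a real node would also offer
    bilinear/bicubic interpolation and an optional sharpening pass."""
    if mode != "nearest":
        raise NotImplementedError("sketch supports nearest only")
    # Duplicate each row and column `scale` times
    return latent.repeat(scale, axis=1).repeat(scale, axis=2)

lat = np.random.randn(4, 64, 64).astype(np.float32)   # SD-style 4-channel latent
up = upscale_latent(lat, 2)                            # (4, 128, 128)
```

    Working in latent space keeps the tensor tiny compared to pixel space (4×128×128 vs. 3×1024×1024), which is where the memory savings come from.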

    🔗 View Release