Category: AI

AI Releases

  • Ollama – v0.14.0-rc2

    Hey AI tinkerers! Ollama just dropped v0.14.0-rc2: small but tidy.

    • Removed an unused `COPY` instruction from the Dockerfile (#13664), for a cleaner, slightly leaner image build.

    • Same slick local LLM experience you love, just a bit tidier under the hood.

    No new models, no UI tweaks... just pure developer hygiene. Perfect if you like your containers tight and your prompts crisp.

    The final v0.14.0 should be close behind. #Ollama #LocalLLMs #DevTools

    View Release

  • Ollama – v0.14.0-rc1

    Ollama v0.14.0-rc1 just dropped, and it's generating magic.

    Meet z-image: Ollama's first foray into local AI image generation. Now you can type `ollama generate z-image "a cat in a spacesuit"` and watch your terminal turn text into visuals, all offline, all on your machine. No cloud. No waiting. Just pure local AI vibes.
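
    If you want to kick the tires, here's a minimal sketch. Only the `generate` line is quoted from the notes above; the `pull` step and the model tag are assumptions and may change before the final release.

    ```bash
    # Assumption: the experimental image model is published under the "z-image" tag.
    ollama pull z-image
    # Quoted from the release notes; experimental, so the syntax may still change.
    ollama generate z-image "a cat in a spacesuit"
    ```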

    This is experimental (yes, bugs ahead!), but it's huge: Ollama is going multimodal. Text and images, all from your CLI or API, just like LLMs.

    GGUF? Still supported. Custom models? Yep. Now with pixels.

    Docs are coming soon, but if you're brave, go ahead and test it. Train your own z-image models. Make a robot squirrel in a trench coat. The future is local now.

    View Release

  • Lemonade – v9.1.3

    Lemonade v9.1.3 just dropped, and your local LLM rig just got a serious upgrade!

    • Remote Access: Reach your `lemonade-server` instance from other devices, so you can drive your locally hosted models even from your phone (see the sketch after this list).
    • Save Custom Loads: Stop retyping parameters! Use the CLI or `/load` to save and reload model configurations instantly.
    • LFM2.5 Support: Powered by FastFlowLM v0.9.25 for faster, smoother inference with zero setup hassle.
    • Docker Ready: Official image plus a GitHub Actions pipeline, so you can deploy in seconds, not hours.
    • Fedora Love: Native RPM packages are now available; no more compiling from source.
    • Cleaner Backend: Removed state from the llamacpp integration; fewer leaks, more stability.
    • Server Detection Fixed: Linux users, auto-detection finally works as intended.
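
    Here's a rough sketch of the remote-access flow. The `serve` subcommand and the bind flag below are assumptions drawn from the release notes, not verified CLI options; check `lemonade-server --help` for the real names.

    ```bash
    # Hypothetical invocation: expose the server beyond localhost so other
    # devices on your network can reach it.
    lemonade-server serve --host 0.0.0.0
    # Then open http://<server-ip>:<port> from your phone or laptop.
    ```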

    Big props to @SidShetye and the crew!

    Grab it, containerize it, remote-control your LLMs, and go play.

    Full changelog: [v9.1.2…v9.1.3](link)

    View Release

  • Ollama – v0.14.0-rc0: Add experimental MLX backend and engine with imagegen support (#13648)

    Ollama v0.14.0-rc0 just landed, and Apple Silicon fans, this one's for you.

    Say hello to experimental MLX backend support: run LLMs natively on M-series chips without CUDA or PyTorch overhead. Faster, leaner, and totally Apple-native.

    What's new?

    • Image generation: yes, you can now generate images directly via Ollama (early but wildly cool)
    • Built-in build toggles: `cmake --preset MLX` and `go build -tags mlx .` for easy custom compiles (see the sketch after this list)
    • Full macOS support: x86 and ARM builds are ready, CPU-only for now (GPU acceleration coming soon!)
    • Cleaner docs and improved tokenizer guides, because nobody likes cryptic configs
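
    For reference, here's how those two toggles fit together, assuming you're building from an Ollama source checkout on macOS with CMake and Go installed:

    ```bash
    # Configure the experimental MLX backend, then build the binary with the mlx tag.
    cmake --preset MLX
    go build -tags mlx .
    ```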

    This is still a release candidate, so expect bugs... but if you're on a Mac and want to skip the bloat, now's your chance. Break it, tweak it, report it; we're all in this together.

    #MLX #AppleSilicon #ImageGen #Ollama #AIOnMac

    View Release

  • Text Generation Webui – v3.23

    Chat UI got a glow-up! Tables and dividers now look clean, crisp, and way easier to read; perfect for scrolling through long model outputs without eye strain.

    Bug fixes that actually matter:

    • Models with `eos_token` disabled? No more crashes! Huge props to @jin-eld.
    • Symbolic link issues in `llama-cpp-binaries` fixed; non-portable installs breathe easier now.

    Backend power-up:

    • `llama.cpp` updated to the latest commit (`55abc39`) → faster, smoother inference
    • `bitsandbytes` bumped to 0.49 → better quantization, fewer OOMs, more stable loads

    PORTABLE BUILDS ARE LIVE!

    Download. Unzip. Run. No install needed.

    • NVIDIA? → `cuda12.4`
    • AMD/Intel GPU? → `vulkan`
    • CPU-only? → `cpu`
    • Mac Apple Silicon? → `macos-arm64`

    Updating? Just grab the new zip, unzip it, and drop your old `user_data` folder in. All your models, settings, and themes are still there. Zero reconfiguring.
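
    A minimal sketch of that update flow on Linux; the archive and folder names are illustrative, not exact release-asset names:

    ```bash
    # Unpack the new portable build next to the old one (names are illustrative).
    unzip textgen-portable-3.23-linux-cuda12.4.zip -d textgen-3.23
    # Carry over models, settings, and themes from the previous install.
    cp -r textgen-3.22/user_data textgen-3.23/
    # Launch; the exact start script name varies by platform, so check the README.
    cd textgen-3.23 && ./start_linux.sh
    ```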

    Go play. No setup. Just pure LLM magic.

    View Release

  • ComfyUI – v0.8.2

    ComfyUI v0.8.2 is live: quiet updates, big improvements!

    • Fixed edge-case crashes in heavy workflows (custom nodes + memory-heavy chains, we see you).
    • Smoother node connections and improved drag-and-drop feel; tiny tweaks, huge UX win.
    • Custom node icons now load reliably after restarts (no more ghost placeholders!).
    • Security dependencies updated: clean, safe, no breaking changes.

    Perfect for 24/7 creators who just want their pipelines to work. No flashy new nodes... but if you've been battling glitches, this is your upgrade.

    Keep crafting, and stay comfy!

    View Release

  • ComfyUI – v0.8.1

    ComfyUI v0.8.1 is live, and it's a quiet hero update!

    • Fixed critical node crashes: KSampler and ImageScaleToTotalPixels now play nice, with no more mid-generation meltdowns.
    • Better memory management: large models plus batch processing? Smoother than ever, even on low-RAM rigs.
    • Updated torch and numpy: under-the-hood upgrades for rock-solid stability on Windows, Mac, and Linux.
    • UI polish: node labels finally stay readable after zooming. No more text soup!

    Pro tip: if you've been battling “Node crashed” errors, this is your sign to update. No breaking changes, just pure, stable, render-ready vibes.

    Grab it: https://github.com/Comfy-Org/ComfyUI/releases/tag/v0.8.1

    Your workflows just got a lot more reliable.

    View Release

  • Lemonade – v9.1.2

    Lemonade v9.1.2 just dropped, and your local LLM game just leveled up.

    Custom Model Recipes: Build and share your own configs with `--extra-models-dir`; tweak prompts, quantization, or hardware settings on the fly.

    Native NPU/ROCm Support: Run leaner and faster on the ROG Ally X and Ryzen AI Z2 Extreme, with no workarounds needed.

    LM Studio GGUF Ready: Point `--extra-models-dir` at your GGUF files and they are auto-detected with an `extra.` prefix.
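
    A minimal sketch of the GGUF pickup, assuming the flag is passed when starting the server; the LM Studio path is illustrative and depends on where your GGUF files actually live:

    ```bash
    # Assumptions: the server accepts the flag at launch, and this is where
    # LM Studio keeps its downloaded GGUF files on this machine.
    lemonade-server serve --extra-models-dir ~/.lmstudio/models
    # Detected models then show up in the model list with an "extra." prefix.
    ```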

    FastFlowLM 0.9.24: Smoother inference, smarter memory use, fewer crashes; your CPU/GPU will thank you.

    Dockerized Build Guide: New step-by-step docs for containerizing Lemonade CPP, perfect for dev environments.

    UX Upgrades: Cleaner UI, fixed mobile layout, centered subtitles, and smarter model reloads after FLM updates.

    API Key Support: Lock down your local endpoint via `.env` to keep your models private and secure.

    New Model: Meet Nemotron 3 Nano; tiny footprint, big brainpower for edge devices.

    Plus: a sleeker site, better docs, fewer headaches. Your local AI rig just got a serious upgrade.

    View Release

  • ComfyUI – v0.8.0

    ComfyUI v0.8.0 just dropped, and it's a game-changer.

    • Native WebSockets → Real-time node updates and smoother previews (even on mobile!). No more sluggish HTTP polling.
    • Built-in Node Registry → Install custom nodes with a click. No more manual folder diving; just search, install, and go.
    • Memory Boost → 50-node workflows? Still running smoothly. Better garbage collection means fewer crashes during late-night renders.
    • Smart Node Search → Fuzzy matching is now live; type “upscale” and find every relevant node in seconds.
    • Python 3.10+ Only → Dropped 3.9 for speed and stability. Time to upgrade!
    • Dark Mode Pro → Smoother contrast, crisper icons. Your eyes will thank you at 3 AM.

    Windows users: the new installer auto-detects CUDA and installs the right torch version. No more “why won't it run?!”

    Plus: 12+ community nodes are already live in the registry. Update now, tweak your flows, and let the AI art flow.

    View Release

  • MLX-LM – v0.30.2

    mlx-lm v0.30.2 is out: a quiet update, but a big win for Apple Silicon devs!

    This patch fixes a sneaky build/deploy issue in the release pipeline. No flashy features, but if you've been wrestling with install errors or import failures on M-series chips, this is the fix you've been waiting for.

    Upgrade now and get back to running LLMs smoothly: Hugging Face models, quantization, long-context generation... all working as intended.
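
    Assuming you installed from PyPI, the upgrade is a one-liner:

    ```bash
    # Pull the patched release, then confirm what got installed.
    pip install -U mlx-lm
    pip show mlx-lm   # should report version 0.30.2
    ```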

    Full details: [changelog](v0.30.1…v0.30.2)

    Stay sharp, stay Apple-silicon-powered!

    View Release