Category: AI

AI Releases

  • Ollama – v0.14.0-rc6

    Ollama v0.14.0-rc6 is here – quiet update, big win for devs! 🛠️

    CMake now keys off `CMAKE_SYSTEM_PROCESSOR` instead of the macOS-specific `CMAKE_OSX_ARCHITECTURES`, so building from source on M1/M2 Macs just got way smoother. No more arch-detection headaches – clean, future-proof, and cross-platform ready.
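    For readers who build from source, here's a minimal sketch of the kind of cross-platform check this change enables. It's a config fragment under assumptions: the `USE_NEON`/`USE_AVX2` definitions are illustrative placeholders, not Ollama's actual build flags.

```cmake
# CMAKE_SYSTEM_PROCESSOR reports the target CPU on every platform,
# whereas CMAKE_OSX_ARCHITECTURES only applies to macOS builds.
if(CMAKE_SYSTEM_PROCESSOR MATCHES "arm64|aarch64")
    message(STATUS "Configuring for ARM64 (e.g. Apple Silicon)")
    add_compile_definitions(USE_NEON)   # illustrative arch-specific flag
else()
    message(STATUS "Configuring for x86_64")
    add_compile_definitions(USE_AVX2)   # illustrative arch-specific flag
endif()
```

    The same branch works unchanged on Linux and Windows, which is the cross-platform win the notes describe.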

    Still a release candidate, but this is the kind of solid under-the-hood polish that makes local LLM running even more reliable. Model optimizations and API tweaks are rumored to drop in the final v0.14.0 soon.

    Keep your Ollama installs fresh – local AI just got a little more stable. 🚀

    #Ollama #LLM #DevTools

    🔗 View Release

  • Ollama – v0.14.0-rc5

    🚀 Ollama v0.14.0-rc5 just dropped – and macOS users, your LLM game just got a serious upgrade!

    ✅ MLX Metal library bundled – native GPU acceleration on Apple Silicon is now smooth and stable.

    🛠️ rpath fixes – no more “library not found” crashes. Ollama finally feels at home on Mac.

    📦 MLX support added – the foundation's laid for blazing-fast, native Metal-powered inference on M-series chips.

    This isn't just a patch – it's the missing piece Mac users have been waiting for. Cleaner installs. Faster inference. Zero headaches.

    RC5 is likely the final step before v0.14.0 drops… time to update and feel the difference! 🍏💻

    🔗 View Release

  • Ollama – v0.14.0-rc4

    🚀 Ollama v0.14.0-rc4 just dropped – and it's fixing the annoying MLX build hiccups on macOS & Docker! 🖼️💻

    Been trying to run LLaVA or other vision models on Apple Silicon, only to keep hitting “MLX not found” errors? Say goodbye to the frustration. This patch nails the build scripts so MLX works reliably – no more wrestling with toolchains.

    ✅ What's fixed:

    • MLX build scripts now work smoothly on macOS (M-series chips, rejoice!)
    • Dockerfile updated to bundle MLX deps properly for image gen in containers

    No flashy new features – just stable, reliable local image generation. Perfect for devs prepping for v0.14's full launch. Keep those M-chips humming and start generating again! 🚀

    🔗 View Release

  • Ollama – v0.14.0-rc3

    Ollama v0.14.0-rc3 just landed – and it's got web smarts! 🌐

    Say goodbye to outdated answers. Now you can:

    • 🔍 Use `--web-search` to let your model hunt down live info on the fly
    • 📄 Use `--web-fetch` to pull content from any URL and feed it straight into your LLM

    Ask “What's the latest on Mars rover discoveries?” – and Ollama actually checks. No more 2023 brain fog.
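    If you script Ollama rather than drive it by hand, the same flag can be composed programmatically. A sketch under assumptions: the model name is a placeholder, and the exact argument order of `--web-search` isn't pinned down by the notes, so verify against `ollama run --help` before executing.

```python
import shlex

def build_web_search_command(model: str, prompt: str) -> list[str]:
    """Compose an `ollama run` invocation using the --web-search flag
    named in the v0.14.0-rc3 notes; argument order is an assumption."""
    return ["ollama", "run", model, "--web-search", prompt]

cmd = build_web_search_command(
    "llama3.2",  # placeholder model name
    "What's the latest on Mars rover discoveries?",
)
print(shlex.join(cmd))
# Once the syntax is confirmed, run it with:
#   subprocess.run(cmd, capture_output=True, text=True)
```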

    Perfect for RAG pipelines, research bots, or just keeping your AI in the loop.

    Works on macOS, Windows, Linux – same slick CLI you already love.

    Still a release candidate, but this feels like the start of something wild.

    Keep your models curious. 🧠✨

    🔗 View Release

  • Tater – Tater v47

    🥔 Tater v47 just dropped – and it's alive with smarter voice convos! 🎤

    ✨ Continued Conversations

    Tater now senses when you speak and automatically reopens the mic after its reply ends – no more “Hey Tater” spam. It waits for silence, avoids cut-offs, and keeps the flow natural.
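    Tater's actual implementation isn't shown in the notes, but the "waits for silence" behaviour described above boils down to end-of-utterance detection: keep the mic open while audio energy stays high, and only close it after enough consecutive quiet frames. A toy sketch of that idea – all names and thresholds are illustrative, not Tater's code:

```python
def utterance_finished(frame_levels, threshold=0.1, quiet_frames_needed=5):
    """Return True once `frame_levels` (per-frame audio energy, 0.0-1.0)
    contains a run of quiet frames long enough to call the utterance done."""
    quiet_run = 0
    for level in frame_levels:
        if level < threshold:
            quiet_run += 1
            if quiet_run >= quiet_frames_needed:
                return True  # sustained silence: safe to close the mic
        else:
            quiet_run = 0    # speech resumed: reset, avoiding cut-offs
    return False

# A short mid-sentence pause (3 quiet frames) does not end the turn...
print(utterance_finished([0.6, 0.5, 0.02, 0.03, 0.01, 0.7]))  # False
# ...but a sustained stretch of silence does.
print(utterance_finished([0.6, 0.5] + [0.02] * 6))            # True
```

    Tuning `quiet_frames_needed` is the trade-off between cutting speakers off and feeling sluggish.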

    ๐Ÿ  Smart Room Awareness

    Say “turn the lights on” and Tater knows which room youโ€™re inโ€”no device prefixes needed. Works with any Voice PE naming style. Pure magic.

    ๐Ÿง  Natural Flow, No Repetition

    Your context sticks around during a session. Conversations feel humanโ€”no robotic loops, just smooth back-and-forth.

    โš™๏ธ Under the Hood

    • Tighter idle detection
    • Per-session follow-up limits
    • Polished stability, fewer glitches

    This isnโ€™t just an updateโ€”itโ€™s your voice assistant finally getting you.

    Check the README to upgrade!

    🔗 View Release

  • Tater – Tater v46

    ๐ŸŽ™๏ธ Tater v46 just dropped โ€” and itโ€™s finally listening like a human.

    No more “in the kitchen, please.” Say “turn on the lights” โ€” and Tater knows youโ€™re in the kitchen. ๐Ÿ 

    Room-aware voice control? Check. Timers that follow your mic? Check. Audio auto-playing where you spoke? Double check.

    ๐Ÿ”ฅ New in v46:

    • Room-aware voice control โ€” Your device knows where you are. No config needed.
    • Voice PE timers = device-bound โ€” Start a timer in the bathroom? It stays there.
    • Smart media routing โ€” ComfyUI Audio Ace plays on the mic that triggered it.
    • Home Assistant upgrade โ€” Now sends rich device + area context (update your HA agent to unlock it!).
    • Plugins? Still work. Backward-compatible, no drama.
    • Cleaner, faster, more natural โ€” Voice feels less like a botโ€ฆ and more like your roommate.

    Plug in, speak up, and let Tater handle the rest. ๐Ÿ”โœจ

    Check it out: https://github.com/TaterTotterson/Tater

    🔗 View Release

  • Ollama – v0.14.0-rc2

    Hey AI tinkerers! 🚀 Ollama just dropped v0.14.0-rc2 – small but mighty!

    🔹 Removed an unused `COPY` instruction from the Dockerfile (#13664) – cleaner builds, less bloat.

    🔹 Same slick local LLM experience you love – just leaner and meaner.

    No new models, no UI tweaks… just pure developer hygiene. Perfect if you like your containers tight and your prompts crisp.

    Big v0.14.0 is rumored to be coming soon… 🤫 #Ollama #LocalLLMs #DevTools

    🔗 View Release

  • Ollama – v0.14.0-rc1

    Ollama v0.14.0-rc1 just dropped – and it's generating magic 🖼️🚀

    Meet z-image: Ollama's first foray into local AI image generation. Now you can type `ollama generate z-image "a cat in a spacesuit"` and watch your terminal turn text into visuals – all offline, all on your machine. No cloud. No waiting. Just pure local AI vibes.

    This is experimental (yes, bugs ahead!), but it's huge: Ollama's going multimodal. Text + images – all from your CLI or API, just like LLMs.

    GGUF? Still supported. Custom models? Yep. Now with pixels.

    Docs coming soon – but if you're brave, go ahead and test it. Train your own z-image models. Make a robot squirrel in a trench coat. The future's local now. 🐱💻
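    The notes say generation works from the API as well as the CLI, but don't document the endpoint. As a hedged sketch, assuming z-image rides Ollama's usual generate-style JSON payload – the endpoint path and field names below are assumptions to verify once the docs land:

```python
import json

def build_imagegen_payload(model: str, prompt: str) -> dict:
    """JSON body mirroring the shape of Ollama's text-generation requests;
    whether z-image accepts this exact shape is an assumption."""
    return {"model": model, "prompt": prompt, "stream": False}

payload = build_imagegen_payload("z-image", "a cat in a spacesuit")
print(json.dumps(payload))
# POST to your local server (path is an assumption), e.g.:
#   curl http://localhost:11434/api/generate -d '<payload>'
```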

    🔗 View Release

  • Lemonade – v9.1.3

    🚀 Lemonade v9.1.3 just dropped – your local LLM rig just got a serious upgrade!

    • 🌐 Remote Access: run `lemonade-server` and reach your locally hosted models from anywhere – even your phone.
    • 💾 Save Custom Loads: stop retyping params! Use the CLI or `/load` to save and reload model configs instantly.
    • 🚀 LFM2.5 Support: powered by FastFlow LM v0.9.25 – faster, smoother inference with zero setup hassle.
    • 🐳 Docker Ready: official image + GitHub Actions pipeline – deploy in seconds, not hours.
    • 🐧 Fedora Love: native RPM packages now available – no more compiling from source.
    • 🧹 Cleaner Backend: removed state from llamacpp – fewer leaks, more stability.
    • ✅ Server Detection Fixed: Linux users – auto-detection finally works as intended.

    Big props to @SidShetye and the crew!

    Grab it, containerize it, remote-control your LLMs – and go play. 🛠️

    Full changelog: [v9.1.2…v9.1.3](link)

    🔗 View Release

  • Ollama – v0.14.0-rc0: Add experimental MLX backend and engine with imagegen support (#13648)

    Ollama v0.14.0-rc0 just landed – and Apple Silicon fans, this one's for you 🍏💥

    Say hello to experimental MLX backend support – run LLMs natively on M-series chips without CUDA or PyTorch overhead. Faster, leaner, and totally Apple-native.

    ✨ What's new?

    • 🖼️ Image generation – yes, you can now generate images directly via Ollama (early but wildly cool)
    • 🛠️ Built-in build toggles: `cmake --preset MLX` and `go build -tags mlx .` for easy custom compiles
    • 🍎 Full macOS support – x86 & ARM builds ready, CPU-only for now (GPU accel coming soon!)
    • 📚 Cleaner docs + improved tokenizer guides – because nobody likes cryptic configs

    This is still a release candidate, so expect bugs… but if you're on Mac and want to skip the bloat? Now's your chance. Break it, tweak it, report it – we're all in this together 🚀

    #MLX #AppleSilicon #ImageGen #Ollama #AIOnMac

    🔗 View Release