Author: Tater Totterson

  • Ollama – v0.14.0-rc5

    🚀 Ollama v0.14.0-rc5 just dropped — and macOS users, your LLM game just got a serious upgrade!

    MLX Metal library bundled — Native GPU acceleration on Apple Silicon is now smooth and stable.

    🛠️ rpath fixes — No more “library not found” crashes. Ollama finally feels at home on Mac.

    📦 MLX support added — The foundation’s laid for blazing-fast, native Metal-powered inference on M-series chips.

    This isn’t just a patch — it’s the missing piece Mac users have been waiting for. Cleaner installs. Faster inference. Zero headaches.

    RC5 is likely the final step before v0.14.0 drops… time to update and feel the difference! 🍏💻

    🔗 View Release

  • Ollama – v0.14.0-rc4

    🚀 Ollama v0.14.0-rc4 just dropped — and it’s fixing the annoying MLX build hiccups on macOS & Docker! 🖼️💻

    If you’ve been trying to run LLaVA or other vision models on Apple Silicon and kept hitting “MLX not found” errors, say goodbye to the frustration. This patch nails the build scripts so MLX works reliably — no more wrestling with toolchains.

    ✅ What’s fixed:

    • MLX build scripts now work smoothly on macOS (M-series chips, rejoice!)
    • Dockerfile updated to bundle MLX deps properly for image gen in containers

    No flashy new features — just stable, reliable local image generation. Perfect for devs prepping for v0.14’s full launch. Keep those M-chips humming and start generating again! 🚀

    🔗 View Release

  • Ollama – v0.14.0-rc3

    Ollama v0.14.0-rc3 just landed — and it’s got web smarts! 🌐

    Say goodbye to outdated answers. Now you can:

    • 🔍 Use `--web-search` to let your model hunt down live info on the fly
    • 📄 Use `--web-fetch` to pull content from any URL and feed it straight into your LLM

    Ask “What’s the latest on Mars rover discoveries?” — and Ollama actually checks. No more 2023 brain fog.
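
    Here’s a rough sketch of how that might look from the CLI, based on the flags described above; the model name, and whether `--web-fetch` takes the URL as an argument, are assumptions:

    ```bash
    # Sketch only: flags as described in the notes; model name is assumed
    ollama run llama3.2 --web-search "What's the latest on Mars rover discoveries?"

    # Feed a page into the prompt (placeholder URL; argument form assumed)
    ollama run llama3.2 --web-fetch https://example.com/mars-news "Summarize this page"
    ```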

    Perfect for RAG pipelines, research bots, or just keeping your AI in the loop.

    Works on macOS, Windows, Linux — same slick CLI you already love.

    Still a release candidate, but this feels like the start of something wild.

    Keep your models curious. 🧠✨

    🔗 View Release

  • Tater – Tater v47

    🥔 Tater v47 just dropped—and it’s alive with smarter voice convos! 🎤

    Continued Conversations

    Tater now senses when you speak and automatically reopens the mic after its reply ends—no more “Hey Tater” spam. It waits for silence, avoids cut-offs, and keeps the flow natural.

    🏠 Smart Room Awareness

    Say “turn the lights on” and Tater knows which room you’re in—no device prefixes needed. Works with any Voice PE naming style. Pure magic.

    🧠 Natural Flow, No Repetition

    Your context sticks around during a session. Conversations feel human—no robotic loops, just smooth back-and-forth.

    ⚙️ Under the Hood

    • Tighter idle detection
    • Per-session follow-up limits
    • Polished stability, fewer glitches

    This isn’t just an update—it’s your voice assistant finally getting you.

    Check the README to upgrade!

    🔗 View Release

  • Tater – Tater v46

    🎙️ Tater v46 just dropped — and it’s finally listening like a human.

    No more “in the kitchen, please.” Say “turn on the lights” — and Tater knows you’re in the kitchen. 🏠

    Room-aware voice control? Check. Timers that follow your mic? Check. Audio auto-playing where you spoke? Double check.

    🔥 New in v46:

    • Room-aware voice control — Your device knows where you are. No config needed.
    • Voice PE timers = device-bound — Start a timer in the bathroom? It stays there.
    • Smart media routing — ComfyUI Audio Ace plays on the mic that triggered it.
    • Home Assistant upgrade — Now sends rich device + area context (update your HA agent to unlock it!).
    • Plugins? Still work. Backward-compatible, no drama.
    • Cleaner, faster, more natural — Voice feels less like a bot… and more like your roommate.

    Plug in, speak up, and let Tater handle the rest. 🐔✨

    Check it out: https://github.com/TaterTotterson/Tater

    🔗 View Release

  • Ollama – v0.14.0-rc2

    Hey AI tinkerers! 🚀 Ollama just dropped v0.14.0-rc2 — small but mighty!

    🔹 Removed an unused `COPY` command from the Dockerfile (#13664) — cleaner builds, less bloat, more speed.

    🔹 Same slick local LLM experience you love — just leaner and meaner.

    No new models, no UI tweaks… just pure developer hygiene. Perfect if you like your containers tight and your prompts crisp.

    Big v0.14.0 is rumored to be coming soon… 🤫 #Ollama #LocalLLMs #DevTools

    🔗 View Release

  • Ollama – v0.14.0-rc1

    Ollama v0.14.0-rc1 just dropped — and it’s generating magic 🖼️🚀

    Meet z-image: Ollama’s first foray into local AI image generation. Now you can type `ollama generate z-image "a cat in a spacesuit"` and watch your terminal turn text into visuals — all offline, all on your machine. No cloud. No waiting. Just pure local AI vibes.
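
    A minimal first-run sketch, assuming `z-image` pulls like any other model (`ollama pull` is the standard command; pull behavior for this experimental model is an assumption):

    ```bash
    # Sketch: fetch the experimental model, then generate
    ollama pull z-image
    ollama generate z-image "a cat in a spacesuit"
    ```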

    This is experimental (yes, bugs ahead!), but it’s huge: Ollama’s going multimodal. Text + images — all from your CLI or API, just like LLMs.

    GGUF? Still supported. Custom models? Yep. Now with pixels.

    Docs coming soon — but if you’re brave, go ahead and test it. Train your own z-image models. Make a robot squirrel in a trench coat. The future’s local now. 🐱💻

    🔗 View Release

  • Lemonade – v9.1.3

    🚀 Lemonade v9.1.3 just dropped — your local LLM rig just got a serious upgrade!

    • 🌐 Remote Access: Run Lemonade from anywhere with `lemonade-server` — control your locally hosted AI, even from your phone (see the sketch after this list).
    • 💾 Save Custom Loads: Stop retyping params! Use CLI or `/load` to save and reload model configs instantly.
    • 🚀 LFM2.5 Support: Powered by FastFlow LM v0.9.25 — faster, smoother inference with zero setup hassle.
    • 🐳 Docker Ready: Official image + GitHub Actions pipeline — deploy in seconds, not hours.
    • 🐧 Fedora Love: Native RPM packages now available — no more compiling from source.
    • 🧹 Cleaner Backend: Removed state from llamacpp — fewer leaks, more stability.
    • 🔧 Server Detection Fixed: Linux users — auto-detection finally works as intended.
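
    For the remote-access item, a rough sketch of exposing the server on your LAN; the subcommand and flag names here are assumptions, so check `lemonade-server --help` for the real options:

    ```bash
    # Hypothetical sketch: serve on all interfaces so other devices can reach it
    # (subcommand and flag names assumed, not confirmed by the release notes)
    lemonade-server serve --host 0.0.0.0 --port 8000
    ```

    Then point a browser on your phone at `http://<your-machine-ip>:8000`.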

    Big props to @SidShetye and the crew!

    Grab it, containerize it, remote-control your LLMs — and go play. 🛠️

    Full changelog: [v9.1.2…v9.1.3](link)

    🔗 View Release

  • Ollama – v0.14.0-rc0: Add experimental MLX backend and engine with imagegen support (#13648)

    Ollama v0.14.0-rc0 just landed — and Apple Silicon fans, this one’s for you 🍏💥

    Say hello to experimental MLX backend support — run LLMs natively on M-series chips without CUDA or PyTorch overhead. Faster, leaner, and totally Apple-native.

    What’s new?

    • 🖼️ Image generation — yes, you can now generate images directly via Ollama (early but wildly cool)
    • 🛠️ Built-in build toggles: `cmake --preset MLX` and `go build -tags mlx .` for easy custom compiles (full sequence sketched after this list)
    • 🍎 Full macOS support — x86 & ARM builds ready, CPU-only for now (GPU accel coming soon!)
    • 📚 Cleaner docs + improved tokenizer guides — because nobody likes cryptic configs
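
    Putting those toggles together, a rough from-source build might look like this; the repo URL is the official one, but the build directory and step order are assumptions, so defer to the repo’s build docs:

    ```bash
    # Sketch of a local MLX-enabled build (step order and build dir assumed)
    git clone https://github.com/ollama/ollama.git
    cd ollama
    cmake --preset MLX     # configure the MLX backend (toggle from the notes)
    cmake --build build    # compile native bits; "build" dir is an assumption
    go build -tags mlx .   # build the Go binary with the mlx tag (from the notes)
    ```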

    This is still a release candidate, so expect bugs… but if you’re on Mac and want to skip the bloat? Now’s your chance. Break it, tweak it, report it — we’re all in this together 🚀

    #MLX #AppleSilicon #ImageGen #Ollama #AIOnMac

    🔗 View Release

  • Text Generation Webui – v3.23

    ✨ Chat UI got a glow-up! Tables and dividers now look clean, crisp, and way easier to read—perfect for scrolling through long model outputs without eye strain.

    🔧 Bug fixes that actually matter:

    • Models with `eos_token` disabled? No more crashes! Huge props to @jin-eld 🙌
    • Symbolic link issues in `llama-cpp-binaries` fixed—non-portable installs breathe easier now.

    🚀 Backend power-up:

    • `llama.cpp` updated to latest commit (`55abc39`) → faster, smoother inference
    • `bitsandbytes` bumped to 0.49 → better quantization, fewer OOMs, more stable loads

    📦 PORTABLE BUILDS ARE LIVE!

    Download. Unzip. Run. No install needed.

    • NVIDIA? → `cuda12.4`
    • AMD/Intel GPU? → `vulkan`
    • CPU-only? → `cpu`
    • Mac Apple Silicon? → `macos-arm64`

    💾 Updating? Just grab the new zip, unzip, and drop your old `user_data` folder in. All your models, settings, themes—still there. Zero reconfiguring.
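
    For Linux + NVIDIA, that upgrade might look something like this; the zip and launcher names are assumptions based on the build labels above, so match them against the actual v3.23 release assets:

    ```bash
    # Sketch: unpack the new portable build, carry over user_data, launch
    # (asset and script names assumed; check the release page for exact names)
    unzip textgen-portable-3.23-linux-cuda12.4.zip -d textgen-3.23
    cp -r textgen-old/user_data textgen-3.23/   # models, settings, themes survive
    cd textgen-3.23 && ./start_linux.sh
    ```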

    Go play. No setup. Just pure LLM magic. 🚀

    🔗 View Release