• Tater – Tater v56

    🚨 Tater v56 - Cerberus Upgrade is LIVE! 🚨

    The core of Tater just got a massive intelligence overhaul: meet Cerberus, the new 3-phase reasoning engine.

    🔹 Plan → Execute → Validate ensures smarter, step-by-step execution with fewer surprises.

    🛡️ Tool Safety Upgraded

    • Tools only fire when absolutely intended
    • Malformed or accidental calls? Nope. Not today.

    ✉️ Smarter Messaging

    • `send_message` now ignores casual chat and only triggers on clear intent
    • “Send it here” ✅, “Hey, send that” ❌ (nope!)
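    The release doesn't document how the gate works internally, so here is a minimal, hypothetical sketch of intent-based tool gating in Python. The `INTENT_PATTERNS` list and `should_send` helper are illustrative only, not Tater's actual API:

```python
import re

# Hypothetical intent gate: a tool like send_message fires only when the
# user's phrasing matches an explicit-intent pattern (verb + target), not
# a casual mention of "send" without a destination.
INTENT_PATTERNS = [
    re.compile(r"\bsend (it|this|that) (here|to)\b", re.IGNORECASE),
    re.compile(r"\bpost (it|this) (in|to)\b", re.IGNORECASE),
]

def should_send(message: str) -> bool:
    """Return True only when the message shows clear send intent."""
    return any(p.search(message) for p in INTENT_PATTERNS)

print(should_send("Send it here"))    # -> True  (explicit target)
print(should_send("Hey, send that"))  # -> False (casual, no target)
```

    A real implementation would likely use the LLM itself (or a classifier) rather than regexes, but the gating principle is the same: no match, no tool call.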

    🧠 Context That Actually Makes Sense

    • Clean, scoped memory per conversation
    • “Do that again?” → Works now
    • Topic shifts reset context intelligently (no more weird carryover!)
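    The mechanics of per-conversation scoping aren't spelled out in the release, but the idea can be sketched as follows (the `ScopedMemory` class is our own illustration, not Tater's code):

```python
from collections import defaultdict

# Hypothetical per-conversation memory: each conversation id gets its own
# history, and a detected topic shift clears only that scope.
class ScopedMemory:
    def __init__(self) -> None:
        self._scopes: dict[str, list[str]] = defaultdict(list)

    def add(self, convo_id: str, turn: str) -> None:
        self._scopes[convo_id].append(turn)

    def history(self, convo_id: str) -> list[str]:
        return list(self._scopes[convo_id])

    def reset(self, convo_id: str) -> None:
        # Called on a topic shift, so no stale carryover into the new topic.
        self._scopes[convo_id].clear()

mem = ScopedMemory()
mem.add("a", "resize image to 512px")
mem.add("b", "summarize this PDF")
mem.reset("a")              # topic shift in conversation "a" only
print(mem.history("a"))     # -> []
print(mem.history("b"))     # -> ['summarize this PDF']
```

    Because history is keyed per conversation, "Do that again?" can resolve against the right scope instead of whatever happened last globally.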

    📊 Behind-the-Curtain Wins

    ✅ Smoother error handling & retries

    ✅ Reduced token bloat = lower costs & faster responses

    ✅ More deterministic behavior (yes, predictable LLMs are a thing now)

    🎯 What This Means for You

    ✔️ Reliable multi-step workflows (finally!)

    ✔️ Fewer “why did it do that?!” moments

    ✔️ Natural, fluid follow-ups

    ✔️ A rock-solid foundation for future agent smarts

    🔮 What's Next?

    Cerberus sets the stage for long-horizon tasks, learning, and advanced agent behavior, but v56 is all about stability, safety, and raw, reliable power.

    🐶 Cerberus is awake. 🐶🔥

    👉 Check the README to upgrade & explore!

    🔗 View Release

  • Heretic – v1.2.0

    🚨 Heretic v1.2.0 is live, and it's packing serious upgrades! 🚨

    The team just dropped a massive update to Heretic, the fully automatic censorship-removal tool for LLMs, and this one's packed with performance boosts, new features, and rock-solid stability fixes. Here's what's new:

    🔹 Memory & Stability Wins

    • A `max_memory` setting to cap RAM usage 🧠
    • Smarter iteration logic to avoid getting stuck in low-divergence traps ⚙️
    • Magnitude-Preserving Orthogonal Ablation for finer control over refusal suppression 🎯

    🔹 LoRA + Quantization Power-Ups

    • Brand-new LoRA-based abliteration engine 🪄
    • Full 4-bit quantization support: run decensoring on consumer GPUs 🖥️
    • Auto-GPU detection at startup (no more manual config headaches!) 🤖

    🔹 Resilience & Flexibility

    • Run extra optimization trials after the main run finishes 🔄
    • Save/resume support for long jobs; no more lost progress! 💾
    • Fixed MXFP4 loading issues 🛠️
    • Refactored save/load machinery = way more reliable 📦

    🔹 Vision-Language Models? Now Supported!

    • Broad support for VL models (e.g., LLaVA-style) 📸➡️🧠
    • Plus: full type checking, debug tools, slop-reduction configs, and bug fixes 🐞

    🌟 Shoutout to @noctrex, @accemlcc, @anrp, and @salmanmkc: welcome to the crew!

    🔗 Full changelog: v1.1.0 → v1.2.0

    🚀 Grab it now and keep decensoring smarter, faster, and safer!

    🔗 View Release

  • Ollama – v0.16.2-rc0: mlxrunner fixes (#14247)

    🚨 Ollama v0.16.2-rc0 is here, and it's bringing serious Apple Silicon love! 🍎

    This release candidate focuses on fixing critical bugs in the `mlxrunner` backend (hello, M1/M2/M3 users 👋), making it far more reliable for cutting-edge models. Here's what's new:

    🔹 GLM4-MoE-Lite now loads smoothly via `mlxrunner`; no more crashes on M-series!

    🔹 Diffusion models (like Stable Diffusion) finally load without hiccups 🖼️

    🔹 Cleaner logs: less noise, more signal (goodbye, endless debug spam 🧹)

    🔹 `--imagegen` flag now actually works for image generation workflows ✅

    💡 Pro tip: this is a release candidate, so it's the perfect time to test stability before v0.16.2 drops, especially if you're into MoE models or local image gen on Mac.

    Ready to give it a spin? 🧪 Let us know how it goes!

    🔗 View Release

  • Home Assistant Voice PE – 26.2.1

    🚀 Home Assistant Voice PE v26.2.1 is live!

    The latest update (from v25.12.4 → v26.2.1) is all about polish and reliability, perfect for anyone relying on voice control in their smart home, especially offline. Here's what's improved:

    ✅ Media playback is smoother & more stable

    No more skips or dropouts: your voice-triggered music, alerts, and announcements now play cleanly.

    🛠️ TTS timeouts fixed!

    Text-to-speech responses now fully render and play, with no more truncated or missing voice replies. 🎙️✨

    💡 Bonus: the project is now officially sponsored by the Open Home Foundation, a big vote of confidence in its mission for private, local-first voice control. 🏡🔐

    78 releases down, more innovation ahead! 🛠️

    Check it out if you're building or expanding a local, privacy-first voice assistant setup. 🎯

    🔗 View Release

  • Ollama – v0.16.1

    🚨 Ollama v0.16.1 is live! 🚨

    Hey AI tinkerers & local LLM lovers, fresh update incoming! 🔥

    What's new in v0.16.1?

    🔹 New model config added: `minimax-m2.5` 🧠

    • Looks like a fresh MiniMax model variant (internal/experimental for now; keep an eye out for docs!).
    • You can already pull it via `ollama pull minimax-m2.5` if you're feeling adventurous 🛠️

    🔹 Lightweight patch release: no breaking changes, just lean & mean model support upgrades.

    📦 Binaries are rolling out for macOS, Windows, and Linux; grab the latest from GitHub or update via your package manager.

    👉 v0.16.1 Release Notes

    Let us know if you get `minimax-m2.5` running; curious to hear your benchmarks and use cases! 🧪✨

    🔗 View Release

  • Lemonade – v9.3.2

    🚀 Lemonade v9.3.2 is live!

    This one's a quick but important patch, especially if you're rocking AMD GPUs on Linux.

    🔧 What's new/fixed:

    • ✅ Fixed incorrect path for Stable Diffusion ROCm artifacts on Linux

    → Fixes runtime hiccups and ensures proper loading of AMD GPU binaries

    → PR: #1085 | Commit: `5a382c5` (GPG verified!)

    🎯 Why it matters:

    • ROCm users on Linux can now run SD models reliably, with no more path-related crashes or config headaches.
    • No breaking changes, no flashy new features… just solid, quiet reliability 🛠️

    If you're using Lemonade with AMD/NPU/GPU acceleration on Linux, update now! 🐧✨

    Full details: lemonade-sdk/lemonade

    🔗 View Release

  • MLX-LM – v0.30.7

    🚀 MLX-LM v0.30.7 is live, packed with model love, speed boosts, and polish!

    🔥 New Models Added:

    • ✅ GLM-5: a powerful new contender in the LLM space 🧠
    • ✅ Qwen3.5 (text-only): ideal for high-performance, non-vision tasks
    • ✅ DeepSeek V3.2 improvements: faster indexer & smoother weight loading 🛠️
    • ✅ Kimi Linear bugs squashed: now stable & reliable

    🛠️ Tooling Upgrades:

    • 🐍 Pythonic tool calling for LFM2 models (huge thanks to @viktike!)
    • 🧰 New Mistral tool parser: cleaner, more intuitive function/tool integration

    ⚡ Performance & Training:

    • 📈 Faster DSV3.2 generation, thanks to kernel & op-level optimizations
    • 📏 LongCat MLA support: smarter attention for long-context generations
    • 🔁 Validation set now optional in training: faster prototyping!

    👏 Shoutout to our newest contributors, @viktike & @JJJYmmm: welcome to the crew!

    👉 Dive into the details: [v0.30.6 → v0.30.7 Changelog](link-to-changelog)

    Let's push the limits on Apple silicon, together! 🛠️💻✨

    🔗 View Release

  • Home Assistant Voice PE – 26.2.0

    🚨 Home Assistant Voice PE v26.2.0 is live! 🚨

    Hey AI tinkerers & smart home wizards, big shoutout to the latest update of Home Assistant Voice PE, now powered by the awesome folks at the Open Home Foundation 🙌

    🔥 What's new in v26.2.0?

    ✅ Media playback stability improved: fewer stutters, smoother audio/video responses during voice interactions

    🎙️ TTS timeout bug squashed: no more cut-off replies! Full text now plays reliably, every time

    💡 Bonus context: this release builds on 78+ prior releases, and with offline-first voice control (no internet needed!), it's perfect for privacy-focused automations.

    Ready to make your smart home talk back? 🛠️✨

    Check the changelog (25.12.4 → 26.2.0) and upgrade!

    🔗 View Release

  • Ollama – v0.16.0

    🚨 Ollama v0.16.0 is live! 🚨

    The latest drop from the Ollama crew just landed, and while the release notes are light on flashy new features, this one's a quiet but meaningful polish pass. Here's the lowdown:

    🔹 API Docs Fixed!

    The OpenAPI schema for `/api/ps` (list running processes) and `/api/tags` (list local models) has been corrected, meaning better Swagger compatibility, smoother SDK generation, and fewer headaches for integrators. 🛠️
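    For integrators, the practical detail is that `/api/tags` returns its models nested under a `models` key, each with a `name` field. A small Python sketch (the `model_names` and `fetch_local_models` helpers are our own convenience wrappers, not part of Ollama; assumes the default server address):

```python
import json
from urllib.request import urlopen

def model_names(tags_response: dict) -> list:
    """Extract model names from an /api/tags response body."""
    # /api/tags returns {"models": [{"name": "...", "size": ..., ...}, ...]}
    return [m["name"] for m in tags_response.get("models", [])]

def fetch_local_models(host: str = "http://localhost:11434") -> list:
    """Query a running Ollama server for its locally available models."""
    with urlopen(f"{host}/api/tags") as resp:
        return model_names(json.load(resp))
```

    With a local server running, `fetch_local_models()` returns something like `['llama3:latest', ...]`; with the corrected schema, auto-generated clients should produce the same shape.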

    🔹 Stability & Under-the-Hood Tweaks

    Expect refined model loading, improved streaming behavior, and likely minor bug fixes, especially around context handling and memory usage. No breaking changes, just smoother sailing.

    🔹 Still GGUF-Friendly

    All your favorite quantized models (Llama 3, DeepSeek-R1, Phi-4, etc.) keep rolling; no format changes here.

    💡 Pro Tip: if you're building tools or dashboards against Ollama's REST API, this update makes your life easier. Grab the latest binary from GitHub or update via your package manager.

    👉 Full details (when they land): v0.16.0 Release

    Happy local LLM-ing! 🤖✨

    🔗 View Release

  • Ollama – v0.16.0-rc2

    🚨 Ollama v0.16.0-rc2 is out! 🚨

    This release candidate is a light but tidy patch focused on API docs & stability, perfect for keeping your integrations humming. Here's the lowdown:

    🔹 Fixed OpenAPI schema for two key endpoints:

    • `/api/ps`: now correctly documents the list-running-processes response
    • `/api/tags`: updated to reflect accurate model tag listing behavior

    ✅ Why it matters: if you're using SDKs, auto-generated clients, or UI tools that rely on the OpenAPI spec (like Swagger), this ensures they'll work exactly as expected.

    📦 Binaries for macOS, Linux & Windows are already up (2 assets).

    📅 Released by `sam18` on Feb 12 @ 01:37 UTC

    🔗 Commit: `f8dc7c9`

    No flashy new models or breaking changes, just solid polish for the upcoming v0.16.0! 🛠️

    Want a sneak peek at what's actually new in v0.16 (beyond rc2)? Just say the word! 😄

    🔗 View Release