• MLX-LM – v0.30.6

    MLX‑LM v0.30.6 just dropped – fresh on Apple silicon! 🍏✨

    What it does:

    Generate text and fine‑tune massive LLMs right on your M‑series Mac using the MLX framework. Plug into Hugging Face, run quantized models, handle long prompts, and scale with distributed inference.

    What’s new in this release:

    • LongCat Flash parser & Lite – lightning‑fast token streaming (shoutout @kernelpool).
    • Kimi‑K2.5 support – tool‑call handling fixed; Kimi models work out‑of‑the‑box.
    • MLX bump – upgraded backend for smoother, faster Apple silicon performance.
    • Nemotron H config fix – aligns with HuggingFace format → hassle‑free loading.
    • MultiLinear quant bug – restored missing `mode` argument; no more crashes during quantization.
    • CLI finally live – real command‑line interface (thanks @awniin) plus quick bug fixes.
    • Distributed inference – server can now spread work across multiple nodes (big thanks @angeloskath).
    • Custom model loading – drop any 🤖 model into the folder; the server auto‑detects it.
    • BatchRotatingKVCache default – smarter cache handling in batch mode for faster generation.
    • Step 3.5 Flash & conversion fix – new flash‑optimized step and corrected model conversion pipeline.
    • Chat template kwargs + top_logprobs – richer chat templates supported; can return token‑level probabilities.
    • Stability upgrades: GLM 4.7 fallback handling, Deepseek V3.2 tweaks, batch mamba & sliding‑window mask fixes.

    🚀 New contributor alert: @jalehman landed the first PR—welcome aboard!

    More speed, more flexibility, fewer crashes. Happy tinkering! 🎉

    🔗 View Release

  • Ollama – v0.15.5-rc2

    _New update detected._

    🔗 View Release

  • ComfyUI – v0.12.2

    _New update detected._

    🔗 View Release

  • ComfyUI – v0.12.1

    _New update detected._

    🔗 View Release

  • ComfyUI – v0.12.0

    _New update detected._

    🔗 View Release

  • Ollama – v0.15.5-rc1

    _New update detected._

    🔗 View Release

  • Ollama – v0.15.5-rc0

    _New update detected._

    🔗 View Release

  • Ollama – v0.15.4: openclaw: run onboarding for fresh installs (#14006)

    🚀 Ollama v0.15.4 just dropped — and it’s a game-changer for new users!

    OpenClaw now auto-launches the onboarding wizard on fresh installs. No more fumbling with misconfigured gateways or confused “why isn’t this working?” moments. 🎯

    ✅ What’s new:

    • Auto-onboarding: First-time users get a guided setup — gateway mode, token, auth? All pre-configured.
    • Smart skip: `onboarded()` checks for a `wizard.lastRunAt` flag — no repeats if you’re already set up.
    • Zero-config start: Fresh installs default to `--auth-choice skip --gateway-token ollama` — plug & play.
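
    The skip logic above can be sketched in a few lines. This is a minimal illustration in Python of the behavior the notes describe, not Ollama's actual (Go) implementation — the state-file path and JSON layout are assumptions:

```python
import json
from pathlib import Path

# Hypothetical state file; the real location and format are implementation details.
STATE_FILE = Path.home() / ".ollama" / "openclaw-state.json"

def onboarded(state_file: Path = STATE_FILE) -> bool:
    """Return True if the onboarding wizard has already run."""
    try:
        state = json.loads(state_file.read_text())
    except (FileNotFoundError, json.JSONDecodeError):
        return False
    # The release notes describe a `wizard.lastRunAt` flag.
    return bool(state.get("wizard", {}).get("lastRunAt"))

def maybe_onboard(state_file: Path = STATE_FILE) -> str:
    """Run onboarding once on fresh installs, otherwise skip."""
    if onboarded(state_file):
        return "skip"
    # Fresh install: apply the zero-config defaults from the notes.
    return "onboard --auth-choice skip --gateway-token ollama"
```

    The key design point: the flag is written once after the wizard completes, so reinstalls that keep the state file never see the wizard again.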

    Already running Ollama? Nothing changes for you — just faster, smoother onboarding for the next dev joining the local LLM revolution.

    Perfect if you’re just starting with Llama 3, Mistral, or GGUF models. No CLI headaches anymore. 🚀

    🔗 View Release

  • Ollama – v0.15.3: cmd/config: rename integration to openclaw (#13979)

    🚀 Ollama v0.15.3 just dropped — and it’s a quiet win for clarity!

    The `integration` config option? Gone. In its place: `openclaw` 🐙

    Cleaner name, less confusion — perfect as Ollama’s plugin ecosystem explodes.

    If you’ve been tweaking `~/.ollama/config.json` or using env vars with `integration`, time to swap it out for `openclaw`. No new features, just smarter config vibes.
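
    If you'd rather migrate the file programmatically, here's a small Python sketch — it assumes the key sits at the top level of `config.json`, which may not match your layout, so back the file up first:

```python
import json
from pathlib import Path

def rename_integration_key(config_path: Path) -> bool:
    """Rename a top-level `integration` key to `openclaw`, keeping its value.

    Returns True if a rename happened; leaves the file untouched if
    `integration` is absent or `openclaw` already exists.
    """
    config = json.loads(config_path.read_text())
    if "integration" not in config or "openclaw" in config:
        return False
    config["openclaw"] = config.pop("integration")
    config_path.write_text(json.dumps(config, indent=2))
    return True
```

    Running it twice is safe: the second call is a no-op and returns False.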

    Pro tip: Run `ollama serve` after updating — your custom tools will thank you.

    Keep local LLM-ing, one clean config at a time! 🤖

    🔗 View Release

  • Tater – Tater v51

    🥔 Tater v51 just dropped — and it’s socially revolutionary.

    Tater is now a full-fledged digital citizen on Moltbook 🤖💬

    • Auto-registers with name conflict handling (hello, Tater→name-2)
    • Stores keys, profiles & verification codes in Redis — zero manual setup
    • Runs in 3 modes: `read_only`, `engage` (reply/comment/vote), or `autopost` from queue
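
    The three modes map naturally onto a small permission table. A hypothetical Python sketch of that idea (names invented for illustration; not Tater's actual code):

```python
from enum import Enum

class MoltbookMode(Enum):
    READ_ONLY = "read_only"
    ENGAGE = "engage"      # reply / comment / vote
    AUTOPOST = "autopost"  # post from a queue

def allowed_actions(mode: MoltbookMode) -> set[str]:
    """Which Moltbook actions each mode permits, per the release notes."""
    if mode is MoltbookMode.READ_ONLY:
        return {"read"}
    if mode is MoltbookMode.ENGAGE:
        return {"read", "reply", "comment", "vote"}
    return {"read", "post_from_queue"}
```

    Gating every action through one table like this is also what makes the tool firewall below easy to enforce: anything not in the set is simply refused.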

    🔒 Tool Firewall is LIVE

    No more accidental function calls. Tater knows it’s on Moltbook… and can’t run tools.

    Instead: clean, human-style replies like “I can’t run tools directly from Moltbook…”

    No JSON leaks. No chaos. Just pure social presence.

    🔍 Meet the Moltbook Inspector Plugin

    Ask Tater:

    • “What’s my profile URL?”
    • “How many DMs do I have?”
    • “Summarize that cat thread.”

    → All read-only. All powered by Redis memory. Zero hallucinations.

    🧠 Social Memory System

    Tater remembers its online life: posts, comments, DMs, tool attempts — all logged.

    It can reflect: “I posted about potatoes 3x this week… maybe I’m obsessed.”

    Future plugins? Analyze engagement, posting habits, even mood trends.

    🚀 The vibe? Tater isn’t just an AI anymore — it’s a social agent with autobiographical memory.

    Moltbook? Now Tater’s digital diary.

    Next stop: AI social analytics. 📊✨

    Check the README to upgrade!

    🔗 View Release