• Ollama – v0.19.0

    🚨 Ollama v0.19.0 is live! 🚨

    Here's what's fresh in this quick-hitting update:

    🔹 New default model for coding: `OpenCode` is now the default model when you run `ollama run` – meaning smoother, smarter out-of-the-box coding assistance for devs jumping into local LLMs.

    🔹 Config-only tweak: No flashy new features or breaking changes – just a smarter default in the config file. Think of it as Ollama's way of saying, "Let's start you off on the right foot."

    🔹 Still rock-solid: All existing models, APIs, and integrations keep humming along – this is a quality-of-life upgrade, not a rewrite.

    💡 Pro tip: If you're building with Ollama for coding workflows, this'll save you a few manual config steps. And if you don't want `OpenCode` as default? Just tweak your config – it's fully customizable.

    🔗 Full release notes: v0.19.0 on GitHub

    Let's get prompting! 🧠✨

    🔗 View Release

  • Ollama – v0.19.0-rc2

    🚨 Ollama v0.19.0-rc2 is here – and it's bringing subtle but meaningful tweaks! 🚨

    The latest release candidate (rc2) is light on flashy changelog entries for now, but here's what we know:

    🔹 OpenCode default model is now configurable

    → A new config setting ensures the right model (likely optimized for coding tasks) is auto-selected when using OpenCode integrations.

    → This improves the out-of-the-box experience for devs jumping into code-generation workflows (think: `deepseek-coder`, `codellama`, etc.).

    🔍 Why it matters:

    • Signals Ollama's deepening support for developer tooling and AI-assisted coding.
    • Sets the stage for more tailored, model-specific configs down the line – think: `Ollama + VS Code`, `JetBrains`, or CLI-based coding assistants.

    ⚠️ Note: The full release notes are still missing (GitHub UI hiccup?), but the commit is verified and merged. Expect more details in the final v0.19.0 drop!

    🚀 Pro tip: Try it out with `ollama pull opencode` (if available) or keep an eye on the repo for updates. Let us know what you test! 🧪

    🔗 View Release

  • Wyoming Openai – Configurable extra_body & STT language fix (0.4.2)

    🚨 Wyoming OpenAI v0.4.2 is live! 🚨

    Hey AI tinkerers & voice-stack builders – big updates just dropped:

    🔹 Configurable `extra_body` for STT & TTS

    New CLI flags (`--stt-extra-body`, `--tts-extra-body`) + env vars let you inject custom JSON into OpenAI-compatible API calls. Think: extra params for voice cloning, custom endpoints, or experimental models – while safely blocking overrides of critical fields like `stream` and `response_format`.
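    Conceptually, that protection works like an ordinary dict merge with a deny-list. A minimal Python sketch of the idea – the function and variable names here are illustrative, not wyoming-openai's actual internals; only `stream` and `response_format` come from the release notes:

    ```python
    # Illustrative sketch: merge user-supplied extra_body JSON into the
    # API call kwargs while refusing to override protected fields.
    import json

    PROTECTED_FIELDS = {"stream", "response_format"}

    def merge_extra_body(request_kwargs: dict, extra_body_json: str) -> dict:
        """Merge extra_body JSON into API kwargs, dropping protected keys."""
        extra = json.loads(extra_body_json)
        safe = {k: v for k, v in extra.items() if k not in PROTECTED_FIELDS}
        return {**request_kwargs, **safe}

    # A user injects a custom param plus a forbidden `stream` override:
    kwargs = merge_extra_body(
        {"model": "whisper-1", "stream": False},
        '{"temperature": 0.0, "stream": true}',
    )
    # `temperature` is injected; `stream` keeps its original value.
    ```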

    ๐Ÿ”น STT Language Fix ๐ŸŒ

    Wyomingโ€™s `Transcribe` events now correctly forward the language tag โ†’ better accuracy for non-default languages (e.g., ๐Ÿ‡ฏ๐Ÿ‡ต Japanese, ๐Ÿ‡ซ๐Ÿ‡ท French). No more silent fallbacks to English!

    โœ… Bonus fixes:

    • ASR state resets on invalid requests (bye-bye, audio ghosts ๐ŸŽญ)
    • TTS buffering upgraded: `list + join` over quadratic string concat โ†’ way faster for long-form audio ๐Ÿƒโ€โ™‚๏ธ
    • +796 lines of new tests (yes, weโ€™re obsessive)
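    The buffering upgrade is a classic Python pattern: repeated `bytes +=` copies the whole buffer on every chunk (quadratic overall), while appending chunks to a list and joining once touches each byte only once. A minimal before/after sketch:

    ```python
    def buffer_quadratic(chunks):
        # Old-style accumulation: each += copies the entire buffer so far,
        # giving O(n^2) behavior over many chunks.
        audio = b""
        for chunk in chunks:
            audio += chunk
        return audio

    def buffer_linear(chunks):
        # list + join: append is O(1), and the single join copies
        # each byte exactly once.
        parts = []
        for chunk in chunks:
            parts.append(chunk)
        return b"".join(parts)

    chunks = [b"\x00\x01"] * 1000
    assert buffer_linear(chunks) == buffer_quadratic(chunks)
    ```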

    📦 Install via `pip install wyoming-openai`, spin up with Docker, or plug straight into Home Assistant. All models – OpenAI, LocalAI, Kokoro, Edge TTS – work out of the box.

    🔥 No API keys needed if you're self-hosting!

    🔗 Changelog: [v0.4.1…v0.4.2](link)

    Let's build smarter voice agents – together! 🧠🎙️

    🔗 View Release

  • Mantella – v0.14 Preview 2

    🚨 Mantella v0.14_preview_2 is here – and it's packed! 🚨

    The AI-powered voice interaction mod for Skyrim and Fallout 4 just leveled up with a massive update – natural conversations just got even more immersive, reliable, and tweakable!

    🔥 What's New?

    • 🎧 Skyrim Whisper model added – optimized STT for vanilla Skyrim voices!
    • 📝 Smarter summaries – LLMs now keep track of convos way better.
    • 🧠 NanoGPT joins the LLM lineup – more model variety, more personality!
    • 📦 Claude prompt caching via OpenRouter – faster & cheaper API calls.
    • 🎛️ Per-NPC config profiles – assign unique LLMs, TTS engines, and parameters to each character!
    • 🎲 Random LLM selector – keep NPCs unpredictable (and fun).
    • 🔐 Secure secret key management via JSON – no more hardcoded keys!
    • 🐧 Linux support + remote service fixes – cross-platform play is real.
    • 🗣️ Server-side push-to-talk – less accidental muttering, more intentional chatting.
    • 🧪 GitHub Actions testing now live – more stable updates ahead!

    🐞 Bugs Squashed & Polished

    • Piper/XTTS launch issues fixed (especially on non-Windows!)
    • Sentence parsing smarter (ellipses ✅, asterisks ❌)
    • Character limit bumped to 450 chars
    • Radiant quests & turn logic fixed → NPCs actually listen now 🗣️
    • Logging cleaned up + weather descriptions… less poetic, more practical ☀️

    📦 Bonus: Prep for `onedir` builds is done – standalone distro coming soon!

    🔗 Dive into the full changelog: [v0.14_preview_1…v0.14_preview_2](link)

    Let's see what wild NPCs you bring to life! 🧙‍♂️✨

    🔗 View Release

  • KittenTTS – 0.8.1

    🚨 KittenTTS v0.8.1 is live! 🚨

    The lightweight, GPU-free TTS powerhouse just dropped – and it's packed with polish 🐾

    🔹 Smarter voice control: New prosody & intonation tweaks for more natural speech (especially in multi-sentence flows).

    🔹 Faster inference: ~15% speedup on CPU thanks to optimized ONNX runtime handling and quantization tweaks.

    🔹 Better multilingual support: Improved phoneme handling for Spanish, French, and Japanese – less robotic "accent carryover"!

    🔹 CLI & API upgrades: New `--speed` and `--pitch` flags for real-time control, plus cleaner JSON output in REST mode.

    🔹 Bug fixes: Fixed crackling audio artifacts on Windows, and resolved memory leaks in long-form synthesis.

    📦 Still under 25MB, still runs on your laptop's CPU – no GPU required.

    🔗 Grab it: https://github.com/KittenML/KittenTTS/releases/tag/0.8.1

    Let's make AI voice actually accessible – one tiny, mighty model at a time 🎤✨

    🔗 View Release

  • MLX-LM – v0.31.1

    🚨 MLX-LM v0.31.1 is out! 🚨

    A quick, stability-focused patch just landed – perfect for keeping your Apple Silicon LLM workflows humming.

    🔧 What's new?

    • ✅ Bug fix: Resolved a crash in `CompletionsDataset` when using the `mask_prompt` option (#967).

    → This means smoother fine-tuning and inference for instruction-based or masked-prompt setups (think RAG, few-shot learning, or SFT).

    No flashy new features this time – just a solid, reliable update to keep things running behind the scenes. 🛠️

    If you're fine-tuning or generating with masked prompts, this one's for you! 🙌

    Want a quick explainer on how `mask_prompt` works? Just ask 👀
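    The short version: prompt masking zeroes out the training loss on prompt tokens so the model learns only from the completion. A toy sketch of the idea (illustrative only, not MLX-LM's actual implementation):

    ```python
    # Conceptual illustration of prompt masking for fine-tuning:
    # tokens belonging to the prompt get zero loss weight, so only the
    # completion tokens contribute to the training signal.

    def build_loss_mask(prompt_len: int, total_len: int) -> list:
        """Return per-token loss weights: 0 for prompt, 1 for completion."""
        return [0] * prompt_len + [1] * (total_len - prompt_len)

    # Example: a 4-token prompt followed by a 3-token completion.
    mask = build_loss_mask(4, 7)
    # Only the last three positions contribute to the loss.
    ```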

    🔗 View Release

  • Text Generation Webui – v4.2

    🚨 Text Generation WebUI v4.2 is LIVE – and it's a big one! 🚨

    🔥 Anthropic API Compatibility (Game-Changer!)

    • Full `/v1/messages` endpoint support – now works out of the box with Claude Code, Cursor, and other Anthropic clients.
    • Supports system messages, content blocks, tools, tool results, images, and even `thinking` blocks.
    • Try it instantly:

    ```bash
    ANTHROPIC_BASE_URL=http://127.0.0.1:5000 claude
    ```
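    Requests to the local endpoint follow the standard Anthropic Messages shape. A minimal Python sketch using only the standard library – the model name is a placeholder, since the WebUI answers with whichever model you have loaded:

    ```python
    # Sketch of a request against the Anthropic-compatible /v1/messages
    # endpoint; assumes the WebUI is serving on 127.0.0.1:5000.
    import json
    import urllib.request

    def build_messages_request(prompt: str) -> dict:
        # Standard Anthropic Messages payload shape.
        return {
            "model": "local-model",  # placeholder; WebUI uses the loaded model
            "max_tokens": 256,
            "system": "You are a concise assistant.",
            "messages": [{"role": "user", "content": prompt}],
        }

    def ask(prompt: str, base_url: str = "http://127.0.0.1:5000") -> dict:
        req = urllib.request.Request(
            f"{base_url}/v1/messages",
            data=json.dumps(build_messages_request(prompt)).encode(),
            headers={"content-type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)
    ```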

    🎨 Fresh UI Makeover

    • Sleek, modern theme with refined colors, borders, and buttons – now polished in both light and dark mode.

    ⚙️ CLI Flexibility Upgrade

    • `--extra-flags` now accepts literal flags (e.g., `--extra-flags "--rpc --jinja"`), not just key=value pairs – perfect for advanced configs.

    📚 Training Improvements

    • ✅ `gradient_checkpointing` enabled by default → lower VRAM usage, smoother training.
    • Removed arbitrary `higher_rank_limit`.
    • Training UI reorganized for clarity and ease of use.

    📦 Plus: All the usual goodness – offline-first, multi-backend (llama.cpp, Transformers, ExLlamaV3, etc.), file uploads, web search, extensions, and an OpenAI-compatible API.

    🚀 Grab the update – your local LLM workflow just got a lot more powerful (and pretty!). 🧠✨

    🔗 View Release

  • ComfyUI – v0.18.3

    🚨 ComfyUI v0.18.3 is live! 🚨

    Just dropped – minor patch, but packed with subtle polish:

    🔹 Updated workflow templates to v0.9.38

    → Ensures smoother compatibility with shared workflows & examples

    → Especially helpful if you use templates from the community or docs

    🔹 PR #13176 (by `comfyui-wiki`)

    → A behind-the-scenes chore update – think: cleaner scaffolding for future features

    No flashy new nodes this time, but solid groundwork for what's coming next. 🛠️

    If you're on v0.18.x, this is a safe & recommended upgrade – especially for template-heavy users!

    Curious about the commit (`173e1aa`) or PR details? Let me know – happy to dig deeper! 🕵️‍♂️

    🔗 View Release

  • Lemonade – v10.0.1

    🚨 Lemonade v10.0.1 is out – and it's a big one for local LLM lovers! 🍋⚡

    Here's what's fresh in this release:

    🔥 Linux Love & Packaging Overhaul

    • 📦 Debian/Ubuntu users: `.deb` files are gone – install via the official PPA:

    ```bash
    sudo add-apt-repository ppa:lemonade-team/stable
    sudo apt install lemonade-server
    ```

    • 🐧 Fedora 43 + `.rpm`, `.AppImage` support added!
    • 🐳 Docker images now include FastFlowLM (FLM) and `libwebsockets`.
    • 🖥️ Linux now has system tray support via AppIndicator3.

    🧠 GGUF & Model Performance Boost

    • 🚀 llama.cpp uplifted to commit `b8460` – includes Qwen3.5 optimizations, especially for NPU acceleration!
    • 🧬 Qwen3.5-4B now runs on NPU (via FastFlowLM) – faster, leaner, and way more efficient.
    • 🔍 GGUF model discovery is cleaner: only text-generation models show up in the Hugging Face search. Less noise, more speed!

    🛠️ Polish & Fixes

    • ✅ `ffmpeg` now recommended for Whisper audio conversion.
    • 🚫 Mic disabled on insecure Windows sessions (security win!).
    • ⚙️ Config overrides via `conf.d/` directory – easier customization.
    • 📦 Standalone CLI tool added for power users.
    • 🖼️ UI upgrades: revamped model selection + TTS voice combobox.
    • 🔒 Streaming errors handled better – no more stuck responses!
    • 🪙 Windows installers are now signed (thanks, SignPath!).

    Big thanks to new contributors `@de-wim`, `@timothycarambat`, and `@github-actions[bot]` 🤖

    👉 Grab the update: Installation Options

    📄 Full changelog: v10.0.0…v10.0.1

    Let's get those LLMs humming on local hardware! 🚀🧠💻

    🔗 View Release

  • Ollama – v0.19.0-rc1

    🚨 Ollama v0.19.0-rc1 is out! 🚨

    Big news for Apple Silicon users and vision model fans – this release candidate is all about fixing MLX-based vision capabilities! 🍎👁️

    🔹 mlx: fix vision capability + min version

    • Restores or improves support for image-processing models (like LLaVA) when running on Apple Silicon via MLX.
    • Updates the minimum required MLX version, likely to align with newer APIs or performance tweaks.

    That's the only change called out in this RC – so while it's small, it's a critical fix for anyone experimenting with multimodal models on Macs. 🧠📸

    If you're using Ollama + vision models on Apple Silicon, this one's worth testing! Let us know how it goes. 🧪✨

    #Ollama #LLM #AI #AppleSilicon #VisionModels

    🔗 View Release