• Ollama – v0.20.8-rc0: Gemma4 on MLX (#15244)

    Ollama v0.20.8-rc0 just dropped an update! 🚀

    If you’re running local LLMs, especially on Apple Silicon, this release is packed with optimizations to make your models run even smoother.

    What’s new in this release:

    • Gemma 4 Support via MLX: You can now run the Gemma 4 model using the MLX framework (text-only runtime). This is a massive win for Mac users looking to leverage highly optimized performance on Apple hardware! 🍎
    • Enhanced Prefill Speed: The team implemented two clever fixes to accelerate the “prefill” stage (how the model processes your initial prompt) for Gemma 4’s architecture:
    • Mask Memoization: The sliding-window prefill mask is now memoized across layers, cutting out redundant calculations.
    • Efficient Softmax: The Router forward pass has been streamlined to perform Softmax only over the specifically selected experts, making the routing process much leaner and faster.
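    In spirit, the two fixes look something like the hedged Python sketch below; the function names and data layout are illustrative assumptions, not Ollama’s actual (Go) implementation.

```python
import math
from functools import lru_cache

# Illustrative sketch only, not Ollama's real code (Ollama is written in Go).
# Fix 1: memoize the sliding-window prefill mask so every layer with the
# same (seq_len, window) reuses one mask instead of rebuilding it.
@lru_cache(maxsize=None)
def sliding_window_mask(seq_len: int, window: int):
    # token i may attend to token j when i - window < j <= i
    return tuple(
        tuple(0.0 if (i - window < j <= i) else float("-inf")
              for j in range(seq_len))
        for i in range(seq_len)
    )

# Fix 2: normalize router weights only over the k selected experts,
# instead of running softmax across every expert in the layer.
def router_topk_softmax(logits, k):
    topk = sorted(range(len(logits)), key=lambda e: logits[e])[-k:]
    m = max(logits[e] for e in topk)               # numerical stability
    exps = [math.exp(logits[e] - m) for e in topk]
    total = sum(exps)
    return topk, [w / total for w in exps]
```

    Because `lru_cache` hands back the same object for repeated `(seq_len, window)` arguments, every layer that shares the window size pays the mask-construction cost exactly once.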

    If you’re tinkering with local AI on a Mac, grab this update to get that extra bit of snappiness in your workflow! 🛠️

    🔗 View Release

  • Ollama – v0.20.7-rc1: Merge pull request #15561 from ollama/drifkin/backport

    Ollama – v0.20.7-rc1 Update 🚀

    Attention all local LLM tinkerers! A new release candidate for Ollama has just landed, specifically optimized for those of you running Google’s latest open models.

    What’s new in this release:

    • Gemma 4 Renderer Enhancements: This update includes critical backported changes specifically for the Gemma 4 renderer. 🛠️
    • Improved Stability & Performance: These adjustments refine how prompt rendering and model assets are handled, ensuring smoother performance when interacting with these specific weights.

    If you’ve been experimenting with the Gemma family of models on your local machine, this targeted update is a must-have to ensure maximum stability and efficiency! 💻✨

    🔗 View Release

  • Ollama – v0.20.7-rc0: gemma4: add nothink renderer tests (#15554)

    Ollama Update: v0.20.7-rc0 🚀

    If you’re running local LLMs, you know Ollama is the go-to for getting models up and running with zero friction. This latest release candidate focuses on stability and testing for the Gemma 4 integration.

    What’s new in this release:

    • Gemma 4 Enhancements: The update specifically includes new tests for the “nothink” renderer. For those of you building interfaces around Gemma 4, this is a big deal: it helps manage how reasoning processes or thought traces are displayed (or hidden) during model interaction.
    • Improved Reliability: By adding these specific renderer tests, the team is ensuring that model outputs remain consistent and visually correct across different interface implementations.
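    To picture what such a test guards, here is a minimal, hypothetical sketch: with thinking disabled, any thought trace is stripped before the reply reaches the user. The `<think>` tag format is an assumption for illustration, not necessarily Gemma 4’s actual trace markup.

```python
import re

# Hypothetical illustration of a "nothink" rendering path. The <think>
# tags are an assumed format, not necessarily Gemma 4's real markup.
THINK_SPAN = re.compile(r"<think>.*?</think>\s*", flags=re.DOTALL)

def render_nothink(reply: str) -> str:
    # remove every thought-trace span, then tidy surrounding whitespace
    return THINK_SPAN.sub("", reply).strip()
```

    A renderer test would then assert that traces never leak through and that trace-free replies pass unchanged.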

    Keep tinkering! 🛠️

    🔗 View Release

  • ComfyUI – v0.19.0

    New update alert for ComfyUI! 🚀

    If you’re building complex, node-based pipelines for Stable Diffusion and media generation, it’s time to check out the latest release. This powerhouse engine is getting even more refined for your creative workflows.

    What’s new in v0.19.0:

    • Node Optimization: Refined execution logic designed to make those massive, multi-layered workflows run much smoother and more efficiently.
    • Backend Stability: Key improvements to how the server handles heavy model loading and memory management, perfect for when you’re pushing your hardware to the limit.
    • Compatibility Updates: Essential syncs to ensure everything stays compatible with the latest underlying AI libraries and dependencies.

    Time to pull those updates and keep those custom nodes running strong! 🛠️

    🔗 View Release

  • Tater – Tater v70

    🚀 Tater v70: “Direct Line” is officially live! 🎤

    Get ready, tinkerers! The latest update for Tater, your local-native AI assistant powered by the Hydra planning engine, is a massive leap forward for voice interaction and hardware integration. This release moves away from complex middleman pipelines, allowing Tater to communicate directly with your hardware for a much more seamless experience.

    What’s new in this release:

    • 🔌 Direct ESPHome Connection: You can now connect ESPHome voice devices straight to Tater. The best part? No Home Assistant is required! Just connect and start talking; Tater handles the heavy lifting without needing extra setup hoops or pipeline handoffs.
    • 🔊 Flexible Speaker Routing: Tailor your soundscape by assigning specific speakers to different Tater assistants. This allows you to manage separate rooms with separate voices and outputs.
    • 🗣️ A Fully Customizable Voice Stack: You have total control over how Tater listens and speaks. Mix and match your favorite tools:

    • Listening: Choose between the high-accuracy Faster-Whisper or the lightweight, responsive Vosk*.
    • Detection: Use Silero VAD* for cleaner audio detection.
    • Talking: Pick from Piper (reliable), Kokoro (smooth and natural), or Pocket TTS* (lightweight).
    • External Integration: Plug into the Wyoming* ecosystem to run local or remote setups.

    • 💬 Natural, Flowing Conversations: Moving beyond simple “command and response,” Tater can now ask follow-up questions. The conversation stays active without needing a constant re-wake, making interactions feel much more alive.
    • 🧩 Expanded Device Awareness: When Tater connects to a device, he sees more than just audio. He can now monitor device states, sensors, and controls, laying the groundwork for advanced automation.
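    The speaker-routing idea boils down to a per-assistant output map; here is a toy sketch where every name and default is made up for illustration, not Tater’s real configuration.

```python
# Made-up illustration of per-assistant speaker routing: each assistant
# name maps to the output device its replies are played on. Device names
# are invented, not Tater's actual config keys.
SPEAKER_ROUTES = {
    "kitchen": "esphome-speaker-kitchen",
    "office": "esphome-speaker-office",
}

def route_speech(assistant: str, default: str = "esphome-speaker-living-room") -> str:
    # unknown assistants fall back to a default speaker
    return SPEAKER_ROUTES.get(assistant, default)
```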

    ๐Ÿ› ๏ธ For the Devs & Hardware Hackers:

    This release is a game-changer for anyone running Voice PE, satellite, or ESPHome devices. Because you no longer need Home Assistant as a middleman, even the most lightweight, standalone setups are now fair game for Tater’s intelligence.

    Go ahead… say something! 🥔🚀

    🔗 View Release

  • Ollama – v0.20.6

    Ollama just dropped a quick patch, v0.20.6! 🛠️

    If you’re running local LLMs, this tiny update refines how the Gemma implementation handles formatting: it is now a bit more relaxed regarding whitespace before bare keys. This should help prevent unexpected parsing errors when your prompts or configurations involve tricky spacing.

    What’s new in v0.20.6:

    • Gemma Refinement: Reduced strictness for whitespace preceding bare keys to ensure smoother model performance and configuration loading. 📉✨
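    As a rough illustration of “less strict about whitespace,” a lenient key-value reader might simply trim each line before splitting; this is a made-up example, not Ollama’s actual parser.

```python
# Made-up illustration, not Ollama's actual parser: a lenient reader that
# tolerates leading whitespace before bare keys, so "  temperature 0.7"
# parses the same as "temperature 0.7".
def parse_bare_keys(text: str) -> dict:
    params = {}
    for line in text.splitlines():
        line = line.strip()  # the leniency: drop surrounding whitespace first
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        key, _, value = line.partition(" ")
        params[key] = value.strip()
    return params
```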

    🔗 View Release

  • Ollama – v0.20.6-rc1

    Ollama just pushed a new release candidate, v0.20.6-rc1, specifically focused on smoothing out the local model loading experience! 🛠️

    If you’re running models locally, this update includes a crucial fix for the `gemma` model configuration. The parser is now less strict regarding whitespace before bare keys, which helps prevent those annoying parsing errors and ensures much more reliable model loading. 📉

    It’s a small but mighty tweak to keep your local inference pipelines running smoothly without unexpected crashes! Happy tinkering! 🚀

    ๐Ÿ”— View Release

  • Ollama – v0.20.6-rc0: gemma4: update renderer to match new jinja template (#15490)

    New update for the Ollama crew! 🛠️

    If you’ve been running Google’s Gemma 4 models locally, there is a fresh release candidate (v0.20.6-rc0) ready for testing. This update focuses heavily on keeping Ollama in perfect sync with the latest changes from Google to ensure your local inference stays rock solid.

    What’s new in this release:

    • Gemma 4 Template Parity: The renderer has been updated to match Google’s new Jinja template. This ensures that how your prompts are structured matches exactly what the model expects, preventing weird formatting issues.
    • Parser Adjustments: Since the upstream parsing logic changed slightly, the Ollama parser has been tweaked to maintain compatibility and prevent broken inputs during model interaction.
    • Improved Type Handling for Tool-Calling:
    • Added special handling for simple `AnyOf` structures by treating them as type unions.
    • Fixed edge cases specifically around type unions to make tool-calling much more robust and reliable.
    • Better Tool Result Logic:
    • The parser now prefers “empty” over “None” for certain results, which is crucial for handling legitimate empty tool calls correctly without crashing the logic.
    • Added extra care when processing tool results that might have missing IDs to prevent errors in complex workflows.
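    The two type-and-result ideas can be sketched in a few lines of hedged Python; the helper names are illustrative, not Ollama’s API.

```python
# Illustrative sketches only, not Ollama's actual code.

# Idea 1: treat a simple JSON-Schema "anyOf" as a type union.
def schema_type(schema: dict) -> str:
    if "anyOf" in schema:
        # e.g. anyOf [string, number] -> the union "number|string"
        return "|".join(sorted(schema_type(s) for s in schema["anyOf"]))
    return schema.get("type", "any")

# Idea 2: prefer "empty" over "None" so a tool that legitimately returned
# nothing stays distinguishable from a missing result downstream.
def normalize_tool_result(result):
    return "" if result is None else result
```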

    It’s a great update for anyone doing heavy lifting with tool-calling and complex prompt engineering on Gemma 4! 🚀

    🔗 View Release

  • Perplexica – v1.12.2

    🚀 Perplexica Update: v1.12.2 is Live!

    The open-source alternative to Perplexity AI just leveled up! If you’ve been looking for a way to run a powerful, cited search engine using local LLMs like Llama, Mistral, or DeepSeek, this update is a massive win for the dev community.

    What’s New in v1.12.2:

    • 🧠 Enhanced Deep Research Mode: The research pipeline has been transformed into an aggressive, iterative “Reason-Search-Scrape-Extract-Repeat” loop. It doesn’t just look at links; it actively hunts for top-tier content to ensure much deeper insights.
    • 📦 Dynamic Context Management: To prevent the system from choking on massive amounts of scraped data, information is now processed in optimized, dynamic chunks, keeping your context window clean and efficient.
    • 🎯 Smart Result Filtering: New embedding integrations allow the engine to filter search results effectively, boosting relevance while preventing context overflow.
    • 🌐 Improved Web Scraping: A new Chromium-based scraper has been implemented, making it much more reliable when navigating modern, complex web pages.
    • ⚡ Optimized Search Execution: The `executeSearch` function has been completely rebuilt for better performance and speed.
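    The loop described above can be sketched in hedged Python; every callable here is a placeholder you would supply, not one of Perplexica’s actual (TypeScript) functions.

```python
# Illustrative sketch of a "Reason-Search-Scrape-Extract-Repeat" loop with
# dynamic chunking. All callables are caller-supplied placeholders; the
# real Perplexica pipeline is TypeScript and far more involved.
def deep_research(question, reason, search, scrape, extract,
                  max_rounds=3, chunk_size=2000):
    notes = []
    for _ in range(max_rounds):
        query = reason(question, notes)   # Reason: decide what to look for next
        if query is None:                 # the reasoner may stop early
            break
        for url in search(query):         # Search
            page = scrape(url)            # Scrape
            # dynamic chunking: never hand the model one giant page
            for start in range(0, len(page), chunk_size):
                notes.append(extract(page[start:start + chunk_size]))  # Extract
    return notes                          # ...and Repeat, via the loop
```

    Chunking at the loop's innermost level is what keeps the context window bounded no matter how large a scraped page turns out to be.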

    Stability & Bug Fixes:

    • ๐Ÿ›ก๏ธ Pipeline Resilience: Individual errors in widgets no longer crash your entire research pipeline.
    • โฑ๏ธ Search Reliability: Added validation and timeouts to prevent hung search requests.
    • ๐Ÿ“ Backend Accuracy: Integrated `serverUtils` and updated the reverse geolocation API for much higher accuracy during location-based tasks.
    • ๐Ÿ› ๏ธ Workflow Stability: Improved error handling for file uploads and resolved several build-time dependency errors.

    Time to pull that Docker image and test out the new Deep Research capabilities! ๐Ÿ› ๏ธโœจ

    🔗 View Release

  • Ollama – v0.20.5

    Ollama just dropped a fresh update! 🚀

    If you haven’t been playing with it yet, Ollama is an incredible tool for running large language models (LLMs) locally on your machine. It simplifies the entire process of downloading, managing, and interacting with powerful models like Llama 3, DeepSeek-R1, or Mistral without needing a heavy cloud setup.

    What’s new in v0.20.5:

    • Channel Update: This release updates the `openclaw` channel messaging system.
    • Refined Communication: It improves how certain messages are handled within that channel, ensuring smoother interactions during model usage.

    It’s a small but precise tweak to keep things running smoothly for all you local-LLM enthusiasts! 🛠️

    🔗 View Release