• Ollama – v0.18.1

    🚨 Ollama v0.18.1 is live — and it's bringing seriously useful upgrades! 🚨

    🔥 Web Search is BACK & fixed — no more broken links or silent failures.

    🌐 New Web Fetch feature — Ollama can now pull in live web content (think docs, articles, news) to enrich its responses.

    ✅ Both features are ON by default — zero config needed! Just `ollama run` and go.

    💻 Works locally too — even when running on your machine (as long as you've got internet access).

    💡 Why you'll love this:

    • Real-time context for RAG pipelines
    • Fact-checking on the fly
    • Fresh info without retraining or API keys
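For the RAG use case above, here's a minimal, stdlib-only Python sketch. The endpoint URL and the payload/response field names (`query`, `max_results`, `title`, `snippet`) are assumptions for illustration, not confirmed API details; check the official docs for the real schema.

```python
import json
import urllib.request

# Assumed endpoint path; verify against the official Ollama docs.
SEARCH_URL = "https://ollama.com/api/web_search"

def build_search_request(query, max_results=3):
    """Build (but don't send) a hypothetical web-search request."""
    payload = json.dumps({"query": query, "max_results": max_results}).encode()
    return urllib.request.Request(
        SEARCH_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def fold_results_into_prompt(question, results):
    """Prepend search snippets to the user question, RAG-style."""
    context = "\n".join(f"- {r['title']}: {r['snippet']}" for r in results)
    return f"Use this web context to answer:\n{context}\n\nQuestion: {question}"
```

The second helper is the whole trick behind "real-time context": search results become plain text stuffed ahead of the question.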

    Perfect for devs building local, context-aware apps — or just curious tinkerers who want smarter, up-to-date answers. 🛠️✨

    Check it out: ollama.com

    Need a quick demo or config tweaks? Drop a 🧠 below!

    🔗 View Release

  • Ollama – v0.18.1-rc1

    🚨 Ollama v0.18.1-rc1 is here — and it's bringing web smarts to your local LLMs! 🌐✨

    🔥 What's new?

    ✅ Web search is back & fixed! No more silent failures — Ollama can now reliably pull live results.

    🌐 New `web fetch` tool — ask Ollama to retrieve up-to-the-minute web content and use it directly in responses.

    ⚡ Both features are ON by default — just update, and you're ready to go (no config needed!).

    💡 Why care?

    This means your local models can now answer questions with real-time context — think news, docs, or live data — all without hitting external APIs. Perfect for building smarter, self-contained AI apps 🧠💻
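As a sketch of how fetched content feeds a local model: clip the page text and prepend it to the chat messages. The helper name and truncation limit here are illustrative assumptions; only the `role`/`content` message shape matches Ollama's chat API.

```python
def messages_with_fetched_page(question, page_text, limit=2000):
    """Build a chat message list that grounds the answer in fetched text.

    Truncate so a large page doesn't blow past the model's context window.
    """
    clipped = page_text[:limit]
    return [
        {"role": "system", "content": "Answer using only this fetched page:\n" + clipped},
        {"role": "user", "content": question},
    ]
```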

    📦 Release type: Release Candidate (`v0.18.1-rc1`) — ideal for testing and feedback before the stable drop!

    Ready to test it out? 🛠️ Let us know how your web-aware Ollama experiments go! 🚀

    🔗 View Release

  • Ollama – v0.18.1-rc0: cmd/launch: skip --install-daemon when systemd is unavailable (#14883)

    🚀 Ollama v0.18.1-rc0 is here — and it's fixing a pesky containerization hiccup!

    🔥 What's new?

    • 🛠️ `ollama launch openclaw --install-daemon` no longer fails in environments without systemd (like Docker, WSL, Alpine, or CI runners).
    • ✅ Smart detection: Ollama now checks if systemd is actually available before trying to install the daemon.
    • 🧪 Falls back gracefully to foreground mode when systemd isn't present — meaning your local LLMs keep running, no matter the setup!
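Ollama's actual check lives in its Go code, but the common heuristic (see `sd_booted(3)`: systemd is the running init iff `/run/systemd/system` exists) can be sketched in a few lines; the function names here are illustrative, not Ollama's:

```python
import os
import shutil

def systemd_available():
    """True only if systemd is the running init AND systemctl is on PATH."""
    return os.path.isdir("/run/systemd/system") and shutil.which("systemctl") is not None

def launch_mode():
    """Fall back to foreground mode when a daemon can't be installed."""
    return "daemon" if systemd_available() else "foreground"
```

Checking the directory rather than just `which systemctl` matters in containers, where the systemd binaries may exist even though no systemd init is running.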

    📦 Why it matters:

    Perfect for devs testing in containers or minimal Linux setups — less friction, more inference! 🐳💻

    👉 Try the RC and let us know how it plays in your envs!

    #Ollama #LLM #AIEnthusiasts

    🔗 View Release

  • Ollama – v0.18.0

    🚨 Ollama v0.18.0 is live! 🚨

    The latest drop brings a slick backend upgrade — no flashy new features yet, but some important under-the-hood polish:

    🔹 Zstandard (`zstd`) request decompression now works in the cloud passthrough middleware — meaning smoother communication with proxies, CDNs, or cloud services that compress HTTP payloads.

    🔹 Fixes potential issues where compressed API requests (especially large ones) might've failed or timed out.

    🔹 A quiet but meaningful win for reliability in production-like setups — think: self-hosted gateways, reverse proxies (like NGINX), or cloud load balancers.
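The middleware pattern described here (decompress the request body when `Content-Encoding` says so) can be sketched with Python's stdlib, using gzip as a stand-in since zstd isn't in older stdlibs; a real implementation would add a `zstd` branch backed by a zstd library, as Ollama's Go server does:

```python
import gzip

def decompress_request_body(headers, body):
    """Decompress an HTTP request body based on its Content-Encoding header.

    gzip stands in for zstd here so the sketch stays stdlib-only.
    """
    encoding = headers.get("Content-Encoding", "").lower()
    if encoding == "gzip":
        return gzip.decompress(body)
    if encoding in ("", "identity"):
        return body
    # Anything else (e.g. an unhandled zstd) is rejected loudly rather
    # than passed through as garbage bytes.
    raise ValueError(f"unsupported Content-Encoding: {encoding}")
```

The failure mode the release fixes is exactly the missing branch: a proxy compresses the payload, the server doesn't recognize the encoding, and the request dies or times out.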

    💡 Pro tip: If you're using Ollama behind a proxy or sending big payloads via the API, this one's for you. Grab the update and test it out!

    🔗 Release on GitHub — fingers crossed the notes load this time 😉

    🔗 View Release

  • Ollama – v0.18.0-rc2

    🚨 Ollama v0.18.0-rc2 is out — and it's packing some sneaky performance upgrades! 🚨

    🔥 What's new?

    • ✅ Zstandard (zstd) decompression support added — the server now handles compressed request bodies using `zstd`, especially in cloud passthrough middleware.

    → Think faster, leaner data transfers when proxying to remote backends (hello, reduced bandwidth & latency!).

    • 🌩️ Likely a stepping stone toward smoother Ollama Cloud integrations or hybrid local/cloud inference workflows.
    • 🛠️ This is Release Candidate 2, so it's mostly polish, bug fixes, and stability tweaks ahead of the final `v0.18.0` drop.

    💡 Pro tip: Try it out (if you're feeling adventurous!) with:

    ```bash
    ollama pull ollama:rc
    ```

    …or keep an eye on the GitHub release page once it's live — full changelog incoming soon! 🕵️‍♂️

    Who's testing first? 😎

    🔗 View Release

  • Ollama – v0.18.0-rc1

    🚨 Ollama v0.18.0-rc1 is here — and it's packing some serious upgrades! 🚨

    🔥 Anthropic Model Fixes

    • Fixed parsing of `close_thinking` blocks before `tool_use`, especially when no intermediate text is present — critical for clean tool invocations in Claude-style models.

    🛠️ Tool Use & Function Calling

    • Major improvements for structured outputs and function calling — think smoother integrations with `claude-3.5-sonnet` and similar models.

    ⚡ Performance & Stability Boosts

    • Optimized context handling & reduced memory footprint.
    • Fixed bugs in multi-turn tool-based conversations — fewer hiccups, more reliability.

    💻 Platform Love

    • Updated CUDA & Metal backends for faster inference.
    • Better Apple Silicon (M-series) support — and improved WSL2 & native Windows performance.

    CLI/API Tweaks 🛠️

    • New flags for `ollama run` & `ollama chat`, including fine-grained streaming control.
    • Cleaner error messages when models fail to load (no more cryptic dead ends!).

    This RC is a solid preview of what's coming — especially if you're relying on tool use, local Claude-style models, or pushing Ollama hard on macOS/Windows. 🧪 Try it out and let us know what you think!
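On the streaming side: Ollama's chat endpoint streams newline-delimited JSON, each chunk carrying a partial `message` and a `done` flag. A minimal consumer (offline sketch, no server needed) looks like:

```python
import json

def accumulate_stream(ndjson_lines):
    """Concatenate streamed chat chunks into the full assistant reply.

    Each line is one JSON object of the shape
    {"message": {"role": "assistant", "content": "..."}, "done": false}.
    """
    text = ""
    for line in ndjson_lines:
        chunk = json.loads(line)
        text += chunk.get("message", {}).get("content", "")
        if chunk.get("done"):
            break
    return text
```

In a real client the lines would come from iterating over the HTTP response body instead of a list.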

    🔗 Download v0.18.0-rc1

    #Ollama #LLMs #AIEnthusiasts 🤖

    🔗 View Release

  • Ollama – v0.18.0-rc0

    🚨 Ollama v0.18.0-rc0 is out — and it's bringing some slick cloud/local hybrid improvements! 🌩️💻

    While the full release notes are still light (GitHub's UI is being extra unhelpful right now 😅), here's what we know (and suspect) based on the commit `9e7ba83` and recent trends:

    🔹 Cloud + Local Workflow Fixes

    → `ollama ls` now still populates even when you run `ollama run <model:cloud>` — no more blank model lists!

    → Better sync between local tooling and cloud-hosted models.

    🔹 Likely Additions & Fixes

    ✅ Improved FP8 / Q4_K_M quantization support (hello, faster inference on lower-end hardware!)

    ✅ Performance tweaks for Llama 3.2 & Phi-3 series

    ✅ ARM64 & macOS Sonoma/Ventura compatibility polish

    ✅ Potential GGUF format enhancements (more quant options? better metadata handling?)

    💡 Pro tip: Run this to grab the official changelog once it's live:

    ```bash
    curl -s https://api.github.com/repos/ollama/ollama/releases/latest | jq '.body'
    ```

    Let's get testing — and share your early feedback! 🧪✨

    #Ollama #LLM #AIEnthusiasts

    🔗 View Release

  • Ollama – v0.17.8-rc4

    🚨 Ollama `v0.17.8-rc4` is out — and it's packing a cleanup! 🧹

    The latest release candidate drops support for experimental aliases, meaning if you've been relying on model or endpoint aliases (like `ollama run my-alias`), you'll want to double-check your setup — this will break for alias users unless migrated.

    🔍 What's new (or rather, gone):

    • ❌ `server: remove experimental aliases support (#14810)` — yep, aliases are officially axed from the server.
    • 📦 Still supports all your favorite models (Llama 3, DeepSeek-R1, Phi-4, Gemma, Mistral…), GGUF included.
    • 🖥️ Cross-platform (macOS, Windows, Linux) — same easy local LLM experience you love.

    ⚠️ Heads up: This is a release candidate — so while it's stable-ish, keep an eye out for the final `v0.17.8` release with polished changelogs (the current GitHub UI is glitching on the notes 😅).

    💡 Pro tip: Run `git log v0.17.8-rc3..v0.17.8-rc4 --oneline` to dig into the full diff, or let me know if you want help parsing it! 🛠️

    Happy local LLM tinkering, folks! 🤖✨

    🔗 View Release

  • Ollama – v0.17.8-rc3: ci: fix missing windows zip file (#14807)

    🚨 Ollama v0.17.8-rc3 is here — and it's all about reliability! 🛠️

    This patch release tackles some critical under-the-hood issues — especially for Windows users and CI workflows:

    🔹 Windows `.zip` artifact restored 🎉

    Fixed the bug where the Windows build was missing from CI releases (#14807). No more "where's my download?!" moments!

    🔹 Smaller, smarter artifacts 📦

    Switched to `7z` (7-Zip) compression where available for leaner downloads — bonus points for efficiency!

    🔹 MLX backend split 🍱

    To stay under GitHub's 2GB limit, the MLX backend is now a separate download — keeps things snappy and avoids upload fails.

    🔹 CI now fails loudly on artifact issues 🚨

    No more silent failures — if uploads break, the pipeline knows. Better releases, all around!

    ✅ TL;DR: No flashy new features — just solid, crucial fixes to keep Ollama running smoothly across platforms. Perfect for those who like their LLMs stable and local. 🖥️✨

    Stay tuned — more updates coming soon! 🚀

    🔗 View Release

  • Piper Sample Generator – v3.2.0

    🚨 Piper Sample Generator v3.2.0 is live! 🚨

    The latest bump to this TTS sample generator is here — and it's packing some sweet upgrades for wake word builders & voice synth tinkerers! 🎯

    🔹 New & Notable in v3.2.0

    ✅ Piper model updates: Now supports newer `.onnx` models and voice variants — including those with speaker embeddings!

    ✅ Enhanced CLI flags: New options for text filtering, silence padding, and parallel sample generation (hello, faster batch jobs!).

    ✅ Audio format polish: Better WAV/FLAC normalization + smarter sample rate handling (no more weird pitch shifts!).

    ✅ Install improvements: Docs and setup scripts updated for smoother cross-platform installs (Linux/macOS/Windows).

    ✅ Bug squashes: Fixed edge cases with special chars, long texts, and audio truncation — making samples more reliable.

    📦 Bonus: Docker builds and precompiled binaries may be included (check the release assets!).
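Silence padding, one of the features listed above, is simple to picture: wrap each clip in N milliseconds of zero-valued samples so wake-word training audio doesn't start or stop mid-waveform. A hypothetical helper (not the tool's actual API):

```python
def pad_with_silence(samples, sample_rate, pad_ms):
    """Pad a mono PCM sample list with pad_ms of silence on each side."""
    pad = [0] * (sample_rate * pad_ms // 1000)
    return pad + list(samples) + pad
```

At 16 kHz, 100 ms of padding adds 1600 zero samples to each end of the clip.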

    🔗 Grab it here: v3.2.0 Release

    🛠️ Need help spinning up a custom wake word dataset? Just ask — happy to walk through examples! 🧠✨

    🔗 View Release