• Ollama – v0.30.0-rc7

    Ollama just dropped v0.30.0-rc7, and it looks like the team is fine-tuning things for an even smoother local LLM experience! 🛠️

    If you’re looking to run heavyweights like Llama 3, DeepSeek-R1, or Mistral directly on your hardware without a massive cloud budget, this is the tool you need in your stack. This latest release candidate is all about stability and polishing the engine before the official stable rollout.

    Here’s what’s new in this update:

    • OpenMP Optimization: This build disables OpenMP. That’s a big deal if you’ve been running into threading conflicts or stability issues during model execution, since it prevents clashes with other parallel processing libraries on your machine.
    • Final Testing Phase: As an `rc7` build, this version is in the home stretch of bug-squashing. It’s the perfect time to test it out and see if these tweaks resolve any crashes you’ve been seeing during long inference sessions. 🚀

    🔗 View Release

  • Ollama – v0.30.0-rc6

    Ollama v0.30.0-rc6 🛠️

    If you’re running local LLMs, you know Ollama is the go-to for getting models like Llama 3 and DeepSeek-R1 up and running with zero friction. This latest release candidate is a targeted update focused on stability for our Windows users!

    What’s new:

    • Windows Fix: The primary update in this release addresses an issue with dependency gathering on Windows systems. 🪟

    This is a great little patch to keep your local inference engine running smoothly without those pesky missing dependency errors during setup or updates. Keep tinkering! 🚀

    🔗 View Release

  • Ollama – v0.30.0-rc5

    Ollama just dropped v0.30.0-rc5, and it’s looking like a vital stability patch for our local LLM setups! 🛠️

    If you haven’t tried Ollama yet, it is the ultimate toolkit for running powerful models like Llama 3, DeepSeek-R1, and Mistral directly on your own hardware. It handles all the heavy lifting of downloading and configuring models so you can focus on building and experimenting.

    What’s new in this release:

    • Windows Stability Fix: This update specifically targets Windows dependency issues. If you’ve been hitting roadblocks or runtime errors while trying to get things running on a PC, this patch is designed to smooth out those installation hiccups. 🪟

    As a release candidate, it’s a focused effort to make the Windows experience as seamless as possible!

    🔗 View Release

  • Ollama – v0.30.0-rc4: ci: windows mlx tuning

    Ollama v0.30.0-rc4 🦙

    If you’re looking to run large language models locally with ease, Ollama remains the gold standard for managing and interacting with LLMs on your own machine. This latest release candidate brings some much-needed optimization and fixes to the build process!

    What’s new in this release:

    • Windows MLX Tuning: Significant updates to the CI pipeline specifically focused on tuning MLX performance for Windows environments. If you’re running locally on Windows, expect smoother execution! 🛠️
    • Build Optimization: The team has shortened the build’s “long tail,” making the development and deployment cycle much snappier.
    • Installer Fix: A crucial fix to bring `OllamaSetup.exe` back under the 2GB limit. This ensures smoother downloads and prevents those pesky installation hurdles caused by file size limits.

    Great news for the Windows crowd looking to squeeze more performance out of their local setups! 🚀

    🔗 View Release

  • Ollama – v0.30.0-rc3

    Ollama just dropped v0.30.0-rc3, and it looks like the team is hard at work smoothing out the edges for Windows users! 🛠️

    If you haven’t tried Ollama yet, it’s the ultimate framework for running powerful LLMs like Llama 3, DeepSeek-R1, and Mistral locally on your own machine. It’s a total game-changer for privacy-focused devs and anyone wanting to experiment with AI without worrying about API costs or limits.

    What’s new in this release candidate:

    • Windows ROCm Fix: The big highlight here is a specific fix for the Windows ROCm build. This is huge news for anyone trying to leverage AMD GPUs on Windows to accelerate their local model inference! 🚀
    • CI Improvements: The update includes much-needed continuous integration (CI) fixes to ensure more stable, reliable builds moving forward.

    This is a targeted release focused on stability and hardware compatibility, making sure your local AI setup stays buttery smooth!

    🔗 View Release

  • Ollama – v0.30.0-rc1

    Ollama v0.30.0-rc1 🦬

    If you haven’t jumped on the Ollama train yet, now is the time! It’s the ultimate go-to tool for running powerful large language models like Llama 3, DeepSeek-R1, and Mistral locally on your machine with zero friction. It handles all the heavy lifting of model management and serving so you can focus on building cool stuff.

    This latest release candidate (rc1) is a focused stability update:

    • Windows MLX Build Fix: The team has pushed a fix specifically for the Windows MLX build process. If you’ve been experimenting with MLX-related workflows on Windows, this should smooth out those compilation hiccups! 🛠️

    It looks like a targeted patch to keep your local LLM engine running buttery smooth across different environments. Keep an eye out for more feature-heavy updates in the upcoming full release!

    🔗 View Release

  • Ollama – v0.30.0-rc0

    Ollama v0.30.0-rc0 is here! 🚀

    If you’ve been looking for a way to run heavy-hitting models like Llama 3, DeepSeek-R1, or Mistral locally without the headache of complex configurations, Ollama is your best friend. It handles all the heavy lifting of downloading and setting up LLMs right on your machine.

    This latest release candidate brings some exciting refinements to the ecosystem:

    • Enhanced Model Management: Improvements to how models are pulled and managed via the CLI, making your local library more stable. 🧠
    • Performance Optimizations: Under-the-hood tweaks aimed at smoother inference speeds when running quantized models on macOS, Windows, and Linux. ⚡
    • API Reliability: Refinements to the REST API to ensure smoother integration when you’re building your own AI-powered apps or agents (quick sketch below). 🛠️
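
    If you build against the REST API, a quick smoke test is easy to script. Here’s a minimal Python sketch against the standard local endpoint (`http://localhost:11434/api/generate`); the `llama3` tag is just an example and assumes you’ve already pulled that model:

    ```python
    # Minimal smoke test against the local Ollama REST API (assumes the server
    # is running on the default port and the "llama3" tag has been pulled).
    import json
    import urllib.request

    payload = json.dumps({
        "model": "llama3",
        "prompt": "Explain speculative decoding in one sentence.",
        "stream": False,  # ask for a single JSON response instead of a stream
    }).encode()

    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read())["response"])
    ```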

    Keep an eye on this release candidate as it paves the way for even more robust local LLM deployment! Happy tinkering! 🥔✨

    🔗 View Release

  • Ollama – v0.23.1: mlx: Gemma4 MTP speculative decoding (#15980)

    Ollama v0.23.1 is officially live, and it’s bringing some serious speed boosts for Apple Silicon fans! 🚀 If you’ve been looking to squeeze more tokens per second out of your local LLMs, this update is a massive win for performance.

    The star of the show is support for MTP (Multi-Token Prediction) speculative decoding specifically for the Gemma 4 model family using MLX. This means much faster inference speeds on Mac hardware!

    Here’s the breakdown of what’s new:

    • Gemma 4 Optimization: Full support for MTP speculative decoding is now active, significantly boosting generation speed.
    • New `DRAFT` Command: You can now use a new `DRAFT` instruction in your `Modelfile` to specify exactly which draft model to use for speculation.
    • Streamlined Model Creation: It’s now easier than ever to import `safetensors`-based Gemma 4 draft models directly via the `ollama create` command.
    • New Quantization Flag: The `ollama create` command now includes a `--quantize-draft` flag, making it simple to manage lightweight draft models (see the sketch after this list).
    • Under-the-Hood Upgrades: Includes updated rotating cache support to handle MTP correctly and enhanced sampling support for better draft model token prediction.
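
    To make the draft-model workflow concrete, here’s a hedged Python sketch of how the pieces could fit together. The `DRAFT` instruction and `--quantize-draft` flag come from the notes above, but the model tags (`gemma4`, `gemma4-draft`, `gemma4-fast`) and the exact flag syntax are placeholders rather than verified names, so check `ollama create --help` before relying on it:

    ```python
    # Hedged sketch: wire up MTP speculative decoding with a draft model, per
    # the new DRAFT Modelfile instruction and --quantize-draft flag described
    # above. Model tags below are placeholders, not verified library names.
    import pathlib
    import subprocess

    modelfile = (
        "FROM gemma4\n"
        "# New in v0.23.1: point speculation at a lightweight draft model\n"
        "DRAFT gemma4-draft\n"
    )
    pathlib.Path("Modelfile").write_text(modelfile)

    # --quantize-draft (from the release notes) keeps the draft model small;
    # exact usage may differ, so consult `ollama create --help`.
    subprocess.run(
        ["ollama", "create", "gemma4-fast", "-f", "Modelfile", "--quantize-draft"],
        check=True,
    )
    ```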

    If you’re running on a Mac, definitely grab this update and start experimenting with those lightning-fast generations! 🛠️✨

    🔗 View Release

  • Ollama – v0.23.1-rc0

    Ollama v0.23.1-rc0 🛠️

    If you’re running local LLMs, you know Ollama is the gold standard for getting models like Llama 3 and DeepSeek-R1 up and running with zero friction. This latest release candidate is a targeted stability update to keep your local environment running smoothly!

    What’s new:

    • CI Pipeline Fixes: The main focus of this release is addressing issues within the Continuous Integration (CI) pipeline, specifically regarding MLXAssets2.
    • Improved Mac Reliability: This patch ensures that the build process for Apple Silicon (MLX) assets remains stable. If you’re running optimized models on Mac hardware, this keeps those gears turning without a hitch! ⚙️

    It’s a small but important maintenance patch to ensure the ecosystem stays robust and reliable for all of us local-first tinkerers.

    🔗 View Release

  • Heretic – v1.3.0

    Heretic v1.3.0 is live! 🛠️

    If you’ve been looking for a way to strip “safety alignment” from your favorite LLMs without the headache of manual fine-tuning, this is the tool you need. Heretic uses directional ablation (abliteration) to identify and neutralize refusal mechanisms by analyzing residual activations. The result? A decensored model that keeps its original intelligence intact, with no PhD or massive labeled datasets required.
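
    For intuition, here’s a toy numpy sketch of the directional-ablation idea. This is not Heretic’s actual code; the real pipeline handles per-layer directions, component selection, and parameter optimization automatically:

    ```python
    # Toy illustration of directional ablation ("abliteration"): estimate a
    # refusal direction from residual activations, then project it out of a
    # weight matrix that writes into the residual stream.
    import numpy as np

    def refusal_direction(refused_acts, complied_acts):
        """Normalized difference of mean activations on refused vs. complied prompts."""
        d = refused_acts.mean(axis=0) - complied_acts.mean(axis=0)
        return d / np.linalg.norm(d)

    def ablate(weight, direction):
        """Return W' = (I - d d^T) W, so the layer's output has no component along d."""
        return weight - np.outer(direction, direction @ weight)

    rng = np.random.default_rng(0)
    refused = rng.normal(size=(32, 16))   # stand-in residual activations
    complied = rng.normal(size=(32, 16))
    W = rng.normal(size=(16, 16))         # stand-in output projection

    d = refusal_direction(refused, complied)
    W_ablated = ablate(W, d)
    print(np.allclose(d @ W_ablated, 0))  # True: the refusal direction is gone
    ```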

    What’s new in v1.3.0:

    Expanded Model Support & Features

    • New Models: You can now run ablation on the latest Qwen 3.5 and Gemma 4 models! 🤖
    • Integrated Benchmarking: A brand-new system is now built-in to help you measure refusal rates and model fidelity directly.
    • Auto Model Cards: If your local models have an existing README, Heretic can now automatically generate model cards for you.
    • Smarter Responses: Improved automatic response prefix determination via a new, fully configurable two-step process.

    Performance & Optimization

    • VRAM Efficiency: Significant reductions in peak VRAM usage and fixed reporting accuracy for multi-GPU setups—perfect for squeezing more out of your hardware! 🧠
    • Reproducibility: Much more robust reproducible runs, making it a breeze to debug or compare different ablation results.
    • Faster Startup: Improved startup speed when using the `--help` flag.

    Bug Fixes & Infrastructure

    • Fixed a division-by-zero error in the evaluator.
    • Resolved issues with displaying all abliterable components across layers.
    • Corrected `max_memory` setting examples and various minor infrastructure improvements.

    Whether you’re running an 8B model on an RTX 3090 (which takes about 45 minutes!) or experimenting with massive MoE architectures, this update makes the workflow smoother and more precise than ever. Happy tinkering! 🚀

    🔗 View Release