Ollama – v0.22.0
Ollama v0.22.0 is officially here!
If you’ve been looking for a way to run heavy-hitting LLMs like Llama 3, DeepSeek-R1, or Mistral directly on your own hardware, without round trips to the cloud, Ollama remains the gold standard for local execution. It handles the heavy lifting of model management and exposes a clean REST API for your own dev projects.
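If you haven’t touched the API yet, a full round trip is only a few lines. Here’s a minimal sketch against the documented /api/generate endpoint; it assumes the server is running on the default port (11434) and that a model tagged llama3 has already been pulled:

```python
import json
import urllib.request

# Ollama's local server listens on port 11434 by default.
OLLAMA_URL = "http://localhost:11434/api/generate"

def generate(model: str, prompt: str) -> str:
    """Send a single non-streaming generation request to the local Ollama server."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        # With stream disabled, the full completion comes back in one JSON object.
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(generate("llama3", "Explain local inference in one sentence."))
```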
What's new in this update:
- Enhanced Model Support: The library continues to expand, making it even easier to pull and run the latest open-source weights with zero configuration.
- Performance Optimizations: This release includes under-the-hood tweaks to the inference engine to ensure smoother token generation on both macOS and Linux.
- Improved CLI Workflow: Smoother downloading of, and switching between, different model versions on the command line. The CLI drives the same local server as the REST API, so all of this is scriptable too (see the sketch after this list).
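As a taste of that scriptability, here’s a rough sketch that pulls a model and then lists what’s installed, using the /api/pull and /api/tags endpoints. The field names follow the published API reference, but treat this as illustrative and double-check against the version you’re running:

```python
import json
import urllib.request

BASE = "http://localhost:11434"

def pull(model: str) -> None:
    """Ask the local server to download a model, printing status lines as they stream in."""
    payload = json.dumps({"model": model}).encode()
    req = urllib.request.Request(
        f"{BASE}/api/pull", data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        for line in resp:  # the pull endpoint streams JSON status objects, one per line
            if not line.strip():
                continue
            print(json.loads(line).get("status", ""))

def list_models() -> list[str]:
    """Return the names of all models currently installed locally."""
    with urllib.request.urlopen(f"{BASE}/api/tags") as resp:
        return [m["name"] for m in json.loads(resp.read())["models"]]

if __name__ == "__main__":
    pull("mistral")
    print(list_models())
```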
Whether you’re building a local RAG pipeline or just want a private chatbot that works offline, this update keeps your local ecosystem running at peak performance!
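On the RAG point: a minimal retrieval loop needs nothing beyond the embeddings and generation endpoints. The sketch below assumes you’ve pulled an embedding model (nomic-embed-text is used here as an example) alongside a chat model; it’s a toy illustration of the pattern, not a production retriever:

```python
import json
import math
import urllib.request

BASE = "http://localhost:11434"

def _post(path: str, body: dict) -> dict:
    """POST a JSON body to the local Ollama server and decode the JSON reply."""
    req = urllib.request.Request(
        f"{BASE}{path}",
        data=json.dumps(body).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

def embed(model: str, text: str) -> list[float]:
    # /api/embeddings returns a single vector for the given prompt.
    return _post("/api/embeddings", {"model": model, "prompt": text})["embedding"]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

# Tiny in-memory "knowledge base": embed each document once, search per query.
docs = [
    "Ollama runs large language models locally.",
    "The capital of France is Paris.",
]
doc_vecs = [embed("nomic-embed-text", d) for d in docs]

def answer(question: str) -> str:
    q_vec = embed("nomic-embed-text", question)
    # Retrieve the single most similar document and stuff it into the prompt.
    best = max(range(len(docs)), key=lambda i: cosine(q_vec, doc_vecs[i]))
    prompt = f"Answer using only this context:\n{docs[best]}\n\nQuestion: {question}"
    return _post("/api/generate", {"model": "llama3", "prompt": prompt, "stream": False})["response"]

if __name__ == "__main__":
    print(answer("Where do Ollama models run?"))
```

Everything above stays on localhost, which is the whole appeal: the retrieval index, the embeddings, and the generation never leave your machine.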
