Ollama – v0.14.3-rc3: model: add lfm2 architecture and LFM2.5-1.2B-Thinking support (#13792)

Big news for AI tinkerers! 🚀

Ollama v0.14.3-rc3 just dropped with native support for the brand-new LFM2 architecture and its first model: LFM2.5-1.2B-Thinking — a lean 1.2B-parameter model built for reasoning, not just generation.

🧠 Think step-by-step problem solving, code reasoning, and complex QA — all running locally with zero cloud latency.

Pull it in seconds:

`ollama pull lfm2.5:1.2b-thinking`

No more waiting for APIs — now you’ve got a tiny, thinking LLM on your machine. Perfect for dev experiments, edge deployments, or just geeking out in privacy.
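Once the model is pulled, you can talk to it from code through Ollama's local REST API. Here's a minimal Python sketch against the `/api/chat` endpoint, assuming the server is running on its default port (11434) and that the `think` flag — which Ollama exposes for reasoning models — is supported for this one:

```python
import json
import urllib.request

# Ollama's default local chat endpoint
OLLAMA_URL = "http://localhost:11434/api/chat"

def build_request(prompt, model="lfm2.5:1.2b-thinking"):
    """Build a non-streaming chat request payload for Ollama's REST API."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
        # Ask the model to emit its reasoning trace; assumes this model
        # supports Ollama's `think` option.
        "think": True,
    }

def ask(prompt):
    """Send the request to a locally running Ollama server, return the reply text."""
    data = json.dumps(build_request(prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        body = json.loads(resp.read())
    # The reasoning trace, when present, lives in body["message"]["thinking"]
    return body["message"]["content"]

if __name__ == "__main__":
    print(ask("What is 17 * 24? Think step by step."))
```

Everything stays on your machine: the request never leaves localhost, which is the whole point of running a small thinking model locally.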

#Ollama #LLMs #LocalAI #LFM2
