Text Generation Webui – v4.5.2
Big news for all the local LLM enthusiasts! The project formerly known as text-generation-webui has officially been rebranded to TextGen! 🚀 Check out the new home at github.com/oobabooga/textgen.
The latest update (v4.5.2) is packed with stability improvements and critical fixes for those of us playing with the newest models:
- Gemma 4 Support: Major fixes for tool calling, handling special characters (like quotes and newlines), and improved rendering for thinking blocks in the UI.
- VRAM Optimization: A much-needed reduction in VRAM peak usage during prompt logprobs forward passes—perfect for squeezing more performance out of your GPU. 🧠
- UI Refinements: Added a sky-blue color for quoted text in light mode and improved logits display.
- Bug Squashing: Fixed various issues including chat scroll freezing, tool icon shrinking, and BOS/EOS token overwriting for GGUF models.
- Dependency Updates: Fresh updates for both `llama.cpp` and `ik_llama.cpp` (the fork with those sweet new quant types).
Pro-tip for the tinkerers: If you use the portable builds, updating is a breeze! Just extract the new version and move your existing `user_data` folder into it. You can even keep `user_data` one level up to share it between different installation folders. Happy generating! 🛠️
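The portable-update flow above can be sketched as a few shell commands. This is only an illustration: the folder names `textgen-old` and `textgen-new` are placeholders for your old and freshly extracted installs, not official paths, and `settings.yaml` here just stands in for whatever your `user_data` folder holds.

```shell
#!/bin/sh
# Sketch of the portable-build update flow (placeholder folder names).
set -e

# Simulate an existing install with saved settings,
# plus a freshly extracted new version.
mkdir -p textgen-old/user_data textgen-new
echo "my settings" > textgen-old/user_data/settings.yaml

# Carry your data over: move user_data from the old install into the new one.
mv textgen-old/user_data textgen-new/

# Alternative: keep user_data one level up so several installs can share it.
# mv textgen-new/user_data ./user_data

cat textgen-new/user_data/settings.yaml
```

After the move, the new install picks up your existing settings, and the old folder can be deleted.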
