Text Generation Webui – v4.8

Text-generation-webui just got a major facelift! 🚀 If you’ve been looking for that “AUTOMATIC1111” experience for your local LLMs, v4.8 is a massive leap toward a true desktop-class application.

UI & UX Refinements

The chat composer has been redesigned with a taller input area and pinned action buttons, giving it a sleek, modern feel similar to Gemini and DeepSeek. You’ll also notice smoother scroll animations when sending messages and extra breathing room for the action buttons below your chat history.

Desktop & Electron Upgrades

  • True Desktop App: New portable builds are available! Just download, unzip, and double-click to run. No setup headaches required. 🖥️
  • Window Persistence: The app now remembers your window size and maximized state between launches.
  • Web Mode: Added a `--no-electron` flag if you prefer using the web UI in your browser instead of the desktop window.
  • Bug Squashing: Fixed several Electron issues, including log coloring on Windows and broken speculative decoding caused by upstream `llama.cpp` changes.

Under the Hood & API

  • New Quant Support: The update includes `ik_llama.cpp`, a fork that introduces new quantization types for better efficiency. 🛠️
  • API Enhancements: Added support for list-format content within tool and assistant messages.
  • Dependency Updates: Both `llama.cpp` and `ik_llama.cpp` have been bumped to their latest versions.
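The list-format content mentioned above follows the OpenAI-style chat schema, where `content` can be a list of typed parts rather than a plain string. A minimal sketch of such a payload (the message text and tool output below are invented examples):

```python
import json

# Sketch of an OpenAI-style chat payload using list-format "content",
# which the v4.8 API now also accepts inside assistant and tool messages.
payload = {
    "messages": [
        {"role": "user", "content": "What's the weather in Paris?"},
        {
            "role": "assistant",
            # Content as a list of typed parts instead of a plain string:
            "content": [{"type": "text", "text": "Let me check that for you."}],
        },
        {
            "role": "tool",
            "content": [{"type": "text", "text": '{"temp_c": 21, "sky": "clear"}'}],
        },
    ]
}
body = json.dumps(payload)  # ready to POST to the chat completions endpoint
```

Previously, only user messages reliably accepted the list form; now all three roles shown above can carry it.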

Pro-Tip for Updates: 💡

Updating your portable install is now a breeze! Just extract the new version and move your `user_data` folder into it. Starting with version 4.0, you can even place `user_data` one level up (next to your install folder) so multiple versions of TextGen can share the same models and settings seamlessly.
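The folder shuffle described above can be sketched in Python, simulated inside a temporary directory so it is safe to run anywhere (the version-numbered folder names are hypothetical):

```python
import shutil
import tempfile
from pathlib import Path

# Simulate an old and a freshly extracted portable install.
root = Path(tempfile.mkdtemp())
old = root / "text-generation-webui-4.7"
new = root / "text-generation-webui-4.8"
(old / "user_data").mkdir(parents=True)  # your existing models and settings
new.mkdir()

# Step 1: move user_data into the newly extracted version.
shutil.move(str(old / "user_data"), str(new / "user_data"))

# Step 2 (optional, v4.0+): keep user_data one level up, next to the
# install folders, so multiple versions share the same models and settings.
shutil.move(str(new / "user_data"), str(root / "user_data"))
```

After step 2, both `text-generation-webui-4.7` and `text-generation-webui-4.8` would pick up the shared `user_data` sitting beside them.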

🔗 View Release