Text Generation WebUI – v3.20
🎨 Image Generation is LIVE in Text-Generation-WebUI v3.20!
Generate images right inside your LLM UI with `diffusers`: Z-Image-Turbo support, 4-bit/8-bit quantization, `torch.compile` optimization, and PNGs that auto-stash your generation params. Gallery? Check. Live progress bar? Yep. OpenAI-compatible image API? Absolutely 🤖✨
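Want to script it? Here's a minimal sketch of hitting the image endpoint from Python, assuming it follows the OpenAI `/v1/images/generations` shape and that the API server is running on its default local port (adjust both to your setup):

```python
import base64
import requests

# Sketch of calling the image endpoint, assuming it mirrors OpenAI's
# /v1/images/generations format and the webui API is on its default
# local port (both are assumptions; adjust for your install).
API_URL = "http://127.0.0.1:5000/v1/images/generations"

payload = {
    "prompt": "a watercolor fox reading a book",  # your image prompt
    "n": 1,                                       # number of images
    "size": "1024x1024",                          # requested resolution
    "response_format": "b64_json",                # ask for base64 data
}

resp = requests.post(API_URL, json=payload, timeout=300)
resp.raise_for_status()

# Decode and save the first returned image.
image_b64 = resp.json()["data"][0]["b64_json"]
with open("output.png", "wb") as f:
    f.write(base64.b64decode(image_b64))
```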
⚡ Faster text gen too!
`flash_attention_2` is now ON by default for Transformers models — smoother, quicker responses.
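For context, this is roughly what that setting looks like if you load a model with plain Hugging Face Transformers yourself; it's a sketch of the standard library call, not the webui's loader code, and the model name is just a placeholder:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Sketch of enabling FlashAttention 2 with plain Hugging Face Transformers;
# the webui now does the equivalent by default. The model name is a
# placeholder, flash-attn must be installed, and the model needs to run
# in fp16/bf16 on a supported GPU.
model_name = "meta-llama/Llama-3.1-8B-Instruct"  # placeholder model

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    attn_implementation="flash_attention_2",  # the setting this release flips on
    device_map="auto",
)
```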
📦 Smaller Linux CUDA builds — download faster, run just as hard.
🔧 llama.cpp updated to the latest commit (0a540f9), plus ExLlamaV3 v0.0.17, for better inference stability and speed.
🖼️ Prompt magic upgrade!
`bos_token` and `eos_token` are now passed directly into Jinja2 chat templates, which is exactly what Seed-OSS-36B-Instruct and similar models need.
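In practice that means a chat template can reference those variables directly. Here's a simplified, illustrative sketch; the template text and token strings below are placeholders, not the real Seed-OSS template:

```python
from jinja2 import Template

# Illustrative chat template that references the special tokens.
# Template text and token strings are placeholders for demonstration only.
chat_template = Template(
    "{{ bos_token }}"
    "{% for m in messages %}"
    "<|{{ m['role'] }}|>{{ m['content'] }}{{ eos_token }}"
    "{% endfor %}"
)

prompt = chat_template.render(
    messages=[{"role": "user", "content": "Hello!"}],
    bos_token="<s>",    # placeholder special tokens
    eos_token="</s>",
)
print(prompt)
```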
🚀 Portable builds now include:
- NVIDIA: `cuda12.4`
- AMD/Intel: `vulkan`
- CPU only: `cpu`
- Mac (Apple Silicon): `macos-arm64`
💾 Updating? Just replace the app — keep your `user_data/` folder and all your models, LoRAs, and settings intact.
Go make art. Or let the AI do it for you. 😎🖼️
