How to Install Text-Generation-WebUI on Windows, Linux, and Mac (Updated Tutorial)
AI Summary
The video is a comprehensive 2025 guide by Fahad Miraza on how to install and run a local Large Language Model (LLM) using Text Generation Web UI (also known as Oobabooga) on Linux, with instructions applicable to Windows and Mac as well. Key steps covered include setting up the Python, Git, and CUDA environment, creating a virtual environment, installing prerequisites (especially a PyTorch build matching the installed CUDA version), cloning the UI repository, and downloading models from Hugging Face, including gated ones that require token authentication. Fahad highlights his preferred backends, such as vLLM, Transformers, and llama.cpp, and emphasizes fully offline usage with zero telemetry to protect privacy.

The video also demonstrates serving the model locally via the server.py script and interacting with it through the localhost UI, as well as loading different models without restarting. Additional tips cover multi-model chats, prompt templating, and file attachments. The tutorial is practical, with advice on renting GPUs and links to resources and sponsors. Fahad's approach aims to simplify the installation in light of recent changes in the software stack, encouraging viewers to choose whichever method suits them and expressing confidence in the privacy and flexibility of this setup.
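One step the summary calls out, matching the PyTorch build to the installed CUDA version, can be sketched as a small helper. The function names below are hypothetical conveniences, but the wheel-index URL pattern (`https://download.pytorch.org/whl/cuXYZ`) is the one PyTorch's official install selector emits:

```python
def torch_index_url(cuda_version: str) -> str:
    """Map a CUDA version string like '12.1' to PyTorch's matching
    wheel index URL (e.g. .../whl/cu121)."""
    tag = "cu" + cuda_version.replace(".", "")
    return f"https://download.pytorch.org/whl/{tag}"


def torch_install_command(cuda_version: str) -> str:
    """Build the pip command that installs a CUDA-matched PyTorch wheel.
    Hypothetical helper; the --index-url flag is standard pip."""
    return f"pip install torch --index-url {torch_index_url(cuda_version)}"


# Example: print the install command for CUDA 12.1.
print(torch_install_command("12.1"))
```

You can find the CUDA version to pass in by running `nvcc --version` or checking the header of `nvidia-smi` output.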