
Self-hosted AI platform with a ChatGPT-style interface for local and cloud LLMs
Open WebUI is an extensible, self-hosted AI platform that provides a polished ChatGPT-style interface for interacting with local and cloud-based large language models. It supports Ollama and any OpenAI-compatible API, and can operate entirely offline. With 45k+ GitHub stars, it offers multi-user support, conversation history, document uploads with RAG, image generation, web browsing within chats, voice and video calls, a model builder, native Python function calling, and a growing plugin ecosystem. Open WebUI is fully free and open-source, with optional enterprise licenses for compliance, white labeling, and dedicated support.
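As a rough sketch of deployment, one common way to run Open WebUI is via its official Docker image; the flags below follow the project's typical quick-start (adjust the port mapping and volume name for your setup):

```shell
# Pull and run the official Open WebUI image, serving the UI on localhost:3000.
# --add-host lets the container reach an Ollama instance running on the host.
docker run -d \
  -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data \
  --name open-webui \
  --restart always \
  ghcr.io/open-webui/open-webui:main
```

The `-v` volume preserves conversation history and settings across container restarts; for a remote Ollama server, the project documents an `OLLAMA_BASE_URL` environment variable as an alternative to the host-gateway mapping.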
Connect to Ollama, OpenAI, and any OpenAI-compatible API; run local models or cloud providers from one interface
Upload documents to create Knowledge Collections and augment LLM responses with custom data using Retrieval Augmented Generation
Browse the web directly within chats to pull in real-time information for AI responses
Hands-free voice and video calls with support for multiple Speech-to-Text and Text-to-Speech engines
Create custom Ollama models, characters, and agents directly through the web interface
Extensible architecture with native Python function calling, built-in code editor, and community plugins
Full authentication system with role-based access, conversation history, and per-user settings
Automatically renders HTML, CSS, and JS code snippets as interactive live previews within the chat
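The Knowledge Collection / RAG flow above can be sketched as a toy pipeline: document chunks are scored for relevance to the user's query, and the best matches are prepended to the prompt before it reaches the model. This is a minimal illustration only (real deployments use embedding models and a vector store; all function names and data here are hypothetical):

```python
def score(query: str, chunk: str) -> int:
    """Toy relevance score: count how many query words appear in the chunk."""
    query_words = set(query.lower().split())
    return sum(1 for word in chunk.lower().split() if word in query_words)

def retrieve(query: str, chunks: list[str], k: int = 1) -> list[str]:
    """Return the k chunks most relevant to the query."""
    return sorted(chunks, key=lambda c: score(query, c), reverse=True)[:k]

def augment_prompt(query: str, chunks: list[str]) -> str:
    """Build a RAG-style prompt: retrieved context followed by the question."""
    context = "\n".join(retrieve(query, chunks))
    return f"Use this context to answer:\n{context}\n\nQuestion: {query}"

# Example knowledge collection (two tiny "documents").
docs = [
    "The refund policy allows returns within 30 days of purchase.",
    "Shipping is free on orders over 50 dollars.",
]
prompt = augment_prompt("what is the refund policy", docs)
```

The augmented prompt grounds the model's answer in the uploaded material, which is the core idea behind the Knowledge Collections feature.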
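Native Python function calling generally means the model can invoke methods you define in a plugin module, with type hints and docstrings describing each function's signature to the LLM. The sketch below shows the general shape of such a tool module under that assumption (the `Tools` class convention and both method names are illustrative; consult the project's plugin documentation for the exact contract):

```python
import datetime

class Tools:
    """A minimal tool module: each public method is a function the model
    can call; type hints and docstrings tell the LLM how to call it."""

    def get_current_time(self) -> str:
        """Return the current UTC time as an ISO 8601 string."""
        return datetime.datetime.now(datetime.timezone.utc).isoformat()

    def word_count(self, text: str) -> int:
        """Count the whitespace-separated words in the given text."""
        return len(text.split())

tools = Tools()
```

Because tools are plain Python, they can wrap anything from local utilities to external APIs, which is what makes the plugin ecosystem extensible.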