Deploying an OpenWebUI App
Introduction
OpenWebUI is an open-source web interface for local and remote LLM providers (including Ollama). Deploying OpenWebUI with a Dockerfile on Klutch.sh provides reproducible builds, managed secrets, and persistent storage for user data and logs—all configured from klutch.sh/app. This guide covers installation, repository prep, a production-ready Dockerfile, deployment steps, Nixpacks overrides, sample API usage, and production tips.
Prerequisites
- A Klutch.sh account (sign up)
- A GitHub repository containing your Dockerfile (GitHub is the only supported git source)
- Access to your chosen LLM backend (e.g., Ollama API URL, OpenAI-compatible endpoint)
- Storage sizing for user data and logs
For onboarding, see the Quick Start.
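Before you start, it's worth confirming the LLM backend is reachable from your machine. For an Ollama backend, a quick check against its tags endpoint (the host below is a placeholder for your own) lists the models it serves:

```bash
# List the models the Ollama backend serves (placeholder host).
curl http://<ollama-host>:11434/api/tags
```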
Architecture and ports
- OpenWebUI serves HTTP on internal port 8080; choose HTTP traffic.
- Persistent storage is required for user data and configuration.
Repository layout
```
openwebui/
├── Dockerfile   # Must be at repo root for auto-detection
└── README.md
```
Keep secrets out of Git; store them in Klutch.sh environment variables.
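One way to reduce the risk of committing secrets is to ignore local env files from the start. A minimal .gitignore sketch; the filenames are illustrative, not required by OpenWebUI:

```
# Local secrets and env files; never commit these
.env
.env.*
*.key
```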
Installation (local) and starter commands
Validate locally before pushing to GitHub:
```bash
docker build -t openwebui-local .
docker run -p 8080:8080 \
  -e OLLAMA_BASE_URL=http://host.docker.internal:11434 \
  openwebui-local
```
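With the container running, a quick smoke test from another terminal confirms the UI is serving on port 8080 (expect a 200 or a redirect to the login page):

```bash
# Fetch only the response headers from the local instance
curl -I http://localhost:8080
```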
Dockerfile for OpenWebUI (production-ready)
Place this Dockerfile at the repo root; Klutch.sh auto-detects it (no Docker selection in the UI):
```dockerfile
FROM ghcr.io/open-webui/open-webui:main

ENV PORT=8080 \
    OLLAMA_BASE_URL=http://localhost:11434

EXPOSE 8080

CMD ["bash", "-lc", "python -m open_webui.main --host 0.0.0.0 --port ${PORT}"]
```
Notes:
- Pin the image tag (e.g., ghcr.io/open-webui/open-webui:v0.2.x) for stability, and upgrade intentionally (see the digest-pinning sketch after these notes).
- Set OLLAMA_BASE_URL (or another provider URL) to point to your backend.
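If you want pinning stronger than a version tag, you can resolve the tag to an immutable digest and reference that digest in FROM. This is a general Docker technique, not something OpenWebUI requires:

```bash
# Resolve the digest currently behind the tag, then pin FROM to it
docker pull ghcr.io/open-webui/open-webui:main
docker inspect --format '{{index .RepoDigests 0}}' ghcr.io/open-webui/open-webui:main
# Prints something like ghcr.io/open-webui/open-webui@sha256:<digest>
```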
Environment variables (Klutch.sh)
Set these in Klutch.sh before deploying:
- PORT=8080
- OLLAMA_BASE_URL=http://<ollama-host>:11434 (or your provider endpoint)
- Optional auth and security: WEBUI_SECRET_KEY=<secure-random>, WEBUI_AUTH_ENABLED=true (see the generation snippet after this list)
- Optional model/provider keys as required by your setup
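For WEBUI_SECRET_KEY, any sufficiently long random value works; one common way to generate one locally:

```bash
# Generate a 64-character hex string to use as WEBUI_SECRET_KEY
openssl rand -hex 32
```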
If you deploy without the Dockerfile and need Nixpacks overrides:
```
NIXPACKS_START_CMD=python -m open_webui.main --host 0.0.0.0 --port 8080
```
Attach persistent volumes
In Klutch.sh storage settings, add mount paths and sizes (no names required):
- /app/backend/data: user data, settings, and cache.
- /var/log/openwebui: optional logs if stored on disk.
Ensure these paths are writable inside the container.
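You can verify writability locally before relying on it in production. A sketch using the openwebui-local image built earlier and a throwaway named volume (owui-data is a hypothetical name):

```bash
# Mount a scratch volume at the data path and attempt a write
docker run --rm -v owui-data:/app/backend/data openwebui-local \
  bash -c 'touch /app/backend/data/.write-test && echo writable'
```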
Deploy OpenWebUI on Klutch.sh (Dockerfile workflow)
- Push your repository, with the Dockerfile at the root, to GitHub.
- Open klutch.sh/app, create a project, and add an app.
- Select HTTP traffic and set the internal port to 8080.
- Add the environment variables above, including your provider endpoint and any auth keys.
- Attach persistent volumes for /app/backend/data (and /var/log/openwebui if used), sizing them for user data and logs.
- Deploy. Your OpenWebUI instance will be reachable at https://example-app.klutch.sh; attach a custom domain if desired.
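Once the deploy completes, a quick status check against the app URL (example-app.klutch.sh is the placeholder hostname used throughout this guide) confirms routing and TLS:

```bash
# Print just the HTTP status code; expect 200 once the app is up
curl -s -o /dev/null -w '%{http_code}\n' https://example-app.klutch.sh
```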
Sample API usage
Send a chat completion request (Ollama-compatible example):
```bash
curl -X POST "https://example-app.klutch.sh/api/chat" \
  -H "Content-Type: application/json" \
  -d '{"model":"llama3","messages":[{"role":"user","content":"Hello from OpenWebUI on Klutch.sh"}]}'
```
Health checks and production tips
- Add an HTTP probe to / or /api/health for readiness (a manual polling sketch follows this list).
- Enforce HTTPS at the edge; forward internally to port 8080.
- Keep provider keys and secret keys in Klutch.sh secrets; rotate them regularly.
- Monitor storage usage on /app/backend/data; resize before it fills.
- Pin image versions and test upgrades in staging; back up data before updates.
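As a manual stand-in for the readiness probe above, you can poll the health endpoint until it responds; /api/health is the path suggested in the tips, so verify it matches your OpenWebUI version:

```bash
# Poll the health endpoint every 5 seconds until it returns success
until curl -fsS https://example-app.klutch.sh/api/health; do
  echo "not ready yet..."
  sleep 5
done
```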
OpenWebUI on Klutch.sh combines reproducible Docker builds with managed secrets, persistent storage, and flexible HTTP/TCP routing. With the Dockerfile at the repo root, port 8080 configured, and your provider URL set, you can deliver a self-hosted LLM UI without extra YAML or workflow overhead.