Deploying a Langflow App
Introduction
Langflow is an open-source UI for building and deploying LLM-driven workflows. Deploying Langflow with a Dockerfile on Klutch.sh provides reproducible builds, managed secrets, and persistent storage for projects and vector data—all accessible from klutch.sh/app. This guide covers installation, repository prep, a production-ready Dockerfile, deployment steps, Nixpacks overrides, sample usage, and production tips.
Prerequisites
- A Klutch.sh account (create one)
- A GitHub repository containing your Langflow code/config (GitHub is the only supported git source)
- Docker familiarity and Python 3.10+ knowledge
- API keys for LLM providers (OpenAI, etc.) as required by your flows
- Optional: Database or vector store credentials if you persist embeddings externally
For onboarding, see the Quick Start.
Architecture and ports
- Langflow serves HTTP; set the internal container port to `7860`.
- If you connect to databases or vector stores, deploy them as separate Klutch.sh TCP apps exposed on port `8000` and connect on their native ports (a configuration sketch follows this list).
- Persistent storage is recommended for `data` (projects, uploads) and optional for logs/cache.
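If you do persist Langflow's database or embeddings externally, the connection is configured entirely through environment variables. A minimal sketch, assuming a Postgres instance deployed as a separate TCP app and Langflow's `LANGFLOW_DATABASE_URL` setting; the hostname and credentials below are placeholders:

```bash
# Placeholder values; set these as Klutch.sh environment variables, not in Git.
export LANGFLOW_DATABASE_URL="postgresql://langflow:change-me@example-db.klutch.sh:5432/langflow"
export LANGFLOW_DATA_DIR="/app/data"   # keep projects/uploads on the mounted volume
```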
Repository layout
```
langflow/
├── Dockerfile          # Must be at repo root for auto-detection
├── requirements.txt
├── start.sh            # Optional helper
├── data/               # Projects/uploads (mount as volume)
└── .env.example        # Template only; no secrets
```
Keep secrets out of Git; store them in Klutch.sh environment variables.
Installation (local) and starter commands
Install dependencies and run locally before pushing to GitHub:
```bash
python -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
langflow run --host 0.0.0.0 --port 7860
```
Optional `start.sh` for portability and Nixpacks fallback:
```bash
#!/usr/bin/env bash
set -euo pipefail
exec langflow run --host 0.0.0.0 --port 7860
```
Make it executable with `chmod +x start.sh`.
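Once the local server is up, a quick sanity check (assuming the default host and port used above):

```bash
# Expect the Langflow UI to respond once startup has finished
curl -sSf http://localhost:7860/ > /dev/null && echo "Langflow is up"
```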
Dockerfile for Langflow (production-ready)
Place this Dockerfile at the repo root; Klutch.sh auto-detects it (no Docker selection in the UI):
```dockerfile
FROM python:3.11-slim

WORKDIR /app

RUN apt-get update && apt-get install -y build-essential git && rm -rf /var/lib/apt/lists/*

COPY requirements.txt /app/requirements.txt
RUN pip install --no-cache-dir -r requirements.txt

COPY . /app

ENV PORT=7860
EXPOSE 7860

CMD ["langflow", "run", "--host", "0.0.0.0", "--port", "7860"]
```
Notes:
- Add extra system packages if your flows need them (e.g., `ffmpeg`, `poppler`).
- Pin Python package versions in `requirements.txt` for reproducibility (one approach is sketched below).
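One simple way to pin versions is to freeze the environment you tested locally; treat this as a sketch rather than a required workflow:

```bash
# Capture the exact versions from the working virtualenv into requirements.txt
pip freeze > requirements.txt
```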
Environment variables (Klutch.sh)
Set these in the Klutch.sh app settings (Secrets tab) before deploying:
- `PORT=7860`
- `LANGFLOW_HOST=https://example-app.klutch.sh`
- `OPENAI_API_KEY` (or other LLM provider keys)
- `LANGFLOW_DATA_DIR=/app/data`
- Any DB/vector store connection strings your flows require
If you deploy without the Dockerfile and need Nixpacks overrides:
```
NIXPACKS_BUILD_CMD=pip install -r requirements.txt
NIXPACKS_START_CMD=langflow run --host 0.0.0.0 --port 7860
NIXPACKS_PYTHON_VERSION=3.11
```
These overrides keep the Nixpacks build and start commands aligned with Langflow's requirements when a Dockerfile is absent.
Attach persistent volumes
In Klutch.sh storage settings, add mount paths and sizes (no names required):
- `/app/data` — required for projects, flows, and uploads.
- `/app/logs` — optional if you store logs on disk.
Ensure these paths are writable inside the container.
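To confirm this, you can run a quick check from a shell inside the running container (a minimal sketch; adjust the paths if you mount different locations):

```bash
# Fails with a permission error if the volume is not writable
touch /app/data/.write-test && rm /app/data/.write-test && echo "/app/data is writable"
```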
Deploy Langflow on Klutch.sh (Dockerfile workflow)
- Push your repository (with the Dockerfile at the root) to GitHub.
- Open klutch.sh/app, create a project, and add an app.
- Connect the GitHub repository; Klutch.sh automatically detects the Dockerfile.
- Choose HTTP traffic for Langflow.
- Set the internal port to
7860. - Add the environment variables above (LLM keys, data dir, and any
NIXPACKS_*overrides if you temporarily deploy without the Dockerfile). - Attach persistent volumes for
/app/data(and/app/logsif used), selecting sizes that fit your projects and logs. - Deploy. Your Langflow UI will be reachable at
https://example-app.klutch.sh; attach a custom domain if desired.
Sample usage
Trigger a flow via the API (replace placeholders):
```bash
curl -X POST "https://example-app.klutch.sh/api/v1/flows/<flow-id>/run" \
  -H "Content-Type: application/json" \
  -d '{"inputs": {"question": "What is Langflow?"}}'
```
Health checks and production tips
- Add a reverse proxy probe to `/` or a lightweight status route (a probe sketch follows this list).
- Enforce HTTPS at the edge; forward HTTP to port 7860 internally.
- Keep secrets in Klutch.sh variables; rotate API keys regularly.
- Monitor disk usage on `/app/data`; resize volumes before they fill.
- Pin dependency versions and test upgrades to avoid runtime surprises.
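A probe can be as simple as a curl call that exits non-zero when the app does not respond; this sketch assumes the example domain used above and probes the root path (swap in a dedicated status route if you add one):

```bash
#!/usr/bin/env bash
# Minimal liveness probe: fail if the Langflow UI does not answer within 10 seconds.
curl -fsS --max-time 10 "https://example-app.klutch.sh/" > /dev/null
```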
Langflow on Klutch.sh combines reproducible Docker builds with managed secrets, persistent storage, and flexible HTTP/TCP routing. With the Dockerfile at the repo root and port 7860 configured, you can build and serve LLM workflows without extra YAML or workflow overhead.