Deploying a Mage AI App
Introduction
Mage AI is an open-source data pipeline tool for building, running, and monitoring ETL/ELT workflows. Deploying Mage AI with a Dockerfile on Klutch.sh provides reproducible builds, managed secrets, and persistent storage for projects and logs—all configured from klutch.sh/app. This guide covers installation, repository prep, a production-ready Dockerfile, deployment steps, Nixpacks overrides, sample usage, and production tips.
Prerequisites
- A Klutch.sh account (create one)
- A GitHub repository containing your Mage AI code/config (GitHub is the only supported git source)
- Docker familiarity and Python 3.10+ knowledge
- PostgreSQL credentials (if you use an external metadata DB) or plan to use the default SQLite
- Storage for project files, logs, and data
For onboarding, see the Quick Start.
Architecture and ports
- Mage AI serves HTTP; set the internal container port to `6789` (the Mage UI default).
- If you use PostgreSQL, deploy it as a separate Klutch.sh TCP app exposed on port `8000`, connecting internally on `5432` (see the sketch after this list).
- Persistent storage is recommended for the Mage project folder and logs.
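As a rough sketch, the connection string Mage AI would use for that internal connection looks like this; every value is a placeholder to be filled in from your own Postgres app:

```
MAGE_DATABASE_URL=postgres://<user>:<password>@<postgres-host>:5432/<db>
```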
Repository layout
```
mage-ai/
├── Dockerfile          # Must be at repo root for auto-detection
├── requirements.txt
├── start.sh            # Optional helper
├── mage_data/          # Project data/logs (mount as volume)
└── .env.example        # Template only; no secrets
```

Keep secrets out of Git; store them in Klutch.sh environment variables.
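If you are starting from scratch, a minimal `requirements.txt` can contain little more than the Mage package itself. The entries below are illustrative; pin versions you have tested and add whatever your pipelines need:

```
mage-ai            # core package; pin to a tested version
psycopg2-binary    # only if you use the PostgreSQL metadata DB
```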
Installation (local) and starter commands
Install dependencies and run locally before pushing to GitHub:
```bash
python -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
mage start mage_ai
```

Optional `start.sh` for portability and Nixpacks fallback:
```bash
#!/usr/bin/env bash
set -euo pipefail
exec mage start mage_ai --port 6789 --host 0.0.0.0
```

Make it executable with `chmod +x start.sh`.
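To confirm the local server responds before pushing to GitHub, a quick check against the default port (this assumes nothing else is bound to 6789):

```bash
curl -I http://localhost:6789
```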
Dockerfile for Mage AI (production-ready)
Place this Dockerfile at the repo root; Klutch.sh auto-detects it (no Docker selection in the UI):
```dockerfile
FROM python:3.11-slim

WORKDIR /app

RUN apt-get update && apt-get install -y build-essential git && rm -rf /var/lib/apt/lists/*

COPY requirements.txt /app/requirements.txt
RUN pip install --no-cache-dir -r requirements.txt

COPY . /app

ENV PORT=6789
EXPOSE 6789

CMD ["mage", "start", "mage_ai", "--port", "6789", "--host", "0.0.0.0"]
```

Notes:
- Add any system dependencies your pipelines require (e.g., `libpq-dev` for Postgres, `ffmpeg` for media processing), as shown in the sketch after this list.
- Keep `mage_data/` writable and mount it as a volume for pipeline state and logs.
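For instance, a pipeline that reads from Postgres and processes media might extend the `apt-get` line like this (illustrative; trim the package list to what you actually use):

```dockerfile
RUN apt-get update && apt-get install -y build-essential git libpq-dev ffmpeg \
    && rm -rf /var/lib/apt/lists/*
```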
Environment variables (Klutch.sh)
Set these in the Klutch.sh app settings (Secrets tab) before deploying:
- `PORT=6789`
- `MAGE_DATABASE_URL=postgres://<user>:<password>@<host>:<port>/<db>` (if using Postgres)
- `MAGE_SECRET_KEY=<secure-key>`
- Any pipeline-specific credentials (AWS/GCP keys, API tokens)
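The `.env.example` in the repository can mirror these keys as a template, with real values kept only in Klutch.sh secrets. The sketch below is illustrative; the AWS variables stand in for whatever pipeline credentials you actually use:

```
# .env.example: template only; never commit real values
PORT=6789
MAGE_DATABASE_URL=postgres://<user>:<password>@<host>:<port>/<db>
MAGE_SECRET_KEY=<secure-key>
AWS_ACCESS_KEY_ID=<key-id>
AWS_SECRET_ACCESS_KEY=<secret-key>
```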
If you deploy without the Dockerfile and need Nixpacks overrides:
```
NIXPACKS_BUILD_CMD=pip install -r requirements.txt
NIXPACKS_START_CMD=mage start mage_ai --port 6789 --host 0.0.0.0
NIXPACKS_PYTHON_VERSION=3.11
```
These keep Mage AI compatible with Nixpacks defaults when a Dockerfile is absent.
Attach persistent volumes
In Klutch.sh storage settings, add mount paths and sizes (no names required):
- `/app/mage_data`: projects, logs, and pipeline state.
- `/app/.cache`: optional cache if your pipelines write there.
Ensure these paths are writable inside the container.
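One way to do that, assuming the default root user of the `python:3.11-slim` base image, is to create the mount points in the Dockerfile (a sketch; adjust ownership if you switch to a non-root user):

```dockerfile
# Create the mount points so the paths exist and are writable before a volume is attached
RUN mkdir -p /app/mage_data /app/.cache
```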
Deploy Mage AI on Klutch.sh (Dockerfile workflow)
- Push your repository (with the Dockerfile at the root) to GitHub.
- Open klutch.sh/app, create a project, and add an app.
- Connect the GitHub repository; Klutch.sh automatically detects the Dockerfile.
- Choose HTTP traffic for Mage AI.
- Set the internal port to `6789`.
- Add the environment variables above (database URL if used, secret key, pipeline creds, and any `NIXPACKS_*` overrides if you temporarily deploy without the Dockerfile).
- Attach persistent volumes for `/app/mage_data` (and `/app/.cache` if used), selecting sizes that match your data and logging needs.
- Deploy. Your Mage AI UI will be reachable at `https://example-app.klutch.sh`; attach a custom domain if desired (a quick reachability check follows below).
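Once the deploy finishes, a simple way to confirm the UI is reachable (replace `example-app` with your app's subdomain):

```bash
curl -I https://example-app.klutch.sh
```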
Sample API usage
Trigger a pipeline run via HTTP (replace placeholders):
```bash
curl -X POST "https://example-app.klutch.sh/api/pipelines/<pipeline_uuid>/trigger" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer <api-token>" \
  -d '{"pipeline_run": {"variables": {"example_var": "hello"}}}'
```

Health checks and production tips
- Add a reverse proxy probe to `/` or a lightweight health route if available.
- Enforce HTTPS at the edge; forward HTTP to port 6789 internally.
- Keep dependencies pinned in `requirements.txt` and test upgrades.
- Monitor disk usage on `/app/mage_data` and resize volumes before they fill.
- Back up PostgreSQL (if used) and project data regularly; do not rely on container storage alone (see the sketch after this list).
- Rotate secrets and API tokens regularly; store them only in Klutch.sh secrets.
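A minimal backup sketch, assuming `MAGE_DATABASE_URL` is set and the volume is mounted at `/app/mage_data`; run it from a scheduled job and adjust paths and retention to your setup:

```bash
# Dump the Postgres metadata DB (skip this line if you use the default SQLite)
pg_dump "$MAGE_DATABASE_URL" > "mage_metadata_$(date +%F).sql"
# Archive project files, logs, and pipeline state
tar -czf "mage_data_$(date +%F).tar.gz" /app/mage_data
```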
Mage AI on Klutch.sh combines reproducible Docker builds with managed secrets, persistent storage for pipeline state, and flexible HTTP/TCP routing. With the Dockerfile at the repo root and port 6789 configured, you can run reliable data pipelines without extra YAML or workflow overhead.