Deploying a Parseable App
Introduction
Parseable is an open-source log aggregation server with real-time search and dashboards. Deploying Parseable with a Dockerfile on Klutch.sh provides reproducible builds, managed secrets, and persistent storage for logs—all configured from klutch.sh/app. This guide covers installation, repository prep, a production-ready Dockerfile, deployment steps, Nixpacks overrides, sample ingestion calls, and production tips.
Prerequisites
- A Klutch.sh account (sign up)
- A GitHub repository containing your Parseable Dockerfile (GitHub is the only supported git source)
- Storage sizing for log data and retention
- Optional object storage credentials if you offload logs
For onboarding, see the Quick Start.
Architecture and ports
- Parseable serves HTTP on internal port 8000; choose HTTP traffic.
- Persistent storage is required for log data and configs.
Repository layout
```
parseable/
├── Dockerfile   # Must be at repo root for auto-detection
└── README.md
```
Keep secrets out of Git; store them in Klutch.sh environment variables.
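If the project is not on GitHub yet, a typical push sequence looks like the following sketch; the remote URL is a placeholder for your own repository:

```bash
git init
git add Dockerfile README.md
git commit -m "Add Parseable Dockerfile"
git branch -M main
# Placeholder remote; substitute your GitHub org/repo.
git remote add origin git@github.com:<your-org>/parseable.git
git push -u origin main
```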
Installation (local) and starter commands
Validate locally before pushing to GitHub:
```bash
docker build -t parseable-local .
docker run -p 8000:8000 parseable-local
```
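With the container running, a quick smoke test confirms something is listening on port 8000; the exact response body varies by Parseable version, so this only checks reachability:

```bash
# Expect an HTTP response (typically the Parseable console or a redirect).
curl -i http://localhost:8000/
```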
Dockerfile for Parseable (production-ready)
Place this Dockerfile at the repo root; Klutch.sh auto-detects it (no Docker selection in the UI):
```dockerfile
FROM parseable/parseable:latest
ENV PORT=8000
EXPOSE 8000
CMD ["parseable", "server", "--http-port", "8000"]
```
Notes:
- Pin the image tag (e.g., parseable/parseable:v1.x) for stability; update intentionally.
- Configure TLS/ingestion keys via environment variables (see below).
Environment variables (Klutch.sh)
Set these in Klutch.sh before deploying:
```bash
PORT=8000
P_ADMIN_USER=<admin-user>
P_ADMIN_PASSWORD=<strong-password>
```
- Optional storage: P_STORAGE_TYPE=local (default) or S3 settings (P_STORAGE_TYPE=s3, P_S3_ENDPOINT, P_S3_REGION, P_S3_BUCKET, P_S3_ACCESS_KEY, P_S3_SECRET_KEY); see the combined example below.
- Optional TLS: P_ENABLE_TLS=true, P_TLS_CERT_FILE, P_TLS_KEY_FILE (ensure certs are mounted).
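For instance, an S3-backed setup entered in the Klutch.sh environment panel might look like the following; the endpoint, region, bucket, and key values are placeholders to replace with your own:

```bash
PORT=8000
P_ADMIN_USER=admin
P_ADMIN_PASSWORD=<strong-password>
P_STORAGE_TYPE=s3
# Placeholder S3 details; point these at your actual bucket.
P_S3_ENDPOINT=https://s3.us-east-1.amazonaws.com
P_S3_REGION=us-east-1
P_S3_BUCKET=parseable-logs
P_S3_ACCESS_KEY=<access-key>
P_S3_SECRET_KEY=<secret-key>
```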
If you deploy without the Dockerfile and need Nixpacks overrides:
```bash
NIXPACKS_START_CMD=parseable server --http-port 8000
```
Attach persistent volumes
In Klutch.sh storage settings, add mount paths and sizes (no names required):
- /var/lib/parseable: log data and state.
- /var/log/parseable: optional logs if stored on disk.
- /etc/parseable/certs: TLS certs if you enable TLS.
Ensure these paths are writable (data/logs) and readable (certs) inside the container.
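Before deploying, you can rehearse this volume layout locally by mounting host directories at the same container paths; the host directories here (./data, ./certs) are arbitrary examples:

```bash
# Mount local directories where Klutch.sh volumes will later be attached.
docker run -p 8000:8000 \
  -v "$PWD/data:/var/lib/parseable" \
  -v "$PWD/certs:/etc/parseable/certs:ro" \
  parseable-local
```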
Deploy Parseable on Klutch.sh (Dockerfile workflow)
- Push your repository—with the Dockerfile at the root—to GitHub.
- Open klutch.sh/app, create a project, and add an app.
- Select HTTP traffic and set the internal port to 8000.
- Add the environment variables above, including admin credentials and storage/TLS settings.
- Attach persistent volumes for /var/lib/parseable (and optional /var/log/parseable and /etc/parseable/certs) sized for your log retention.
- Deploy. Your Parseable UI and ingestion endpoint will be reachable at https://example-app.klutch.sh (a quick check follows below).
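Once the deploy finishes, a minimal reachability check against the example URL above (substitute your actual app URL):

```bash
# A 200 or 3xx status indicates the server is reachable through Klutch.sh routing.
curl -I https://example-app.klutch.sh/
```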
Sample ingestion and query
Ingest logs:
```bash
curl -X POST "https://example-app.klutch.sh/api/v1/logs" \
  -u "<admin-user>:<password>" \
  -H "Content-Type: application/json" \
  -d '{"timestamp":"2024-01-01T00:00:00Z","level":"info","message":"Hello from Parseable on Klutch.sh"}'
```
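To send several events in one go, for example as a quick sanity check of ingestion, a small shell loop reusing the same endpoint and credentials works; the event shape mirrors the single-event example above:

```bash
# Send three timestamped test events to the ingestion endpoint.
for i in 1 2 3; do
  curl -s -X POST "https://example-app.klutch.sh/api/v1/logs" \
    -u "<admin-user>:<password>" \
    -H "Content-Type: application/json" \
    -d "{\"timestamp\":\"$(date -u +%Y-%m-%dT%H:%M:%SZ)\",\"level\":\"info\",\"message\":\"test event $i\"}"
  echo
done
```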
Query logs (example):
```bash
curl -X GET "https://example-app.klutch.sh/api/v1/query?level=info" \
  -u "<admin-user>:<password>"
```
Health checks and production tips
- Add an HTTP probe to / or /healthz (if enabled) for readiness.
- Enforce HTTPS at the edge; forward internally to port 8000.
- Keep admin credentials and storage keys in Klutch.sh secrets; rotate regularly.
- Monitor disk usage on /var/lib/parseable; resize before it fills (a monitoring sketch follows this list).
- Pin image versions and test upgrades in staging; back up data if using local storage.
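To make the disk-usage tip actionable, here is a small sketch you could run on a schedule inside the container; the 80% threshold is an assumption to tune for your retention policy:

```bash
#!/usr/bin/env sh
# Warn when the Parseable data volume exceeds the assumed 80% threshold.
THRESHOLD=80
USAGE=$(df -P /var/lib/parseable | awk 'NR==2 {gsub("%",""); print $5}')
if [ "$USAGE" -ge "$THRESHOLD" ]; then
  echo "WARNING: /var/lib/parseable at ${USAGE}% - resize the volume soon"
fi
```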
Parseable on Klutch.sh combines reproducible Docker builds with managed secrets, persistent storage, and flexible HTTP/TCP routing. With the Dockerfile at the repo root, port 8000 configured, and storage mounted, you can deliver real-time log aggregation without extra YAML or workflow overhead.