Deploying a Nebula App
Introduction
Nebula Graph is an open-source, distributed graph database optimized for low-latency graph queries at scale. Deploying Nebula with a Dockerfile on Klutch.sh provides reproducible builds, managed secrets, and persistent storage for your graph data—all configured from klutch.sh/app. This guide covers installation, repository prep, a production-ready Dockerfile, deployment steps, Nixpacks overrides, sample client usage, and production tips.
Prerequisites
- A Klutch.sh account (sign up)
- A GitHub repository containing your Nebula Dockerfile (GitHub is the only supported git source)
- Understanding that Nebula services (graphd, storaged, metad) run together in the standalone image
- TLS certificates if you plan to secure the endpoints
For onboarding, see the Quick Start.
Architecture and ports
- Nebula standalone exposes the graph service on internal port 9669 (TCP) and an HTTP status port on 19669.
- Choose TCP traffic and set the internal port to 9669; clients connect via `example-app.klutch.sh:8000` (external TCP), which is mapped to internal 9669.
- Persistent storage is required for metadata and graph data.
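The port mapping above can be sanity-checked from any machine. A minimal sketch, assuming `example-app.klutch.sh` is the placeholder hostname used throughout this guide (substitute your app's real endpoint):

```shell
# Placeholder endpoint from this guide; substitute your real app hostname.
HOST=example-app.klutch.sh
EXTERNAL_PORT=8000   # what clients dial
INTERNAL_PORT=9669   # what graphd listens on inside the container

echo "clients -> ${HOST}:${EXTERNAL_PORT} -> container :${INTERNAL_PORT}"

# Uncomment to probe the live endpoint once deployed (requires netcat):
# nc -z -w 5 "$HOST" "$EXTERNAL_PORT" && echo "graphd reachable"
```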
Repository layout
```
nebula/
├── Dockerfile   # Must be at repo root for auto-detection
└── README.md
```

Keep secrets out of Git; store them in Klutch.sh environment variables.
Installation (local) and starter commands
Validate locally before pushing to GitHub:
```shell
docker build -t nebula-local .
docker run -p 9669:9669 nebula-local
```

Dockerfile for Nebula (production-ready)
Place this Dockerfile at the repo root; Klutch.sh auto-detects it (no Docker selection in the UI):
```dockerfile
FROM vesoft/nebula-graphd:v3.6.0

# Standalone entrypoint handles meta/graph/storage inside the container
EXPOSE 9669 19669

# Use shell form so the && chain is interpreted; exec-form JSON arrays do not invoke a shell
CMD /usr/local/nebula/scripts/nebula.service start all && tail -f /usr/local/nebula/logs/nebula-graphd.ERROR
```

Notes:
- Pin the version (e.g., `v3.6.x`) for stability; update intentionally.
- If you prefer the official `vesoft/nebula-graph-studio` bundle or separate services, adjust the Dockerfile and ports accordingly.
Environment variables (Klutch.sh)
Set these in Klutch.sh before deploying:
- Optional tuning: set `NEBULA_USER` and `NEBULA_PASSWORD` if you enable auth, and pass graphd flags such as `--max_allowed_connections` to cap concurrent client connections.
- Configure TLS via mounted certs and extra flags in the start command if required.
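If you enable authentication, the relevant graphd option is `--enable_authorize`. A minimal sketch; the config path in the usage comment is an assumption based on the standard layout of the official Nebula images:

```shell
# Sketch: append an auth flag to a graphd config file before starting the service.
enable_auth() {
  conf=$1
  echo '--enable_authorize=true' >> "$conf"
}

# Typical usage inside the container (path assumes the standard image layout):
# enable_auth /usr/local/nebula/etc/nebula-graphd.conf
```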
If you deploy without the Dockerfile and need Nixpacks overrides (not typical for Nebula):
```shell
NIXPACKS_START_CMD="/usr/local/nebula/scripts/nebula.service start all && tail -f /usr/local/nebula/logs/nebula-graphd.ERROR"
```
Attach persistent volumes
In Klutch.sh storage settings, add mount paths and sizes (no names required):
- `/usr/local/nebula/data`: graph and meta data.
- `/usr/local/nebula/logs`: logs for troubleshooting.
Ensure these directories are writable.
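A quick way to verify this from inside the container is a small writability check against the mount paths above:

```shell
# Report whether each expected mount point exists and is writable.
check_writable() {
  if [ -d "$1" ] && [ -w "$1" ]; then
    echo "$1: writable"
  else
    echo "$1: NOT writable"
  fi
}

for d in /usr/local/nebula/data /usr/local/nebula/logs; do
  check_writable "$d"
done
```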
Deploy Nebula on Klutch.sh (Dockerfile workflow)
- Push your repository—with the Dockerfile at the root—to GitHub.
- Open klutch.sh/app, create a project, and add an app.
- Select TCP traffic and set the internal port to 9669.
- Add any environment variables or flags you use for auth or tuning.
- Attach persistent volumes for `/usr/local/nebula/data` (and `/usr/local/nebula/logs` if desired), sized to fit your dataset and retention policy.
- Deploy, then connect clients to `example-app.klutch.sh:8000` (mapped to internal 9669) using a Nebula client library or CLI.
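Once deployed, you can exercise the endpoint from a terminal with the `nebula-console` CLI (installed separately). A sketch, assuming Nebula's default `root`/`nebula` credentials and this guide's placeholder hostname:

```shell
# Host and port match the Klutch.sh TCP mapping described above.
HOST=example-app.klutch.sh
PORT=8000

# Assemble the console invocation (shown as a string for illustration).
CMD="nebula-console -addr $HOST -port $PORT -u root -p nebula -e 'SHOW HOSTS;'"
echo "$CMD"

# Run it directly once the app is live:
# nebula-console -addr "$HOST" -port "$PORT" -u root -p nebula -e 'SHOW HOSTS;'
```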
Sample client usage (JavaScript)
```javascript
import { createClient } from "@nebula-contrib/nebula-nodejs";

const client = createClient({
  address: "example-app.klutch.sh:8000",
  user: "root",
  password: "nebula",
});

await client.open();
const res = await client.execute("SHOW SPACES");
console.log(res);
await client.close();
```

Health checks and production tips
- Add a TCP probe on 9669 or an HTTP probe to `http://localhost:19669/status` if exposed.
- Enforce TLS for sensitive deployments; store certs in volumes and pass the flags at start.
- Monitor disk usage on `/usr/local/nebula/data` and resize before it fills.
- Pin image versions; back up data before upgrades and test in staging.
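The HTTP probe above can be scripted as a simple poll loop. A sketch, assuming `curl` is available wherever the probe runs:

```shell
# Poll an HTTP endpoint until it responds, or give up after N attempts.
wait_for_http() {
  url=$1
  tries=${2:-30}
  i=0
  until curl -fsS "$url" >/dev/null 2>&1; do
    i=$((i + 1))
    if [ "$i" -ge "$tries" ]; then
      return 1
    fi
    sleep 1
  done
  return 0
}

# Example probe against the standalone status port:
# wait_for_http http://localhost:19669/status 30 && echo "nebula healthy"
```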
Nebula on Klutch.sh combines reproducible Docker builds with managed secrets, persistent storage, and flexible HTTP/TCP routing. With the Dockerfile at the repo root, TCP port 9669 configured, and data directories persisted, you can run high-performance graph workloads without extra YAML or workflow overhead.