Deploying a Memos App
Introduction
Memos is an open-source, self-hosted note-taking and knowledge-sharing app built in Go. Deploying Memos with a Dockerfile on Klutch.sh delivers reproducible builds, managed secrets, and persistent storage for your notes and media—all configured from klutch.sh/app. This guide covers installation, repository prep, a production-ready Dockerfile, deployment steps, Nixpacks overrides, sample API usage, and production tips.
Prerequisites
- A Klutch.sh account (sign up)
- A GitHub repository containing your Memos Dockerfile (GitHub is the only supported git source)
- An optional PostgreSQL database if you prefer it over the default SQLite storage
- A domain ready for your Memos instance
For onboarding, see the Quick Start.
Architecture and ports
- Memos serves HTTP on internal port 5230; choose HTTP traffic.
- Storage is on disk by default (SQLite). PostgreSQL is supported if you set MEMOS_DSN.
- Persistent storage is required for the SQLite database and uploads.
Repository layout
```
memos/
├── Dockerfile   # Must be at repo root for auto-detection
└── README.md
```
Keep secrets out of Git; store them in Klutch.sh environment variables.
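If you keep a local .env file or a scratch data directory while testing, ignore them explicitly so they never reach GitHub. A minimal sketch; the file and directory names here are conventions of this example, not anything Memos or Klutch.sh requires:

```bash
# Keep local secrets and test data out of version control
printf '%s\n' '.env' 'memos_data/' >> .gitignore
```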
Installation (local) and starter commands
Validate locally before pushing to GitHub:
```bash
docker build -t memos-local .
docker run -p 5230:5230 -e MEMOS_MODE=prod -e MEMOS_PORT=5230 memos-local
```
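With the container running, a quick request from another terminal confirms the web UI answers on the mapped port. A minimal smoke test; the exact response depends on your Memos version:

```bash
# Print the HTTP status code returned by the Memos web UI (expect 200)
curl -sS -o /dev/null -w '%{http_code}\n' http://localhost:5230
```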
Dockerfile for Memos (production-ready)
Place this Dockerfile at the repo root; Klutch.sh auto-detects it (no Docker selection in the UI):
```dockerfile
FROM neosmemo/memos:latest

ENV MEMOS_MODE=prod \
    MEMOS_PORT=5230

EXPOSE 5230

CMD ["./memos"]
```
Notes:
- Pin the image tag (e.g., neosmemo/memos:0.22.x) for stability and upgrade intentionally.
- Configure MEMOS_DSN for PostgreSQL if you prefer it over SQLite.
Environment variables (Klutch.sh)
Set these in Klutch.sh before deploying:
- MEMOS_MODE=prod
- MEMOS_PORT=5230
- MEMOS_DSN=postgres://<user>:<password>@<host>:5432/<db> (optional; omit to use SQLite)
- MEMOS_DATA=/var/opt/memos (keeps data in the mounted volume)
- Optional: MEMOS_ADDR=0.0.0.0, MEMOS_METRICS=true, MEMOS_DISABLE_PASSWORD_LOGIN=false
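To sanity-check these values before setting them in the dashboard, you can pass the same variables to a local container run. This is a sketch assuming a reachable PostgreSQL instance; the host and credentials are placeholders, and depending on your Memos version you may also need to select the database driver explicitly (for example via MEMOS_DRIVER=postgres):

```bash
# Mirror the Klutch.sh environment locally; replace the placeholders first
docker run -p 5230:5230 \
  -e MEMOS_MODE=prod \
  -e MEMOS_PORT=5230 \
  -e MEMOS_DATA=/var/opt/memos \
  -e MEMOS_DSN='postgres://<user>:<password>@<host>:5432/<db>' \
  memos-local
```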
If you deploy without the Dockerfile and need Nixpacks overrides (Go):
- NIXPACKS_GO_VERSION=1.21
- NIXPACKS_START_CMD=./memos --mode prod --port 5230
Attach persistent volumes
In Klutch.sh storage settings, add mount paths and sizes (no names required):
- /var/opt/memos: SQLite database, uploads, and configuration.
- /var/log/memos: optional logs if you write them to disk.
Ensure these paths are writable inside the container.
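A quick way to confirm a mount path accepts writes is to run the same check against a local named volume before relying on it in production. A minimal sketch using a throwaway Alpine container; memos-data is just an example volume name:

```bash
# Attach a named volume at the mount path and verify it accepts writes
docker run --rm -v memos-data:/var/opt/memos alpine \
  sh -c 'touch /var/opt/memos/.write-test && echo "/var/opt/memos is writable"'
```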
Deploy Memos on Klutch.sh (Dockerfile workflow)
- Push your repository, with the Dockerfile at the root, to GitHub.
- Open klutch.sh/app, create a project, and add an app.
- Select HTTP traffic and set the internal port to 5230.
- Add the environment variables above, including MEMOS_DSN if using PostgreSQL, and any telemetry or auth flags you need.
- Attach persistent volumes for /var/opt/memos (and /var/log/memos if used) sized to fit your notes and media.
- Deploy. Your Memos instance will be reachable at https://example-app.klutch.sh; attach a custom domain if desired.
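Once the deployment finishes, a single request against the public URL confirms the edge is routing to port 5230. Replace example-app.klutch.sh with your app's hostname:

```bash
# Print the HTTP status code returned by the deployed instance (expect 200)
curl -sS -o /dev/null -w '%{http_code}\n' https://example-app.klutch.sh
```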
Sample API usage
Create a memo (replace the token with your API key):
```bash
curl -X POST "https://example-app.klutch.sh/api/v1/memo" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer <token>" \
  -d '{"content":"Hello from Memos on Klutch.sh!"}'
```
List memos:
```bash
curl -X GET "https://example-app.klutch.sh/api/v1/memo" \
  -H "Authorization: Bearer <token>"
```
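To keep the token out of scripts and shell history, read it from an environment variable instead of pasting it inline. A small sketch using the same endpoint as above; MEMOS_TOKEN is an assumed variable name, not something Memos defines:

```bash
# Export the API key once per session, then reuse it for requests
export MEMOS_TOKEN='<token>'

curl -X POST "https://example-app.klutch.sh/api/v1/memo" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer ${MEMOS_TOKEN}" \
  -d '{"content":"Note created with an environment-scoped token"}'
```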
Health checks and production tips
- Add an HTTP probe to /api/health or / after enabling a simple status route.
- Enforce HTTPS at the edge; forward internally to port 5230.
- Keep the master token/API keys secret; rotate them in Klutch.sh.
- Monitor disk usage on /var/opt/memos and resize before it fills.
- Pin image versions and test upgrades in staging before production rollouts.
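For ongoing monitoring, the same probe can run from a scheduler or uptime service. A sketch assuming the /api/health route mentioned above responds on your Memos version; fall back to / if it does not:

```bash
# curl exits non-zero if the probe fails; the message surfaces the failure in logs
curl -fsS "https://example-app.klutch.sh/api/health" > /dev/null \
  || echo "Memos health check failed"
```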
Memos on Klutch.sh combines reproducible Docker builds with managed secrets, persistent storage, and flexible HTTP/TCP routing. With the Dockerfile at the repo root, port 5230 configured, and your storage mounted, you can deliver a secure, self-hosted notes platform without extra YAML or workflow overhead.