
Deploying K3s

Introduction

K3s is a lightweight Kubernetes distribution built for edge and resource-constrained environments. Deploying K3s with a Dockerfile on Klutch.sh lets you preconfigure the single-node control plane and agents for reproducible, containerized clusters managed from klutch.sh/app. This guide walks through repository prep, a production-ready Dockerfile, deployment steps, environment variables, persistent storage rules, and Nixpacks overrides to launch a self-contained K3s node.


Prerequisites

  • A Klutch.sh account (create one)
  • A GitHub repository containing your K3s configuration (GitHub is the only supported git source)
  • Docker and Go familiarity for building custom agents or controllers
  • Persistent storage for /var/lib/rancher/k3s
  • Optional: Private registries or cloud provider credentials used by your workloads

Review the Quick Start for repo setup.


Architecture and ports

  • K3s exposes the Kubernetes API on port 6443; set the internal container port to 6443 in Klutch.sh.
  • Additional services (traefik, metrics, etc.) expose internal ports (80, 443) but are handled within the cluster.
  • Persistent storage must be attached for /var/lib/rancher/k3s to survive restarts and store etcd data.
  • Manage optional node communication via TCP apps (external port 8000) if you need agent connectivity from outside.
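
A quick way to confirm the API server is answering on 6443 is an unauthenticated request to /healthz from the node itself (a minimal check; -k skips verification because the cluster certificate is self-signed):

curl -k https://127.0.0.1:6443/healthz   # expect "ok"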

Repository layout

k3s/
├── config/ # K3s config.yaml or other config files
├── scripts/ # helper scripts (install, health checks)
├── Dockerfile # Must be at repo root for auto-detection
├── go.mod # Optional if you build Go binaries
├── go.sum
└── .env.example # Template only; no secrets

Keep secrets in Klutch.sh secrets; do not commit them.
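
A minimal .env.example sketch (template values only; the names mirror the environment variables listed later in this guide, and real values belong in Klutch.sh secrets):

# .env.example: placeholders only, never real values
K3S_NODE_NAME=edge-node-1
K3S_KUBECONFIG_OUTPUT=/output/kubeconfig
K3S_KUBECONFIG_MODE=644
NODE_IP=
K3S_TOKEN=changeme
K3S_URL=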


Installation (local) and starter commands

Test locally with the upstream installer before publishing:

curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--tls-san <your-domain>" sh -s - server \
--write-kubeconfig-mode 644
kubectl get nodes
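
With --write-kubeconfig-mode 644 the installer leaves a readable kubeconfig at /etc/rancher/k3s/k3s.yaml; point kubectl at it if your default context is empty:

export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
kubectl get nodes
kubectl get pods -A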

Optional start.sh for portability and Nixpacks fallback:

#!/bin/sh
set -eu
# Prefer the k3s binary already present in the image (rancher/k3s); otherwise install via the upstream script.
if command -v k3s >/dev/null 2>&1; then
  exec k3s server --write-kubeconfig-mode 644 --node-ip "${NODE_IP:-127.0.0.1}"
fi
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--write-kubeconfig-mode 644 --node-ip=${NODE_IP:-127.0.0.1}" sh -s - server
exec tail -f /var/log/k3s/k3s.log

Make it executable with chmod +x start.sh.


Dockerfile for K3s (production-ready)

Place this Dockerfile at the repo root; Klutch.sh auto-detects it, so no build-type selection is needed in the UI:

FROM rancher/k3s:v1.30.2-k3s1
ENV K3S_KUBECONFIG_OUTPUT=/output/kubeconfig
ENV K3S_KUBECONFIG_MODE=644
VOLUME ["/var/lib/rancher/k3s", "/output"]
COPY scripts/start.sh /usr/local/bin/start-k3s.sh
RUN chmod +x /usr/local/bin/start-k3s.sh
EXPOSE 6443
CMD ["/usr/local/bin/start-k3s.sh"]

Notes:

  • Keep the k3s tag pinned to avoid unexpected upgrades.
  • Use /output or other volumes to expose kubeconfig to sidecars or tooling.
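
To smoke-test the image locally before pushing, run it as a privileged container so K3s can manage cgroups and networking (a rough sketch; the image tag and volume names are placeholders):

docker build -t k3s-node .
docker run -d --name k3s-local --privileged \
  -p 6443:6443 \
  -v k3s-data:/var/lib/rancher/k3s \
  -v k3s-output:/output \
  k3s-node
docker logs -f k3s-local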

Environment variables (Klutch.sh)

Configure the following in Klutch.sh app settings (Secrets tab):

  • K3S_NODE_NAME (optional friendly node name)
  • K3S_KUBECONFIG_OUTPUT=/output/kubeconfig
  • K3S_KUBECONFIG_MODE=644
  • K3S_URL=https://example-app.klutch.sh:6443 (if connecting agents)
  • INSTALL_K3S_EXEC=server --node-taint CriticalAddonsOnly=true:NoExecute --flannel-backend=none
  • NODE_IP=<internal ip>
  • K3S_TOKEN=<secure-token> (if joining agents)

If you customize Nixpacks:

  • NIXPACKS_BUILD_CMD="echo k3s image is prebuilt"
  • NIXPACKS_START_CMD=/usr/local/bin/start-k3s.sh
  • NIXPACKS_GO_VERSION=1.21

These keep K3s compatible with Klutch.sh builds when a Dockerfile is absent.


Attach persistent volumes

Add the following mount paths (size specified in Klutch.sh UI):

  • /var/lib/rancher/k3s — required for etcd/SQLite data.
  • /output — optional; holds kubeconfig or logs.

Klutch.sh handles mount path and size only; no names are required.
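
For orientation, the state that must survive restarts lives roughly here on a default single-node install:

/var/lib/rancher/k3s/server/db    # SQLite state.db or embedded etcd data
/var/lib/rancher/k3s/server/tls   # cluster CA and serving certificates
/var/lib/rancher/k3s/agent        # containerd images and kubelet state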


Deploy K3s on Klutch.sh (Dockerfile workflow)

  1. Push your repository (with the Dockerfile at the root) to GitHub.
  2. Open klutch.sh/app, create a project, and add an app.
  3. Connect the GitHub repository; Klutch.sh automatically detects the Dockerfile.
  4. Choose HTTP for the control plane API and set the internal port to 6443.
  5. Add the environment variables above (kubeconfig paths, install flags, tokens, and any NIXPACKS_* overrides).
  6. Attach persistent volumes for /var/lib/rancher/k3s (and /output if needed), selecting sizes that fit your stateful workloads.
  7. Deploy. The Kubernetes API is reachable at https://example-app.klutch.sh; point your kubeconfig at that endpoint, as shown below.
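
To use the cluster from a workstation, copy the kubeconfig that K3s wrote to /output/kubeconfig and point its server field at the public endpoint (a hedged sketch: 127.0.0.1:6443 is the default address K3s writes, and the hostname is this guide's placeholder):

sed -i 's#https://127.0.0.1:6443#https://example-app.klutch.sh#' kubeconfig
export KUBECONFIG=$PWD/kubeconfig
kubectl get nodes

Make sure the serving certificate covers that hostname, for example via the --tls-san flag shown in the local install section.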

If you require inbound TCP traffic (e.g., node agents or Helm), create a separate Klutch.sh TCP app with internal port 5640 (or whichever port your workload exposes) and connect through the example-app.klutch.sh:8000 address.
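
Joining an external agent through such a TCP app would then look roughly like this (a sketch; the address is this guide's example TCP endpoint, and K3S_TOKEN must match the server's token):

curl -sfL https://get.k3s.io | \
  K3S_URL="https://example-app.klutch.sh:8000" \
  K3S_TOKEN="<secure-token>" \
  sh -s - agent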


Health checks and production tips

  • Monitor /var/log/k3s/k3s.log and use a sidecar to forward logs to your observability stack.
  • Rotate K3S_TOKEN and credentials regularly.
  • Back up /var/lib/rancher/k3s before upgrades (see the backup sketch after this list).
  • Enforce HTTPS at the edge; Klutch.sh already routes HTTPS to your app.
  • Keep manifests and controllers in source control for reproducibility.
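
A minimal backup sketch before an upgrade (with the default SQLite backend a copy of the data directory is sufficient; clusters running embedded etcd can use k3s etcd-snapshot save instead):

tar czf k3s-backup-$(date +%F).tar.gz -C /var/lib/rancher k3s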

K3s on Klutch.sh delivers a reproducible single-node Kubernetes control plane with managed secrets, persistence for cluster state, and flexible HTTP/TCP routing. With the Dockerfile at the repo root and port 6443 configured, you can bootstrap lightweight clusters without needing extra YAML or workflow overhead.