Personal infrastructure-as-code for a home/edge Kubernetes cluster, written in Python with Pulumi and managed per-project with the uv toolchain. Workloads are deployed as independent Pulumi projects; some lightweight services are deployed to Modal.
This README is a practical, end‑to‑end guide for getting from zero to a working stack.
Prerequisites:

- CLI: `pulumi`, `kubectl`, `helm`, `uv` (and optionally `modal`)
- Python: 3.12 for Pulumi projects; 3.10+ for Modal apps
- Access: `pulumi login` to your backend; correct kube context set (`kubectl config current-context`)
Deploy any Pulumi project independently:
```sh
cd pulumi/<category>/<service>
uv sync
pulumi stack select <stack>  # or: pulumi stack init <stack>
# set required config for the project
pulumi preview && pulumi up
```

Modal app deploy (optional):
```sh
cd modal/<service>
modal deploy
```

Repository layout:

- `pulumi/` — Pulumi projects by domain
  - `core/` — cluster/platform primitives
    - `networking/` — ingress + edge (Cloudflare Tunnel, Tailscale)
    - `operators/` — operators (CloudNativePG)
    - `security/` — security primitives (cert-manager, Vault)
  - `ops/` — monitoring/observability (kube‑prometheus‑stack)
  - `data/` — data plane and services
    - `databases/` — Postgres, CockroachDB
    - `analytics/` — ClickHouse, JupyterHub, Spark, Superset
    - `streaming/` — Redpanda
    - `ml/` — Chroma
    - `workflow/` — n8n, Temporal
  - `apps/` — end‑user applications
    - `media/` — Immich
    - `stitch/` — Stitch app
- `modal/` — Modal apps (e.g., `golink/`)
- `pulumi/lib/` — small helper library used by projects
- See AGENTS.md for stack wiring patterns and runbooks
Use Pulumi stacks per environment (e.g., dev, prod). Each project has its own stack namespace and config. For cross‑stack wiring patterns and runbooks (CNPG → Postgres → Apps; ESC environment binding; Tailscale exposure), see AGENTS.md.
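The wiring pattern in one picture: a producer stack exports values, and a consumer stack binds to a specific producer through a `StackReference` named in config. A minimal sketch, assuming the consumer reads a `postgres_stack` config key and the producer exports `ts_hostname` (names are illustrative; the real ones live in the project code and AGENTS.md):

```python
import pulumi

config = pulumi.Config()

# Bind to the producer stack named in config, e.g.
#   pulumi config set immich:postgres_stack <org>/postgresql/<stack>
pg_stack = pulumi.StackReference(config.require("postgres_stack"))

# Values the producer exported with pulumi.export(...) arrive here as Outputs.
db_host = pg_stack.get_output("ts_hostname")
```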
```sh
# Tailscale operator
cd pulumi/core/networking/tailscale
uv sync && pulumi stack select <stack>
pulumi config set tailscale:TS_CLIENT_ID <id>
pulumi config set --secret tailscale:TS_CLIENT_SECRET <secret>
pulumi up
```
```sh
# Cloudflare Tunnel ingress controller
cd pulumi/core/networking/cf-tunnel
uv sync && pulumi stack select <stack>
pulumi config set cloudflare-tunnel:cloudflareAccountId <id>
pulumi config set --secret cloudflare-tunnel:cloudflareTunnelApiToken <token>
pulumi up
```
```sh
# cert-manager
cd pulumi/core/security/cert-manager
uv sync && pulumi stack select <stack>
pulumi config set namespace cert-manager
pulumi up
```
```sh
# CloudNativePG operator
cd pulumi/core/operators/cnpg
uv sync && pulumi stack select <stack>
pulumi up
```
```sh
# Monitoring stack
cd pulumi/ops/monitoring
uv sync && pulumi stack select <stack>
pulumi up
```

```sh
# PostgreSQL (CloudNativePG cluster)
cd pulumi/data/databases/postgres
uv sync && pulumi stack select <stack>
pulumi config set namespace postgresql
pulumi preview && pulumi up
```
```sh
# ClickHouse
cd pulumi/data/analytics/clickhouse
uv sync && pulumi stack select <stack>
pulumi preview && pulumi up
```

```sh
# Immich (media)
cd pulumi/apps/media/immich
uv sync && pulumi stack select <stack>
pulumi config set namespace immich
# wire to Postgres via StackReference (required)
pulumi config set immich:postgres_stack <org>/postgresql/<stack>
# optional: pulumi config set immich:library_storage_size 500Gi
pulumi preview && pulumi up
```
# How Immich connects to Postgres
In‑cluster pods use the Kubernetes Service DNS. Local Pulumi providers use the
Tailscale hostname exported by the Postgres stack.
In‑cluster path (pod → service → CNPG):

```
[immich pod]
 └── DB_HOSTNAME=postgresql-cluster-rw.<ns>.svc.cluster.local:5432
      └── [CNPG cluster]
```

Local Pulumi path (dev machine → tailnet → CNPG):

```
[pulumi@mac]
 └── host=ts_hostname (e.g., "postgresql")
      └── [tailscale proxy pod]
           └── [ClusterIP: postgresql-cluster-rw-ext]
                └── [CNPG cluster]
```
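For the local path, the provider simply dials the tailnet hostname pulled from the Postgres stack's outputs. A sketch, assuming `pulumi_postgresql` is the provider in use and `postgres_password` is a config key (both are illustrative, not the repo's confirmed choices):

```python
import pulumi
import pulumi_postgresql as postgresql

config = pulumi.Config()
pg_stack = pulumi.StackReference(config.require("postgres_stack"))

# Dial the tailnet hostname, not the in-cluster Service DNS: a dev machine can
# reach the tailscale proxy pod but cannot resolve *.svc.cluster.local names.
provider = postgresql.Provider(
    "pg-via-tailnet",
    host=pg_stack.get_output("ts_hostname"),
    port=5432,
    username="postgres",                                  # illustrative
    password=config.require_secret("postgres_password"),  # illustrative key
    sslmode="require",
)
```

Database-level resources (roles, databases, grants) created with this provider then travel over the tailnet instead of requiring a port-forward.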
Postgres extensions are managed in the Postgres stack. Example (mx):
```sh
cd pulumi/data/databases/postgres
uv sync && pulumi stack select mx
pulumi config set postgresql:app_databases '["immich"]'
pulumi config set postgresql:extensions '["vector","cube","earthdistance"]'
pulumi preview --diff && pulumi up
```
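A hedged sketch of how a stack can turn those config lists into bootstrap SQL; this is not the repo's actual code, but CNPG's `postInitApplicationSQL` field exists for exactly this purpose (sizing and naming below are illustrative):

```python
import pulumi
import pulumi_kubernetes as k8s

config = pulumi.Config("postgresql")
app_databases = config.get_object("app_databases") or []
extensions = config.get_object("extensions") or []

cluster = k8s.apiextensions.CustomResource(
    "postgresql-cluster",
    api_version="postgresql.cnpg.io/v1",
    kind="Cluster",
    metadata={"name": "postgresql-cluster", "namespace": "postgresql"},
    spec={
        "instances": 2,               # illustrative sizing
        "storage": {"size": "10Gi"},  # illustrative sizing
        "bootstrap": {
            "initdb": {
                "database": app_databases[0] if app_databases else "app",
                "owner": "app",
                # CNPG runs these statements against the application database
                # after initdb, which is where CREATE EXTENSION belongs.
                "postInitApplicationSQL": [
                    f'CREATE EXTENSION IF NOT EXISTS "{ext}";' for ext in extensions
                ],
            }
        },
    },
)
```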
```sh
# n8n (workflow)
cd pulumi/data/workflow/n8n
uv sync && pulumi stack select <stack>
pulumi config set n8n:namespace n8n
pulumi config set --secret n8n:postgresPassword <password>
# optional: pulumi config set n8n:image n8nio/n8n:<tag>
pulumi preview && pulumi up
```
```sh
# Stitch (app)
cd pulumi/apps/stitch
uv sync && pulumi stack select <stack>
# set PORT, POSTGRES_HOST, k8s_namespace, TWITCH_CLIENT_ID, WEBHOOK_URL, etc.
pulumi preview && pulumi up
```

Conventions:

- Python formatting and files: `.editorconfig` (4‑space Python; 2‑space YAML)
- Resource naming: hyphenated names; labels like `{"app": <name>}`
- Ingress classes: Tailscale (`tailscale`) for internal, Cloudflare Tunnel (`cloudflare-tunnel`) for public
- Readiness: prefer `depends_on` and `pulumi.com/waitFor` annotations for CRDs and stateful services (see the sketch after this list)
- Secrets: always set via `pulumi config set --secret` and consume from config or a K8s Secret; never hard‑code
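To make the readiness and namespace conventions concrete, a sketch borrowing cert-manager from the stack above (resource names, chart values, and the Certificate are illustrative): create the `Namespace` first, make the Helm release depend on it, and put a `pulumi.com/waitFor` annotation on a CRD instance so `pulumi up` blocks until it is actually Ready:

```python
import pulumi
import pulumi_kubernetes as k8s

ns = k8s.core.v1.Namespace(
    "cert-manager",
    metadata={"name": "cert-manager", "labels": {"app": "cert-manager"}},
)

# A chart installing into a freshly created namespace must depend on it
# explicitly, or Pulumi may try to install before the namespace exists.
release = k8s.helm.v3.Release(
    "cert-manager",
    chart="cert-manager",
    namespace=ns.metadata["name"],
    repository_opts={"repo": "https://charts.jetstack.io"},
    values={"installCRDs": True},
    opts=pulumi.ResourceOptions(depends_on=[ns]),
)

# For CRDs and stateful services, pulumi.com/waitFor tells the Kubernetes
# provider to await a readiness condition instead of returning immediately.
cert = k8s.apiextensions.CustomResource(
    "example-cert",
    api_version="cert-manager.io/v1",
    kind="Certificate",
    metadata={
        "name": "example-cert",
        "namespace": ns.metadata["name"],
        "annotations": {"pulumi.com/waitFor": "condition=Ready"},
    },
    spec={
        "secretName": "example-tls",
        "issuerRef": {"name": "example-issuer", "kind": "ClusterIssuer"},
        "dnsNames": ["example.internal"],
    },
    opts=pulumi.ResourceOptions(depends_on=[release]),
)
```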
Operational tips:

- Confirm kube context before applying: `kubectl config current-context`
- Always dry‑run: `pulumi preview`
- Stuck updates: `pulumi cancel` (in another terminal)
- Rollback: checkout a known‑good commit and `pulumi up`
- Scoped destroys: use `pulumi destroy` on the specific project+stack only
Troubleshooting:

- Namespaces missing: many projects create their own `Namespace`; if a Helm chart installs into a new namespace, ensure `depends_on` is set (this repo does so for critical stacks)
- Ingress routing: verify ingress classes exist and are reconciling (`kubectl get ingressclass`); check `tailscale.com/*` annotations when exposing services
- Postgres consumers: prefer consuming outputs from the Postgres stack rather than re‑deriving values; update stacks if needed
Commits and PRs:

- Conventional Commits (examples): `feat: add n8n deployment`, `refactor: consolidate pulumi projects into category structure`
- Scoped prefixes (ok): `pulumi: deploy glance`, `modal: deploy golink`
- PRs should capture:
  - Summary and rationale
  - Commands run and outcomes (paste the `pulumi preview` summary)
  - Config/secrets touched and rollback notes