In 2023 I noticed I was paying for three separate PaaS platforms to host five small applications. Render, Fly, Railway — I’d signed up for each when it seemed convenient, and now I was spending close to $200/month for apps that used maybe 2GB of RAM combined.

A Hetzner VPS with 8GB RAM costs about €13/month: the CPX31 (4 vCPU, 8GB RAM) runs €13.10/month as of 2026, and the CPX21 (3 vCPU, 4GB RAM) is €7.55/month. Compare that to Render’s $7/service/month or Fly.io’s usage-based pricing. For side projects and indie apps, the economics are hard to argue with.

I’d been avoiding VPS deployment because the last time I tried it (years ago, with Capistrano and Nginx configs), it was tedious. But I kept hearing about Kamal, and eventually I tried it.


Kamal

Kamal is DHH’s deployment tool for getting containers onto servers without Kubernetes. It came out of 37signals’ exit from the cloud: they claim they’ll save $7M over five years by running their own hardware, and Kamal is what they use to deploy Basecamp, HEY, and everything else. You write a YAML file describing your app, your servers, and your registry. Then you run kamal setup once and kamal deploy thereafter.

service: myapp
image: registry.example.com/myapp

proxy:
  host: myapp.com
  ssl: true
  app_port: 3000

servers:
  web:
    hosts:
      - 100.70.90.101

registry:
  server: registry.example.com
  username: deploy
  password:
    - KAMAL_REGISTRY_PASSWORD

accessories:
  postgres:
    image: postgres:16
    host: 100.70.90.101
    port: "127.0.0.1:5432:5432"  # bind to localhost only
    env:
      secret:
        - POSTGRES_PASSWORD
    directories:
      - data:/var/lib/postgresql/data

Kamal builds your Docker image, pushes it to the registry, SSHes into your server, pulls the image, starts a new container, waits for the healthcheck to pass, then kills the old container. Zero-downtime deploys without me having to think about blue-green routing.

The thing that makes this work is kamal-proxy, a small Go binary that runs on your server and handles request routing, SSL via Let’s Encrypt, and health checks. (kamal-proxy replaced Traefik in Kamal 2.0; it’s purpose-built for Kamal’s workflow — blocking deploys until healthy, clear error messages, simpler configuration.) All containers run in a Docker network called kamal so they have stable hostnames the proxy can route to. Each app registers its hostname with the proxy, and incoming requests get routed to the right container.

Why not Docker Compose?

I tried Compose for production once. You can make it work, but you end up writing scripts for everything Kamal does automatically:

  • Zero-downtime deploys — Compose just replaces containers. If you want blue-green, you write it yourself.
  • Rollbacks — Kamal tags every deploy with a git SHA. kamal rollback goes back to any previous version. Compose has no concept of “what did I deploy before.”
  • Health checks that gate traffic — Kamal’s proxy waits for a 200 from your health endpoint before switching traffic. With Compose, a failing healthcheck just restarts the container forever.
  • Secrets — Kamal has a .kamal/secrets file that can pull from 1Password or Bitwarden at deploy time. Compose expects you to figure it out.
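The rollback workflow is worth seeing. A sketch, assuming the previous version’s image is still on the server (the SHA below is a placeholder):

```shell
# See which app containers (current and stopped) exist on the host,
# then roll back to a specific previously deployed version.
kamal app containers
kamal rollback 1a2b3c4d   # boots the old image, switches the proxy over
```

Rollback only works if the old image hasn’t been pruned yet, which is one more reason to be deliberate about image cleanup.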

Compose is a container orchestrator. Kamal is a deployment system. The difference matters.

My setup

One Hetzner CPX31 runs:

  • RepoEngine (repoengine.com), web plus a Celery worker
  • Sparrow Studio (sparrow.so)
  • shared Postgres and Redis accessories

(This site, shuvro.io, is on Cloudflare Pages — static sites don’t need Kamal.)

Total cost: €13.10/month plus €4 for automated backups. I was paying 10x this for less.

The setup looks like this:

                    ┌─────────────────────────────────────────┐
                    │            Hetzner VPS                  │
                    │                                         │
 repoengine.com ────┼──► kamal-proxy ──► repoengine:3000     │
                    │         │                               │
  sparrow.so ───────┼─────────┴────────► sparrow_studio:8000 │
                    │                                         │
                    │    postgres ◄──── (shared)              │
                    │    redis    ◄──── (shared)              │
                    └─────────────────────────────────────────┘

Each app has its own repository with its own config/deploy.yml. They all target the same server IP. kamal-proxy routes based on the Host header.
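A quick way to sanity-check the routing is to hit the same IP with different Host headers (hostnames and IP taken from the diagram above):

```shell
# Both requests hit the same server; kamal-proxy picks the container
# by Host header. With ssl: true the proxy will typically redirect
# HTTP to HTTPS, so follow redirects with -L.
curl -sL -H "Host: repoengine.com" http://100.70.90.101/
curl -sL -H "Host: sparrow.so"     http://100.70.90.101/
```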

# In repoengine's deploy.yml
proxy:
  host: repoengine.com
  ssl: true
  app_port: 3000

servers:
  web:
    hosts:
      - 100.70.90.101

Databases as accessories

Kamal calls long-lived containers “accessories.” My Postgres config:

accessories:
  postgres:
    image: postgres:16
    host: 100.70.90.101
    port: "127.0.0.1:5432:5432"
    env:
      secret:
        - POSTGRES_PASSWORD
    directories:
      - data:/var/lib/postgresql/data

The 127.0.0.1 binding means Postgres only accepts connections from localhost. Containers on the server can reach it; the internet cannot. This matters more than it looks: Docker publishes ports through iptables directly, bypassing ufw, so a plain port: 5432 would be reachable from the internet even with a firewall enabled.
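You can verify the binding from the server itself:

```shell
# The Ports column should show 127.0.0.1:5432->5432/tcp, not 0.0.0.0.
docker ps --format '{{.Names}}\t{{.Ports}}' | grep postgres

# Or check the listening socket directly.
sudo ss -ltn 'sport = :5432'
```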

Multiple apps share one Postgres instance with separate databases:

kamal accessory exec postgres --reuse -- psql -U postgres -c "CREATE DATABASE repoengine_production;"
kamal accessory exec postgres --reuse -- psql -U postgres -c "CREATE DATABASE sparrow_production;"

Is running Postgres on the same VPS as your apps a good idea? It depends. If your data is irreplaceable and you don’t have good backups, use managed Postgres (Hetzner offers it, as do RDS and Supabase). If you’re an indie dev with backups who can tolerate some risk, self-hosted is fine. I run both patterns: self-hosted for side projects with daily backups, managed for anything mission-critical.

The management problem

Here’s the thing that annoyed me: managing multiple apps means a lot of context-switching.

On a typical day I might need to check if RepoEngine deployed successfully, look at Celery logs because a task is failing, see why Sparrow’s healthcheck is timing out, restart Postgres after a config change. Each operation requires me to cd into the right project directory, remember which kamal command to run, maybe SSH into the server to check container state.

None of this is hard. But it adds up. I found myself putting off deployments because I didn’t want to deal with the friction.

Building lazykamal

I like lazydocker. It’s a TUI that shows all your Docker containers and lets you interact with them without typing commands. I wanted the same thing for Kamal.

So I built it. (TUI = terminal user interface: an application like htop, vim, or lazydocker that draws its interface in a terminal. TUIs are popular with developers who live in terminals because they provide visual feedback without context-switching to a browser.)

Project Mode runs from a directory with config/deploy.yml. You see your destinations (production, staging), their live status, and a menu of every Kamal command organized by category. Arrow keys to navigate, Enter to execute, Esc to go back.

Server Mode is what I use most. You give it a server address:

lazykamal --server 100.70.90.101

It SSHes in, queries Docker for all containers with Kamal labels, and groups them by app. No Kamal needed on the server — it works purely through SSH and Docker.

The grouping logic figures out that if myapp and myapp-sidekiq both exist, then myapp-sidekiq belongs to myapp. It handles any naming pattern that follows Kamal’s convention:

● repoengine (production)
├─ Web: 1/1 containers
├─ postgres: 1 container
├─ redis: 1 container
└─ celery: 1 container

● sparrow_studio (production)
├─ Web: 1/1 containers
└─ postgres: 1 container
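The heuristic itself is simple. A naive sketch of the idea (not lazykamal’s actual code; real service names can contain dashes, which this doesn’t handle, and the container names below are made up):

```shell
# Kamal names app containers SERVICE-ROLE-VERSION. Given container
# names, derive the app each one belongs to by taking everything
# before the first dash.
names="repoengine-web-abc123
repoengine-celery-abc123
sparrow_studio-web-def456"

for n in $names; do
  app=${n%%-*}   # strip the first dash and everything after it
  echo "$n -> $app"
done
```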

I run Tailscale on my machines, so I can just do lazykamal -s repoengine-vps and see everything regardless of which local directory I’m in.

Why not a web dashboard?

Several Kamal dashboard projects exist. I didn’t want a web interface.

The terminal is where I work. Opening a browser to check deployment status feels like leaving my desk to check a bulletin board in another room. A TUI sits in a tmux pane alongside my editor. (My setup: tmux with panes for neovim, a test runner, and lazykamal.) When I deploy, I glance at the status panel. If logs look wrong, I drill in. It’s ambient, not a destination.

There’s also a security argument. Lazykamal runs locally and connects via SSH. No web server running on my VPS with a management interface exposed to the internet.

Practical guide

If you want to replicate this:

1. Get a VPS

I use Hetzner (CPX21 or CPX31), Ubuntu 24.04, SSH key auth. Any VPS works.

2. Secure it

# Disable password auth
sudo sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
sudo systemctl restart ssh

# Firewall
sudo ufw allow 22/tcp
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw enable

3. Create a deploy user

sudo adduser deploy
sudo usermod -aG docker deploy

Add your SSH key to /home/deploy/.ssh/authorized_keys. (If Docker isn’t installed yet, the docker group won’t exist until kamal setup installs Docker; run the usermod afterwards in that case.)

4. First app

In your app’s repo, create config/deploy.yml:

service: myapp
image: ghcr.io/yourname/myapp

proxy:
  host: myapp.com
  ssl: true
  app_port: 3000

servers:
  web:
    hosts:
      - YOUR_VPS_IP

registry:
  server: ghcr.io
  username: yourname
  password:
    - KAMAL_REGISTRY_PASSWORD

ssh:
  user: deploy

env:
  clear:
    RAILS_ENV: production
  secret:
    - RAILS_MASTER_KEY
    - DATABASE_URL

Create .kamal/secrets:

KAMAL_REGISTRY_PASSWORD=ghp_your_github_token
RAILS_MASTER_KEY=your_master_key
DATABASE_URL=postgres://postgres:password@myapp-postgres:5432/myapp

Then:

kamal setup   # First time: installs Docker, kamal-proxy
kamal deploy  # Builds, pushes, deploys

5. Second app

Same thing, different repo, different service name, different host:

service: secondapp
image: ghcr.io/yourname/secondapp

proxy:
  host: secondapp.com
  ssl: true
  app_port: 8000

servers:
  web:
    hosts:
      - YOUR_VPS_IP  # Same server

registry:
  server: ghcr.io
  username: yourname
  password:
    - KAMAL_REGISTRY_PASSWORD

ssh:
  user: deploy

Run kamal deploy. kamal-proxy picks up the new app automatically.

6. Use lazykamal

brew tap shuvro/lazykamal https://github.com/shuvro/homebrew-lazykamal
brew install lazykamal
lazykamal --server deploy@YOUR_VPS_IP

Or go install github.com/shuvro/lazykamal@latest.

Resource planning

My 8GB VPS allocation:

  • 2GB for Postgres
  • 512MB for Redis
  • ~1.5GB per web app (including Celery workers)
  • 2GB headroom for OS, bursts, and proxy
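Sanity-checking that this fits (Celery workers counted inside the per-app 1.5GB, two web apps):

```shell
# 2GB Postgres + 512MB Redis + 2 x 1.5GB apps + 2GB headroom, in MB.
total_mb=$(( 2048 + 512 + 2 * 1536 + 2048 ))
echo "${total_mb} of 8192 MB"   # 7680 of 8192 MB, a ~512MB margin
```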

RAM is usually the constraint. If you’re running out, upgrade the VPS or split apps across servers. Kamal handles multi-host deploys — just list multiple IPs under hosts.
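Spreading a role across servers is just more entries in the config. A sketch (the second IP is a placeholder):

```yaml
servers:
  web:
    hosts:
      - 100.70.90.101
      - 100.70.90.102   # second host; Kamal deploys to both
```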

Mistakes I made

Secrets management was a mess. I had .kamal/secrets files everywhere with inconsistent variable names. Now I use Kamal’s 1Password integration: SECRET_KEY=$(op read op://Production/MyApp/SECRET_KEY). Secrets are evaluated at deploy time; nothing lives in plaintext. (This requires the op CLI; your secrets file becomes shell expressions that Kamal evaluates. Bitwarden and other managers are also supported.)

Disk space. Docker images pile up. After a month I had 40GB of old images. Now I run kamal prune all weekly.

No monitoring. When something broke, my only visibility was docker logs. Should have set up Prometheus + Grafana from day one.

No database backups. I was lucky. Now I have daily pg_dump to S3 and Hetzner volume snapshots. Don’t skip this.
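The backup itself is a one-liner in cron. A sketch, assuming awscli is configured on the server, the bucket name is made up, and the Postgres accessory container follows Kamal’s service-postgres naming:

```shell
# Nightly dump of one database, gzipped and streamed to S3.
# Crontab entry: 0 3 * * * /home/deploy/backup.sh
docker exec repoengine-postgres pg_dump -U postgres repoengine_production \
  | gzip \
  | aws s3 cp - "s3://my-db-backups/repoengine/$(date +%F).sql.gz"
```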

What I like about this setup

I understand everything. I can SSH in and see what’s running. I can trace a request from DNS to container to application code. When something breaks, I know where to look.

This won’t scale to 100 servers. But I’m not running 100 servers. I’m running one server with a handful of apps, and it costs less than two lattes a month. For indie projects and side businesses, the economics are obvious.



Changelog

  • 2026-02-03: Initial draft