Over the past several months, I've opened up to the idea of using Ansible and Kamal to manage VPSes where I host sites that I want to make accessible to the public. Now I'm setting up a new service that includes a basic static site where I need a way to easily share files with people. For other similar static sites, I've been using a shared host or my git forge's "pages" service. But these files are not source code and are larger than I want to put on the git forge, and since my VPS has plenty of space, I'd rather not pay for space for them on my shared host.

So, since I'm already using Kamal on the VPS I'm paying for anyway, I've decided to host a static site in what might be the silliest way possible.

Side Note: After coming back from winter holidays this year, I’ve decided that I’m going to try to write more. This is the first post in my effort to write at least one per week. I’m hoping this (more modest than usual) goal is one I’m more likely to stick with in my annual-ish effort to blog more.

Kamal is a relatively simple orchestrator that was released by 37signals when they decided to transition away from most of their “cloud” hosting. While it was developed for Ruby on Rails things, it can host any site you care to dockerize.

Ansible is a FOSS automation gadget for configuring servers. It’s currently owned/shepherded by Red Hat. It’s been around for a while, and I waited too long to learn about it, despite it being common in the circles around me. As I was using VPSes more to publish demos that I wanted others to access, I noticed two things that finally motivated me to learn it:

  1. As soon as I started my reverse proxy (Caddy) it would, quite sensibly, use ACME to get a TLS certificate from Let's Encrypt for any name it was configured to front. This would, again quite sensibly, generate Certificate Transparency log entries for any certificates that got issued. Very quickly after that, I'd start seeing scans of my heretofore quiet new VPS. Given my tech choices and relatively minimal installations, those scans were merely annoying, but they made me want to make sure I had some basic user/group and firewall hygiene in place from the very beginning.

  2. My initial setup checklist was growing and getting annoying to manually apply, plus I was missing steps, and generally committing errors that made standing up a new app/demo more fiddly than fun.

So I wanted to automate things, and despite my disdain for yaml, Ansible's "write root's authorized_keys and go" approach (no agent, no other real ceremony after installing the OS on the VPS) was my favorite after looking around at the common automation approaches. And I need to just accept yaml as a fact of my environment at this point. I can moan about it, but I can't avoid it.

So I tried out several sets of Ansible roles and settled on an approach that I like. I've shared that here, but I suspect I'm the only one using it right now. This reduces my VPS setup to: configure a new Ubuntu LTS VPS with my authorized key, put that VPS into the appropriate inventory file, and run the playbook. Then it's got docker and is ready for Kamal. (Note: Kamal can configure docker, but it will not configure the other things I like to do on the server, like admin users, firewall, shared directories, update policies, etc.)
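That last step is an ordinary ansible-playbook run against an inventory file. Here's a minimal sketch of what it looks like; the inventory and playbook file names are placeholders, not the actual files from my setup:

# demos.ini — one line per VPS that should get the baseline setup
# [demos]
# new-demo.example.com ansible_user=root

# apply the playbook to every host in that inventory
ansible-playbook -i demos.ini setup.yml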

Finally, it’s worth noting that for my static site, I’m using tailwind and usually alpine. And I don’t like public CDNs, so I’ve earned myself a build step. I’m using npm and vite for that build step.

With that context established, here’s the world’s silliest static site setup.

Site Content

My site consists of two top-level content items:

  • public contains things that don't need a build step. Images, favicons, etc. will land there. vite/rollup will merge this directory into the built artifacts when running build, and will just serve it up as-is in dev mode.
  • src contains files that need to be processed before they can be deployed: HTML that needs to be processed by tailwind, CSS or JavaScript that needs to be processed by tailwind or rollup, etc. The pixelcave starter for using vite with tailkit is a very good example of what I'm doing. (And I'm a happy pixelcave customer, for their paid components.)

And then there’s the package.json and vite.config.js that npm and vite use to build the site.

I’m willing to put all of this into my container image. The larger files that I want to share on the site don’t belong either in that image or in source control. More on that later.

With this, I have everything I need for local development. To develop the site with a nice, live-reload local server, all I need is:

# optional, but recommended if you're using nvm
nvm use --lts
npm install
npm run dev

If all you wanted was a tree full of files to dump on a shared host, npm run build would produce a dist directory suitable for that, and you'd be done. But wanting to keep this site alongside some related applications, and to have a set of shared files that don't go onto the shared host, left me with a few more things to work out.
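For completeness, that simpler shared-host path is just a build followed by a copy; the destination here is a placeholder:

npm run build
# dist/ now holds index.html, the hashed asset bundles, and everything from public/
rsync -av dist/ you@sharedhost.example:~/public_html/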

Turning it into a Container

Kamal deals in containers. And a container is a way of packaging a program to do a thing. Caddy is my first resort for serving things up on the web. In the simplest cases it's just a single binary, so you might not be tempted to bother with a container, but that's not going to work on a system where Kamal wants to be the reverse proxy. So we need a way to get Caddy and all of the built artifacts from the site content into a container together. While writing a Dockerfile to do exactly that is reasonably straightforward, it opens you up to a build that's tied to a particular npm installation, or to an unreasonably large container that includes all of the node, npm, vite, and tailwind pieces that you only need at build time.

The right way to address this is to write a Dockerfile that builds in multiple stages:

# Stage 1: install vite and build static site
FROM node:lts-bookworm-slim AS build-site

RUN npm install -g npm@11.5

RUN mkdir /code
RUN chown node:node /code
WORKDIR /code

USER node
COPY package.json package.json
RUN npm install

COPY public /code/public
COPY src /code/src
COPY vite.config.js /code/

RUN npm run build

# Stage 2: build image to serve static files
FROM caddy:2-alpine

COPY Caddyfile /etc/caddy/Caddyfile
COPY --from=build-site /code/dist /usr/share/caddy

This automates running the npm build script, then copies the result into a very minimal container that contains the caddy binary. I also like to check the Caddy configuration into source control as a Caddyfile and copy it in explicitly:

:80 {
  root * /usr/share/caddy
  file_server
}

(That’s the current default for the upstream caddy image, but I prefer to be explicit here.)

With this in place, you can test out your site by running:

docker build -t my_site_tag_name .
docker run -p 9999:80 my_site_tag_name

and visiting http://localhost:9999/ in your favorite browser.

Once this is working, it’s time to add Kamal to the mix, and use that to deploy your site. But since that will require some explanation of container registries, I’m moving that to its own post. I’ll link part 2 below, when it’s ready.