Now that I can build my static site and serve it with Caddy inside a Docker container, it's time to publish it to a server on the internet. If I didn't want to host other things alongside it, I'd stop here and throw it on a static host somewhere: GitHub Pages, GitLab Pages, SourceHut Pages, or a shared hosting space at nearlyfreespeech.net. But I also want to publish some larger file downloads, and those are a poor fit for the software forges. They'd be a fine fit for nearlyfreespeech.net, but would increase my storage and transfer bill there while my VPS sits under-used, with plenty of space and transfer capacity. Paying to move the downloads to NFSN wouldn't let me shrink or decommission the VPS, so using the VPS feels like the better option. Here's how I'm using Kamal to do that.

If you haven’t already, read part 1 first.

Another option might be to use S3 (or an S3-alike) for these downloads. While that would make sense for me in some situations, for the things I’m putting on this VPS, predictable costs are important to me. Hosting a few large files doesn’t change the cost of my VPS, but if I see a sudden burst of activity, S3 costs can quickly go higher than I’d consider worthwhile for the things I’m sharing here. If I used half of my VPS' storage and all of its transfer in a month on S3, the $15-ish VPS bill would become a $310 S3 bill.
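
To put a rough number on that, here's an illustrative back-of-the-envelope calculation in Ruby. The storage and transfer figures, and the per-GB prices, are assumptions for the sake of the example (check current S3 pricing), not my actual VPS specs:

```ruby
# Back-of-the-envelope S3 cost sketch. All figures below are illustrative
# assumptions, not actual VPS specs or guaranteed AWS prices.
storage_gb = 500    # half of an assumed 1 TB VPS disk
egress_gb  = 3300   # an assumed monthly transfer allowance, fully used

storage_cost = storage_gb * 0.023  # assumed S3 standard storage, $/GB-month
egress_cost  = egress_gb * 0.09    # assumed internet egress, $/GB

total = storage_cost + egress_cost
puts format('$%.2f/month', total)  # roughly $310 at these assumptions
```

The point isn't the exact figure; it's that egress dominates, and a single burst of download traffic moves the bill in a way a flat-priced VPS never will.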

Container Registries

Kamal uses container images to publish web applications, so it needs a registry to push them to. Docker Hub only allows one private repository on its free plan, and the next tier up is $11/mo. GitHub is more generous with its free tier. Kamal also offers the ability to run a local registry and publish through an SSH tunnel to your server. I haven't tried that, but if I were starting fresh now, that's how I'd start. Since I've already set up an AWS ECR account, which at last check costs me less than $0.10/mo, I'm using ECR with Kamal. The excellent SaaS Pegasus documentation for using ECR with Kamal has more details on setting it up, though with Kamal's local registry support, a hosted registry should now be optional.

Configuring Kamal

If you haven’t installed kamal on your development machine yet, follow the instructions on kamal-deploy.org first. Then run kamal init from the root of your project to generate a skeletal configuration file.

Editing config/deploy.yml

First, name your service and image. e.g.:

service: www-example-dev

image: example/www

Immediately below that, because I'm using Ansible as the single source of truth for my VPS configuration, I add a bit of embedded Ruby (ERB) to the configuration file to pull that in:

<%
  require 'dotenv/load'            # load SITE_NAME (and friends) from .env
  puts "site=#{ENV['SITE_NAME']}"  # echo which site this deploy targets

  # Look up this site's entry in the shared Ansible inventory.
  def get_site_info(site_name = nil)
    require 'yaml'
    site_name ||= ENV['SITE_NAME']
    hosts = YAML.load_file('../5tools_inventories/hosts-web.yml')
    hosts['webservers']['vars']['sites'].find { |s| s['name'] == site_name }
  end
%>

This requires a path to the inventory file, which is always checked out as a peer to my project. It also requires SITE_NAME to be set in the environment. I do that by adding SITE_NAME=www.example.com to the bottom of my .env file; this needs to correspond to one of the entries in hosts-web.yml, which I describe here. For this static site, here's the entry I use:

webservers:
  vars:
    sites:
      - name: www.example.com
        user: caddy
        uid: 4201
        group: site
        gid: 4201
        data_directories:
          - path: files
    ansible_become_method: su
    ansible_user: root
  hosts:
    vps.example.com:
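
To make the lookup concrete, here's a self-contained sketch of what get_site_info does with that entry, with a trimmed copy of the inventory inlined as a string instead of loaded from disk:

```ruby
require 'yaml'

# Trimmed, inlined copy of the relevant hosts-web.yml structure
# (the real helper loads it from disk with YAML.load_file).
inventory = YAML.safe_load(<<~YML)
  webservers:
    vars:
      sites:
        - name: www.example.com
          user: caddy
          uid: 4201
          group: site
          gid: 4201
YML

# Mirrors get_site_info: find the site entry matching SITE_NAME.
site = inventory['webservers']['vars']['sites']
                .find { |s| s['name'] == 'www.example.com' }

puts site['user']                     # caddy
puts "#{site['uid']}:#{site['gid']}"  # 4201:4201
```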

Before the rest of the Kamal deployment configuration, one more bit of ERB sets up the volume mapping for the file-share area, so the same Caddy instance that serves my site can serve the downloads:

volumes:
  - "<%= site = get_site_info; "/srv/#{site['name']}/files:/usr/share/caddy/files" %>"
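
With the example inventory entry above, that ERB renders to a plain Docker bind mount:

```yaml
volumes:
  - "/srv/www.example.com/files:/usr/share/caddy/files"
```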

In the servers dict, I configure the user option so that the container will be able to access the shared files on the host filesystem:

servers:
  web:
    hosts:
      - vps.example.com
    options:
      user: "<%= site = get_site_info; "#{site['user']}:#{site['group']}" %>"
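
With the example entry, this renders to a user mapping by name:

```yaml
options:
  user: "caddy:site"
```

Docker resolves these names against the container image's /etc/passwd and /etc/group, so matching accounts have to exist inside the image; that's what the SITE_* build args below are passed in for.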

I run kamal as a non-root ssh user, because that’s how I prefer to run things:

ssh:
  user: kamal

And then I configure the builder to pass the parameters to the docker build process:

builder:
  arch: amd64
  dockerfile: "Dockerfile"
  args:
    SITE_UID: "<%= get_site_info['uid'] %>"
    SITE_GID: "<%= get_site_info['gid'] %>"
    SITE_USER: "<%= get_site_info['user'] %>"
    SITE_GROUP: "<%= get_site_info['group'] %>"
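
On the Dockerfile side (built in part 1), those args would be consumed along these lines. This is a hedged sketch assuming an Alpine-based Caddy image with the BusyBox adduser/addgroup tools; your actual Dockerfile may differ:

```dockerfile
ARG SITE_UID
ARG SITE_GID
ARG SITE_USER
ARG SITE_GROUP

# Create a group and user matching the host-side owner of /srv/<site>/files,
# so the containerized Caddy can read the bind-mounted downloads.
# (BusyBox syntax; Debian-based images would use groupadd/useradd instead.)
RUN addgroup -g "${SITE_GID}" "${SITE_GROUP}" \
 && adduser -D -u "${SITE_UID}" -G "${SITE_GROUP}" "${SITE_USER}"
```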

The rest of the configuration is just the boilerplate from kamal init with my account details and domain filled in per the kamal documentation:

registry:
  server: <fill in AWS ECR server name here>
  username: AWS
  password: <%= %x(aws --profile example-ecr-publish-AWSID ecr get-login-password) %>

proxy:
  ssl: true
  host: www.example.com
  app_port: 80

The first time I need to deploy the site, I run kamal setup followed by kamal deploy. Thereafter, kamal deploy is all that’s needed.

While this is more initial ceremony than setting up static hosting on a forge or shared host, I'm happy with the result, especially since I'll be hosting other web apps on the same server alongside this site.

Here are full examples of the files mentioned in these two posts: