For a few years now, I’ve hosted a small server at home. Nothing too exciting, I’d say. It all started with a Nextcloud instance (see here) and sort of evolved from there. After a while I discovered Docker and the number of services saw a dramatic, albeit short-lived uptick. I tried out all kinds of things, like uptime-kuma, FreshRSS, miniflux, portainer and watchtower. At some point I came across what used to be bitwarden_rs and is now called vaultwarden, a self-hostable reimplementation of Bitwarden. I had been using KeePassXC and had heard good things about Bitwarden in general, so I decided to give it a try, liked it so much that I switched over, and soon depended on it for daily use.

Everything was fine until, after a while, it wasn’t. I had been running the nextcloudpi Docker image but updating it from within the container rather than replacing the image. I did keep backups, but configuration drift eventually caught up with me and the instance broke after an update.

I might have been able to fix things and get the instance running again, but the incident got me thinking. When I set up the server it was a fun project for personal use. Now part of my family relies on it for backups, photos and the like, and so do I. More importantly, I don’t have much time for maintenance anymore, so the broken Nextcloud stayed broken for a few weeks until I finally got around to taking care of it.

In the meantime I thought hard about what course of action to take, trying to factor in everything relevant to the decision. I did some research and then, when I finally had time, I acted. It turns out Hetzner offers managed Nextcloud instances at a reasonable price, including, among other things, automated backups and rollbacks. You get an admin account and can manage everything about the instance yourself, which I think is a great compromise for me. Over the course of a few days I extracted all the data from my own instance and uploaded everything to the newly created accounts in the cloud. The migration went smoothly, and the new instance is much more responsive. Hardly surprising, given that I had been running everything on a Raspberry Pi behind a home connection with questionable upload speed.

That left the question of what to do with my other services. There were only two I actually cared about, FreshRSS and vaultwarden, with the latter obviously being the more critical; the rest I abandoned. I had neglected backing up both (which is a very bad idea!) so I resolved to remedy that.

I found an interesting Python tool called autocompose which can extract a running container’s parameters into a docker-compose file (a rough example follows below). When I started using containers I did everything from the command line or through the portainer web interface, but for the sake of repeatability and easier backups I wanted to use docker-compose from then on. After a few hiccups that worked fine. My strategy was thus to zip up the data mounts of my containers, the docker-compose YAML files and, last but not least, my backup scripts themselves, and upload everything to my Nextcloud instance (encrypted, of course).
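If memory serves, the autocompose step boils down to something like this (the image name may have changed since, and the container name is whatever yours happens to be called):

# Hypothetical example: dump a running container’s settings into a compose file.
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock \
    red5d/docker-autocompose vaultwarden > docker-compose.yml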

I looked at several tools to automate these backups. There are so many of them! Most seemed overly complex or a bit weird to install for my needs, and I wanted something that required the least amount of maintenance possible. I’m sure I could have gone with restic, borg or whatever else they’re all called, but I ended up rolling my own solution. It’s just a small shell script that zips up all the files and folders I need backed up, encrypts the archive with a key I keep elsewhere, and uploads it via curl to my Nextcloud server.

The tricky bit was retention, since I don’t want old backups cluttering up my storage over time. At first I tried implementing this in an elaborate and overly complicated Python script, but after a while I realized I could greatly simplify things by creating separate cron jobs that run the script daily, weekly, monthly and so on, and be done with it. The remaining challenge was a small script that deletes backups once the configured number of copies to keep is reached. Nothing a bit of Python can’t fix, and it didn’t take me too long to write. I’m quite happy with the result, especially since it’s very simple and should thus be easy to maintain without much effort on my part.
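Stripped of the details, the script looks roughly like this; all paths, user names, the key file and the Nextcloud URL are placeholders, not my actual setup:

#!/bin/sh
# Rough sketch of the backup script -- paths, credentials and the key file are placeholders.
set -eu

TIER="${1:-daily}"                              # daily, weekly, monthly -- passed in by cron
STAMP="$(date +%Y-%m-%d)"
ARCHIVE="/tmp/backup-${TIER}-${STAMP}.tar.gz"

# Bundle up the container data mounts, the docker-compose files and the backup scripts themselves.
tar czf "$ARCHIVE" /srv/docker/data /srv/docker/compose /srv/backup-scripts

# Encrypt the archive with a key that lives outside the backups.
gpg --batch --yes --pinentry-mode loopback --symmetric --cipher-algo AES256 \
    --passphrase-file /root/.backup-key \
    -o "${ARCHIVE}.gpg" "$ARCHIVE"

# Upload to the Nextcloud WebDAV endpoint; the trailing slash makes curl keep the file name.
curl -fsS -u "backup:app-password" \
    -T "${ARCHIVE}.gpg" \
    "https://cloud.example.com/remote.php/dav/files/backup/backups/${TIER}/"

rm -f "$ARCHIVE" "${ARCHIVE}.gpg"

Each retention tier is then just a cron entry calling the script (wherever it lives) with a different argument:

# /etc/cron.d/backups -- hypothetical schedule, adjust times and paths to taste
0 3 * * *   root   /usr/local/bin/backup.sh daily
0 4 * * 0   root   /usr/local/bin/backup.sh weekly
0 5 1 * *   root   /usr/local/bin/backup.sh monthly

The pruning script then only has to look at one tier’s folder at a time and delete the oldest archives beyond the configured count.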

The next thing I wanted to change was the accessibility of my services. Although I was the only one using them, they had been exposed to the internet so I could reach them while away from home. That’s certainly convenient, but probably not a good idea, as both expose login pages to the public. The obvious solution is a VPN setup. I had tried something like this in the past, but it’s always a bit fiddly and never struck me as terribly robust. A friendly fellow Fosstodonian recommended Tailscale, an easy-to-use service built on WireGuard that creates a mesh VPN between your devices. I gave it a go and what can I say? It’s amazing! The free tier includes up to 20 devices, which is plenty for my use case. The software is available for practically every platform (even on F-Droid) and all you have to do is make an account, install and activate. You can then refer to your devices by hostname (which I love) in your browser or via SSH, and reach all of them from anywhere without exposing any ports to the public. You can even get a free SSL certificate for your Tailnet. For this to work I set up Caddy as a reverse proxy that points to my vaultwarden container. The config for that is just this:

<hostname>.<tailnet>.<TLD> {
	reverse_proxy localhost:<port>
}
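On Debian the packaged Caddy runs as a systemd service, so after editing /etc/caddy/Caddyfile (assuming that’s where your config lives) validating and reloading is all it takes:

caddy validate --config /etc/caddy/Caddyfile --adapter caddyfile
sudo systemctl reload caddy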

If you’re running Caddy as a non-root user (as is the case on Debian and derivatives), you also need to tell tailscaled which user is allowed to fetch certificates by setting it in /etc/default/tailscaled:

TS_PERMIT_CERT_UID=caddy
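If I remember correctly, tailscaled only picks this up after a restart, since that file is read as the service’s environment:

sudo systemctl restart tailscaled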

That’s all. The only downside is that I can only get a certificate for one of my services (more precisely, one per machine). FreshRSS still runs over HTTP, which my browser always complains about, but it’s not a big deal. I suppose I could hook things up to the domain I own, create a few subdomains and point them at the services I want, but so far I haven’t bothered. I might get to that at some point.

I’m quite happy with this new solution for keeping my data. It gives me reasonable peace of mind that my stuff is safe and I think I reduced the maintenance burden to a manageable amount. And if something does go wrong with my server, I can always nuke it and start from scratch in a couple of minutes. After all, that’s what keeping backups is for, no?