This is a follow-up to my previous article, which describes how I migrated my personal Nextcloud to a docker setup. It’s part me telling the story of what I learned and part “How To” so someone can recreate my steps.

After I had managed that, I figured I might as well go one or two steps further and host additional things on my home server. After all, it has 4 GB of RAM, most of which is still unused. Also, if I’m the only one using these services, they should never generate enough load for it to become a problem. Or so I judged.

Before I get to any problems and their solutions, I would like to detail the setup I have in place so everyone’s on the same page.

Like I said, I have a Nextcloud running on a Raspberry Pi 4 in my home network. To be able to reach it from the internet I registered at a DynDNS service (which, interestingly, doesn’t accept registrations anymore), created a host (which was then assigned a domain) and pointed it at the IP address of my home network. Since this IP address is dynamic, it needs to be updated frequently. This is handled by my router, which has built-in support for this.

With this setup in place, all http and https requests for the domain I got from my provider were sent to my router. Finally, I set up port forwarding so all requests were forwarded from my router to the IP of my Raspberry Pi running Nextcloud.

This works fine, so far so good.

Now, remember that I want to run more than one service from my home network, each of which should also be accessible from the internet. An issue naturally arises: I can only forward a given port to one machine within my network, and on that machine only one service can listen on it. So my router (or something within my network) needs a way to decide where to send incoming requests.

I’m sure there are quite a few ways to tackle this and from what I’ve learned so far it seems clear to me that, when it comes to networking, there are no standard solutions. There are so many different setups and ways to achieve something that it can become difficult to find relevant information for your particular question or problem because everyone’s tech stack differs ever so slightly (or completely), rendering the value of the information you found questionable.

I settled on the following approach: set up a web server as a reverse proxy within my network. All http(s) requests will be forwarded to it, and it will in turn forward them to the appropriate service based on the domain used. This meant I had to do the following (a rough sketch of the resulting request flow follows the list):

  • Acquire a number of domains which point at my home network
  • Get SSL certificates for all domains
  • Set up the reverse proxy
  • Adjust port forwarding
  • Configure the proxy to send requests to the appropriate service (these need to exist of course)
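
To make that a bit more concrete, here is roughly how a request is supposed to travel once everything is in place (the domain name is a placeholder, not one of my real ones):

    browser requests https://cloud.example.org
        │   DNS: subdomain ──CNAME──▶ router’s DynDNS hostname ──A──▶ current home IP
        ▼
    router (forwards ports 80/443)
        ▼
    reverse proxy (chooses the target based on the requested domain)
        ▼
    service container (Nextcloud, Vaultwarden, ...)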

Some of these turned out to be simple, others not so much. I racked my brain for a while, trying to figure out a way to point different domains at my home network. Since I don’t have a static IP address, this wasn’t as straightforward as I would have liked. I thought about getting several DynDNS hosts for this purpose, but my router only supports automatically updating the IP address for one of them. I read that it’s possible to configure more than one, but that apparently necessitates editing the router’s internal config files, and I don’t feel comfortable doing that if it can be avoided.

I also thought about creating more subdomains for the domain I already own (you’re enjoying the fruits of this labor right now, in fact) and setting up some DNS records to have all of them point at my DynDNS host, which in turn points to my home IP. I wasn’t sure whether this is possible; it seemed doubtful to me, but I don’t understand DNS records well enough yet to be a good judge.

The solution came to me in the form of an obscure forum post, discovered after much DuckDuckGo-Fu (or was it Startpage?), which pointed out that my router actually supports setting up a domain pointing at its IP without any configuration. All I needed to do was register with my email account and activate the feature.

In fact, I had already done this long ago and I don’t even remember why. Anyway, with this I can create as many subdomains as I want at my domain registrar and set up CNAME records pointing to the permanent URL of my router.
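
To illustrate (with placeholder names, not my real ones), the resulting DNS chain can be checked with dig: the CNAME at the registrar hands the lookup off to the router’s permanent hostname, which in turn resolves to whatever my home IP currently is.

    # Placeholders: cloud.example.org stands for one of my subdomains,
    # myrouter.router-ddns.example.net for the router's permanent hostname.
    dig +noall +answer cloud.example.org

    # Expected answer, roughly:
    #   cloud.example.org.                3600  IN  CNAME  myrouter.router-ddns.example.net.
    #   myrouter.router-ddns.example.net.   60  IN  A      203.0.113.42   <- current home IP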

Phew. One step down, many more to go!

It turns out, though, that the rest of my todo list was less difficult to achieve (for me at least). I discovered nginx proxy manager, which is a docker container bundling the nginx web server with a fancy web interface on top. I know that I could have installed nginx on bare metal and configured it by hand, which might even have saved me some hassle, but I’m really living the docker vibes right now. Maybe that’ll pass, especially since I read that docker itself is apparently not under active development anymore and Red Hat wants to replace it with podman. Oh well.

Since I already had a portainer container running to monitor and manage all my docker images and containers, deploying nginx proxy manager was easy enough. Its web interface is also pretty simple (that’s the point really) so I could whip up a few proxy hosts in no time.
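
For reference, here is roughly what that deployment boils down to if you were to do it by hand on the command line instead of through portainer. The image name is the one the nginx proxy manager project publishes on Docker Hub; the container name, volume paths and the proxynet network (which comes up again further down) are just the ones I use, so treat this as a sketch rather than a copy-paste recipe.

    # Create the shared bridge network that the proxy and all proxied
    # services will be attached to.
    docker network create proxynet

    # Run nginx proxy manager: 80/443 for http/https traffic, 81 for its web UI.
    docker run -d \
      --name nginx-proxy-manager \
      --network proxynet \
      -p 80:80 -p 81:81 -p 443:443 \
      -v "$PWD/npm-data:/data" \
      -v "$PWD/npm-letsencrypt:/etc/letsencrypt" \
      --restart unless-stopped \
      jc21/nginx-proxy-manager:latest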

However, there are a few details that need taking care of. It took me a while to figure these out so I want to go over them in a bit more detail.

  • Each subdomain I created is supposed to point to a service I want to expose to the internet. Currently that’s four services.
  • Since nginx will take care of forwarding requests coming in from the internet, it needs to be the recipient of the port forwarding from the router. It also needs to listen on the relevant ports, i.e. 80 and 443 for http/https. In the case of nginx proxy manager, port 81 also needs to be exposed because its web UI is reachable there.
  • The services you want to expose publicly shouldn’t publish any ports of their own. Nginx will take care of this.
  • Since nginx runs in a separate container, it is necessary to create a dedicated docker network for it. This is an attachable bridge network to which all of the containers that are to be proxied need to be added. E.g. I have a network called proxynet. When deploying my NextcloudPi container, I added it to this network and gave it the name and hostname nextcloudpi (see the sketch after this list).
  • All subdomains need valid SSL certificates. The easiest way to provide these is with the built-in Let’s Encrypt functionality of nginx proxy manager. The web UI makes this easy, but for it to work the domains need to already point at the nginx service.
  • Finally, when setting up the proxy hosts in the nginx web UI, you need to choose the scheme (http/https), the IP or hostname, and the port. Depending on the service, this can be, e.g., http / vaultwarden / 80 (also shown in the sketch below). If you defined a hostname for the container within the dedicated network, you can use it for proxying, which is more readable and maintainable than using IP addresses. In some cases you need to proxy to a different port; usually it’s just the same port you would otherwise expose for a regular deployment of the container. There was one gotcha with the NextcloudPi container: it’s necessary to choose the https scheme as well as port 443 for proxying, otherwise it won’t work. I actually don’t know why that is, possibly because NextcloudPi takes care of its SSL certificate by itself and will only allow https connections in the first place.
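
To tie the last few points together, here is a rough sketch of how one of the proxied services, vaultwarden in this case, might be started so that it sits on the proxynet network with a resolvable hostname and without publishing any ports. The volume path is a placeholder and I actually deploy through portainer, so take it as an illustration of the idea rather than my exact setup.

    # Attach the container to the shared proxy network and give it a hostname
    # that nginx proxy manager can resolve; note there are no -p flags, so
    # nothing is published directly on the host.
    docker run -d \
      --name vaultwarden \
      --hostname vaultwarden \
      --network proxynet \
      -v "$PWD/vw-data:/data" \
      --restart unless-stopped \
      vaultwarden/server:latest

    # The corresponding proxy host in the web UI then points at:
    #   scheme: http, forward hostname: vaultwarden, forward port: 80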

With all that in mind the setup is actually relatively easy, and I currently have publicly exposed services for Nextcloud, Vaultwarden, flame and uptime-kuma. I think that’s pretty cool. For some reason the latter didn’t accept being proxied without complaining; I had to add two lines to its nginx config file manually, but no matter. Just because I like to use fancy web UIs doesn’t mean I’ve forgotten how to use the command line.
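
I won’t swear these are the exact two lines I added, but uptime-kuma relies on WebSockets for its live dashboard, and the usual nginx additions for that are the upgrade headers, something along these lines:

    # Typical WebSocket support directives for an nginx proxy location;
    # that these match the two lines I actually added is an assumption.
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";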

Now then. After spending a lot of time figuring things out, I have a working setup of my personal cloud and a few additional things, mostly to play with. I’m happy with the results, particularly because this setup gives me reasonable peace of mind that I can add or remove containers at will, tear them down, restart them, and none of that will wreak havoc on the underlying operating system. I like this layer of abstraction.

Although most of what I wrote here is somewhat vague, I hope it is still helpful to someone in need, or to myself at a later point in time, or both. Have a good one!