A while ago someone (not pointing fingers but it wasn’t me) broke the Raspberry Pi my Jellyfin server was running on. I have no idea what exactly happened and why it doesn’t work anymore but it was incentive enough to finally tackle the project of acquiring or building an actual home server to host my stuff on.

So far I’ve been making use of the two Raspberry Pis I own and they’ve worked mostly fine, but every once in a while I ran into hiccups or the limitations of these devices (the Pi 4’s RAM being saturated for some reason), so I’ve been wanting to do this for a while. It’s also a good opportunity to separate concerns, because the Pi that was running Jellyfin was also the driver for our digital frame, which isn’t ideal.

I asked around a bit and sent an email to the guys from 2.5 Admins, who promptly got back to me with a great hardware (and software) recommendation. If you’re interested in listening to the whole segment, here’s the link.

It boils down to this: the best bet is to buy or build a small-form-factor PC with two or more drive bays, get suitable SATA HDDs (at least two), run them in a RAID array for redundancy, and store the data on them. This has the added benefit that the OS can run on a separate drive, which makes handling the actual data much less messy.

On the software side there are a couple of options like TrueNAS, XigmaNAS, or going vanilla and using a plain Linux distro.

The Hardware

I went ahead and followed Jim’s recommendation of getting a refurbished HP EliteDesk 800 G3 SFF (not to be confused with the “mini” variant which otherwise, confusingly, has the same designation), which has two 3.5’’ drive bays and one 2.5’’ bay. In addition I got two Seagate IronWolf Pro drives (which were cheaper than the regular IronWolfs for some reason) plus a couple of screws and a SATA data cable to wire everything up.

Cost:

  • Elitedesk: 145.47 €
  • HDDs: 179.80 €
  • Screws and SATA cable: 16.57 €
  • Total: 341.84 €

Wiring everything up wasn’t difficult: I just needed to attach the mounting screws (or whatever those are called) to the sides of the drives, slide them into the bays and plug in the relevant cables. The data cables were a bit of a tight fit and I realized that getting cables with 90° connectors would have been better, but I managed.

The Software

After some deliberation I decided to go for Ubuntu Server. I just wanted something reasonably familiar, because I wanted to be able to maintain and troubleshoot the machine without having to learn a new OS. While this meant doing the setup by hand, that’s fine with me. This way I get the opportunity to learn a few things about admining, and since I researched almost everything beforehand I didn’t lose much time on anything.

First things first: Storage

zfs setup

If you’ve listened to any episode of 2.5 Admins you probably know that both Jim and Allan are big fans of zfs. After listening for a while I figured I should try running it in production myself. I won’t be able to make much use of the replication features, but still.

Install zfs utilities:

sudo apt install zfsutils-linux

Determine IDs of HDDs to be used for storage:

ls -l /dev/disk/by-id

This is important because device names like /dev/sda can change between boots, while these IDs stay stable, so you won’t get screwed when your drives get assigned differently at some point.
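
The output contains stable symlinks pointing at the kernel device names; the ata-* entries for the two new drives are the ones to use. Schematically (the actual names contain each drive’s model and serial number) it looks something like this:

lrwxrwxrwx 1 root root  9 ... ata-$MODEL_$SERIAL1 -> ../../sda
lrwxrwxrwx 1 root root  9 ... ata-$MODEL_$SERIAL2 -> ../../sdb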

Create zfs pool:

sudo zpool create -o ashift=12 $POOL mirror $ID1 $ID2

This creates a pool with two mirrored drives for redundancy. This means that one of them can die without my data being at risk. Of course the failed drive would then need to be replaced ASAP. The ashift parameter is set at creation because changing it later resulted in an error. By default the pool is mounted at /$POOL which is fine for me.
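
To check that the pool came out as intended, zpool status shows the layout and health:

sudo zpool status $POOL

Both drives should show up under a single mirror-0 vdev with state ONLINE.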

Set options according to this:

sudo zfs set <option>=<value> $POOL
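
For example, two options that commonly get tuned are compression and access time updates (whether these values are right for you depends on your workload):

sudo zfs set compression=lz4 $POOL
sudo zfs set atime=off $POOL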

Create datasets:

sudo zfs create $POOL/$DATASET_NAME
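
For my layout that amounts to something like:

sudo zfs create $POOL/documents
sudo zfs create $POOL/music
sudo zfs create $POOL/movies
sudo zfs create $POOL/series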

Create as many datasets as necessary. I created one for documents and one for each media type (music, movies, series and so on). Change ownership so they can be written to by the user without root permissions:

sudo chown -R $USER:$USER /$POOL

Start putting files inside.

Linking to Nextcloud

I pay for a remote Nextcloud server at Hetzner, which I think is fairly priced (no, I don’t get paid to write that). This is where backups go, or rather the instance I intend to sync the data on my home server with. I found that rclone works quite well for this.

Install rclone:

sudo apt install rclone

To configure it, run the following and follow the prompts, choosing WebDAV and Nextcloud as the provider:

rclone config

Then you can sync files like this (assuming the rclone remote was called “nextcloud”):

rclone sync nextcloud:<Path> /$POOL/$DATASET
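
Since a sync can delete things on the destination, it’s worth doing a dry run first to see what rclone would actually change:

rclone sync --dry-run nextcloud:<Path> /$POOL/$DATASET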

This syncs files one way and also deletes files on the destination that are no longer present on the source. Bidirectional syncing is not yet available in the rclone version Ubuntu 22.04 provides, and I’m not using the snap; it does weird things and seems to be unsupported by upstream. For the time being I’ll probably sync manually, as I don’t anticipate frequent changes on either side and the syncing can take quite a while due to my very limited upload speed.

I also decided not to sync the movies and shows in my personal library with the cloud, because it would take ages to upload (the upload speed at my house is pretty bad) and because I don’t want to completely fill it up with stuff I don’t need there. Instead, I’m keeping a copy of these video files on at least one of my external drives.

Backups

Relevant config files are backed up via restic which uses rclone as a backend.

sudo apt install restic

A remote repository can then be initialized with:

restic init rclone:nextcloud:<path/to/directory>

Then backups can be created like this:

restic -r rclone:nextcloud:<path/to/directory> backup --files-from backup_paths.txt

The file backup_paths.txt contains a list of directories and files I intend to back up, mostly config files and things used by the services I host, so I can quickly restore them to a functioning state if the need arises.
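
For illustration, it’s just a plain list of paths, one per line, along these lines (the exact entries will obviously differ):

/etc/msmtprc
/etc/exports
/home/$USER/jellyfin/docker-compose.yml
/home/$USER/vaultwarden/docker-compose.yml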

I added this to my crontab to run every night. Another important command prunes old backups:

restic -r rclone:nextcloud:Misc/Backup forget --keep-last 7 --prune

This way the remote Nextcloud instance won’t fill up with my backups. This too is run every night as a cronjob.
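
Roughly, the two crontab entries look like this (times and paths are examples; the repository password is read from a file via RESTIC_PASSWORD_FILE so the jobs can run unattended):

30 2 * * * RESTIC_PASSWORD_FILE=$HOME/.restic-password restic -r rclone:nextcloud:Misc/Backup backup --files-from $HOME/backup_paths.txt
0 3 * * * RESTIC_PASSWORD_FILE=$HOME/.restic-password restic -r rclone:nextcloud:Misc/Backup forget --keep-last 7 --prune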

zfs Maintenance Setup

The zfsutils-linux package also ships maintenance tasks for trimming and scrubbing as cron jobs, detailed in /etc/cron.d/zfsutils-linux. These will trim and scrub the pool on the first and second Sunday of each month, respectively.

To set up mail notifications:

sudo apt install mailutils msmtp msmtp-mta s-nail

Create an msmtp config file at /etc/msmtprc:

defaults
auth           on
tls            on
tls_trust_file /etc/ssl/certs/ca-certificates.crt
logfile        ~/.msmtp.log


account alerts
host $SMTP_SERVER
port 587
tls_starttls on
from $MAIL_ADDRESS
user $USERNAME
password $PASSWORD


account default : alerts
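
To check that mail delivery actually works before relying on it, a quick test mail should arrive at the configured address:

echo "test from the server" | mail -s "msmtp test" $MAIL_ADDRESS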

Configure the zfs event daemon zed in /etc/zfs/zed.d/zed.rc:

ZED_EMAIL_ADDR="$MAIL_ADDRESS"
...
ZED_EMAIL_OPTS="-s '@SUBJECT@' @ADDRESS@ -r $MAIL_ADDRESS"
...
ZED_NOTIFY_VERBOSE=1

Substitute the relevant values into the variables.

Enable the zfs event daemon (zed): sudo systemctl enable --now zed
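
To see a notification without waiting for the monthly cron job, a scrub can be kicked off manually and its progress checked:

sudo zpool scrub $POOL
zpool status $POOL

Once the scrub finishes, zed should send a mail with the result.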

Now I receive an email notification every time a trim or scrub is run, so I can see if everything is fine.

Services setup

I run my services in docker containers (sue me if you want) with docker-compose. The setup is fairly simple:

  • Copy over the files from my other server and set them up with a directory each in $HOME.
  • Make the necessary changes in each service’s docker-compose.yml file, e.g. file locations, mounts, etc.
  • Write systemd service file for each service and chuck it in /etc/systemd/system:
[Unit]
Description=$SERVICE_NAME
Requires=docker.service
After=docker.service

[Service]
Type=oneshot
WorkingDirectory=$PATH_TO_WORKING_DIR
ExecStart=/usr/bin/env /usr/bin/docker-compose -f $PATH_TO_DOCKER_COMPOSE_YML up -d
ExecStop=/usr/bin/env /usr/bin/docker-compose -f $PATH_TO_DOCKER_COMPOSE_YML stop
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
  • Install tailscale and caddy and set them up according to previous notes to get vaultwarden to work with an SSL cert.
  • Start the services, e.g. by enabling the systemd units as sketched below.
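
The units are handled like any other systemd service; assuming one of them is called jellyfin.service (substitute the actual service names):

sudo systemctl daemon-reload
sudo systemctl enable --now jellyfin.service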

Network share

I also want my files to be available on my home network via a network share. The best way (that I found) to do this in my case is NFS. I don’t have any Windows machines and don’t want any, so I’m not going to bother with Samba.

Install nfs:

sudo apt install nfs-kernel-server

Specify shared folders in /etc/exports:

/data	*(rw,sync,no_subtree_check)
/data/pictures	*(rw,sync,no_subtree_check)
/data/documents	*(rw,sync,no_subtree_check)
/data/movies	*(rw,sync,no_subtree_check)
/data/series	*(rw,sync,no_subtree_check)
...

Note that the asterisk means no restrictions are placed on the client, so anyone within the network can access the mounts. Also, the subfolders of /data need to be specified as well; presumably because each zfs dataset is a separate filesystem and NFS doesn’t export nested filesystems automatically.
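
If you’d rather not export to everyone, the asterisk can be replaced with a specific subnet, for example (adjust to your network’s address range):

/data	192.168.1.0/24(rw,sync,no_subtree_check)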

Start the server:

sudo systemctl enable --now nfs-server

If you changed the exports while the server was running, do: sudo exportfs -arv

Then on the client you can mount a folder shared via nfs. There are various ways of doing this; one way is adding an entry to /etc/fstab:

server:/data /media/data nfs defaults,timeo=900,retrans=5,_netdev 0 0

Note that this makes use of tailscale’s MagicDNS feature, which lets me use server as an identifier instead of an IP address. There are a gazillion mount options; I haven’t bothered to find out about most of them, so choose these as you see fit.
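
After adding the fstab entry, the share can be mounted right away; calling mount with just the mount point picks up the options from fstab:

sudo mkdir -p /media/data
sudo mount /media/data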

That’s it! I’ve got a new (used) server with enough resources to run what I need, plenty of room for my stuff and a backup solution that’s not fully automated but enough for me for now.