My annual Dropbox renewal date was coming up, and I thought to myself “I’m working with servers all the time. I shouldn’t need to pay someone else for this.” I was also knee-deep in a math course, so I felt like procrastinating.
I’m really happy with the result, so I thought I would explain it for anyone else who wants to do the same. Here’s what I was aiming for:
- Safe, convenient archiving for big files.
- Instant sync between devices for stuff I’m working on.
- Access over LAN from home, and over the Internet from anywhere else.
- Regular, encrypted offsite backups.
- Compact, low power hardware that I can stick in a closet and forget about.
- Some semblance of security, at least so a compromised service won’t put the rest of the system at risk.
The hardware
I dabbled with a BeagleBoard that I used for an embedded systems course, and I pondered a Raspberry Pi with a case. I decided against both of those, because I wanted something with a bit more wiggle room. And besides, I like having a BeagleBoard free to mess around with now and then.
In the end, I picked out an Intel NUC, and I threw in an old SSD and a stick of RAM.
It’s tiny, it’s quiet, and it looks okay too! (Just find somewhere to hide the power brick.) My only real complaint is that the wifi hardware doesn’t work with older Linux kernels, but that wasn’t a big deal for my needs and I’m sure it will work in the future.
The software
I installed Ubuntu Core 16, which is delightful. Installing it is a bit surprising for the uninitiated because there isn’t really an install process: you just clone the image to the drive you want to boot from and you’re done. It’s easier if you do this while the drive is connected to another computer. (I didn’t feel like switching around SATA cables in my desktop, so I needed to write a different OS to a flash drive, boot from that on the NUC, transfer the Ubuntu Core image to there, then dd that image to the SSD. Kind of weird for this use case).
Now that I’ve figured out how to run it, I’ve been enjoying how this system is designed to minimize the time you need to spend with your device connected to a screen and keyboard like some kind of savage. There’s a simple setup process (configure networking, log in to your Ubuntu One account), and that’s it. You can bury the thing somewhere and SSH to it from now on. In fact, you’re pretty much forced to: you don’t even get a login prompt. Chances are you won’t need to SSH to the system anyway since it keeps itself up to date. As someone who obsesses over loose threads, I’m finding this all very satisfying.
Although, with that in mind, one important thing: if you haven’t played with Ubuntu for a while, head over to login.ubuntu.com and make sure your SSH keys are up to date. The first time I set it up, I realized I had a bunch of obsolete SSH keys in my account and I had no way to reach the system from the laptop I was using. Fixing that meant changing Ubuntu Core’s writable files from another operating system. (I would love to know if there is a better way).
The other software
Okay, using Ubuntu Core is probably a bit weird when I want to run all these servers and I’m probably a little picky, but it’s so elegant! And, happily, there are Snap packages for both Nextcloud and Syncthing. I ended up using both.
I really like how the files you can edit are tucked away in /writable. For this guide, I always refer to things by their full paths under /writable. Thinking in those terms spared me from getting lost among files I couldn’t change, and it helped to emphasize the read-only nature of this system.
DNS
Before I get to the fun stuff, there were some networking conundrums I needed to solve.
First, public DNS. My router has some buttons if you want to use a dynamic DNS service, but I rolled my own thing. To start off, I added some DNS records pointing at my home IP address. My web host has an API for editing DNS records, so I set up dynamic DNS updates after everything else was working; I’ll get to that further along.
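For the curious, the rolled-my-own dynamic DNS amounts to a small script run on a schedule. This is only a sketch: the API endpoint, the JSON shape, and the DDNS_API_TOKEN variable are hypothetical stand-ins for whatever your web host actually provides (api.ipify.org is one public “what’s my IP” service):

```shell
#!/bin/sh
# Hypothetical dynamic DNS updater -- substitute your web host's real DNS API.
# Reads the API token from the environment so the script holds no secrets.
API_TOKEN="${DDNS_API_TOKEN:-}"
DOMAIN="core.example.com"

if [ -z "$API_TOKEN" ]; then
    STATUS="skipped"
    echo "DDNS_API_TOKEN is not set; nothing to do"
else
    # Discover the current public address, then push it to the A record.
    IP=$(curl -fsS https://api.ipify.org)
    curl -fsS -X PUT \
        -H "Authorization: Bearer $API_TOKEN" \
        -d "{\"type\":\"A\",\"name\":\"$DOMAIN\",\"content\":\"$IP\"}" \
        "https://dns-api.example-host.com/v1/records/$DOMAIN"
    STATUS="updated"
fi
```

With the endpoint swapped for your provider’s real one, this can run from a timer much like the backup timer later in this post.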
Next, my router didn’t support hairpinning (or NAT loopback), so requests to core.example.com were still resolving to my public IP address, which meant way too many hops for sending data around. My ridiculous solution: I’ll run my own DNS server, darnit.
To get started, check the network configuration in /writable/system-data/etc/netplan/00-snapd-config.yaml. You’ll want to make sure the system requests a static IP address (I used 192.168.1.2) and uses its own nameservers. Mine looks like this:
network:
  ethernets:
    eth0:
      dhcp4: false
      dhcp6: false
      addresses: [192.168.1.2/24, '2001:1::2/64']
      gateway4: 192.168.1.1
      nameservers:
        addresses: [8.8.8.8, 8.8.4.4]
  version: 2
After changing the Netplan configuration, use sudo netplan generate to update the system.
For the actual DNS server, we can install an unofficial snap that provides dnsmasq:
$ snap install dnsmasq-escoand
You’ll want to edit /writable/system-data/etc/hosts so the service’s domains resolve to the device’s local IP address:
127.0.0.1 localhost.localdomain localhost
::1 localhost6.localdomain6 localhost6
192.168.1.2 core.example.com
fe80::96c6:91ff:fe1a:6581 core.example.com
…
Now it’s safe to go into your router’s configuration, reserve an IP address for this device, and set it as your DNS server.
And that solved it.
To check, run tracepath from another computer on your network and the result should be something simple like this:
$ tracepath core.example.com
 1?: [LOCALHOST]        pmtu 1500
 1:  core.example.com   0.789ms reached
 1:  core.example.com   0.816ms reached
     Resume: pmtu 1500 hops 1 back 1
While you’re looking at the router, you may as well forward some ports, too. By default you need TCP ports 80 and 443 for Nextcloud, and 22000 for Syncthing.
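Once the forwards are in place, it’s worth checking them from outside your network (a phone hotspot works). nc is one easy way, assuming you have it handy:

```
$ nc -zv core.example.com 443
$ nc -zv core.example.com 22000
```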
Nextcloud
The Nextcloud snap is fantastic. It already works out of the box: it adds a system service for its copy of Apache on port 80, and it comes with a bunch of scripts for setting up common things like SSL certificates. I wanted to use an external hard drive for its data store, so I needed to configure the mount point for that and grant the necessary permissions for the snap to access removable media.
Let’s set up that mount point first. These are configured with Systemd mount units, so we’ll want to create a file like /writable/system-data/etc/systemd/system/media-data1.mount. You need to tell it how to identify the storage device. (I always give them nice volume labels when I format them so it’s easy to use that). Note that the name of the unit file must correspond to the full name of the mount point:
[Unit]
Description=Mount unit for data1

[Mount]
What=/dev/disk/by-label/data1
Where=/media/data1
Type=ext4

[Install]
WantedBy=multi-user.target
One super cool thing here is you can start and stop the mount unit just like any other system service:
$ sudo systemctl daemon-reload
$ sudo systemctl start media-data1.mount
$ sudo systemctl enable media-data1.mount
Now let’s set up Nextcloud. The code repository for the Nextcloud snap has lots of documentation if you need it.
$ snap install nextcloud
$ snap connect nextcloud:removable-media :removable-media
$ sudo snap run nextcloud.manual-install USERNAME PASSWORD
$ snap stop nextcloud
Before we do anything else we need to tell Nextcloud to store its data in /media/data1/nextcloud/, and allow access through the public domain from earlier. To do that, edit /writable/system-data/var/snap/nextcloud/current/nextcloud/config/config.php:
<?php
$CONFIG = array (
  'apps_paths' =>
  array (
    …
  ),
  …
  'trusted_domains' =>
  array (
    0 => 'localhost',
    1 => 'core.example.com'
  ),
  'datadirectory' => '/media/data1/nextcloud/data',
  …
);
Move the existing data directory to the new location, and restart the service:
$ snap stop nextcloud
$ sudo mkdir /media/data1/nextcloud
$ sudo mv /writable/system-data/var/snap/nextcloud/common/nextcloud/data /media/data1/nextcloud/
$ snap start nextcloud
Now you can enable HTTPS. There is a lets-encrypt option (for letsencrypt.org), which is very convenient:
$ sudo snap run nextcloud.enable-https lets-encrypt -d
$ sudo snap run nextcloud.enable-https lets-encrypt
At this point you should be able to reach Nextcloud from another computer on your network, or remotely, using the same domain.
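A quick way to confirm that from a shell: status.php is Nextcloud’s standard status endpoint, so a small check like this sketch (using the example domain from earlier) tells you whether the service answers:

```shell
#!/bin/sh
# Probe Nextcloud's status endpoint; prints whether the service answered.
# core.example.com stands in for your real domain.
HOST="core.example.com"
if curl -fsS --max-time 5 "https://$HOST/status.php" >/dev/null 2>&1; then
    RESULT="reachable"
else
    RESULT="unreachable"
fi
echo "Nextcloud at $HOST is $RESULT"
```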
Syncthing
If you aren’t me, you can probably stop here and use Nextcloud, but I decided Nextcloud wasn’t quite right for all of my files, so I added Syncthing to the mix. It’s like a peer to peer Dropbox, with a somewhat more geeky interface. You can link your devices by globally unique IDs, and they’ll find the best way to connect to each other and automatically sync files between your shared folders. It’s very elegant, but I wasn’t sure about using it without some kind of central repository. This way my systems will sync between each other when they can, but there’s one central device that is always there, ready to send or receive the newest versions of everything.
Syncthing has a snap, but it is a bit different from Nextcloud, so the package needed a few extra steps. Syncthing, like Dropbox, runs one instance for each user, instead of a monolithic service that serves many users. So, it doesn’t install a system service of its own, and we’ll need to figure that out. First, let’s install the package:
$ snap install syncthing
$ snap connect syncthing:home :home
$ snap run syncthing
Once you’re satisfied, you can stop Syncthing. That run wasn’t very useful by itself, but we needed it to create a configuration file.
Next, we need to give Syncthing a place to put its data, replacing “USERNAME” with your system username:
$ sudo mkdir -p /media/data1/syncthing/USERNAME
$ sudo chown -R USERNAME:USERNAME /media/data1/syncthing
Unfortunately, you’ll find that the syncthing application doesn’t have access to /media/data1, and its snap doesn’t support the removable-media interface, so it’s limited to your home folder. But that’s okay, we can solve this by creating a bind mount. Let’s create a mount unit in /writable/system-data/etc/systemd/system/home-USERNAME-syncthing.mount:
[Unit]
Description=Mount unit for USERNAME-syncthing

[Mount]
What=/media/data1/syncthing/USERNAME
Where=/home/USERNAME/syncthing
Type=none
Options=bind

[Install]
WantedBy=multi-user.target
(If you’re wondering, yes, systemd figures out that it needs to mount media-data1 before it can create this bind mount, so don’t worry about that).
$ sudo systemctl daemon-reload
$ sudo systemctl start home-USERNAME-syncthing.mount
$ sudo systemctl enable home-USERNAME-syncthing.mount
Now update Syncthing’s configuration and tell it to put all of its shared folders in that directory. Open /home/USERNAME/snap/syncthing/common/syncthing/config.xml in your favourite editor, and make sure you have something like this:
<configuration version="27">
    <folder id="default" label="Default Folder" path="/home/USERNAME/syncthing/Sync" type="readwrite" rescanIntervalS="60" fsWatcherEnabled="false" fsWatcherDelayS="10" ignorePerms="false" autoNormalize="true">
        …
    </folder>
    <device id="…" name="core.example.com" compression="metadata" introducer="false" skipIntroductionRemovals="false" introducedBy="">
        <address>dynamic</address>
        <paused>false</paused>
        <autoAcceptFolders>false</autoAcceptFolders>
    </device>
    <gui enabled="true" tls="false" debugging="false">
        <address>192.168.1.2:8384</address>
        …
    </gui>
    <options>
        <defaultFolderPath>/home/USERNAME/syncthing</defaultFolderPath>
    </options>
</configuration>
With those changes, Syncthing will create new folders inside /home/USERNAME/syncthing, you can move the default “Sync” folder there as well, and its web interface will be accessible over your local network at http://192.168.1.2:8384. (I’m not enabling TLS here, for two reasons: it’s just the local network, and Nextcloud enables HSTS for the core.example.com domain, so things get confusing when you try to access it like that).
You can try snap run syncthing again, just to be sure.
Now we need to add a service file so Syncthing runs automatically. We could create a service that has the User field filled in and it always runs as a certain user, but for this type of service it doesn’t hurt to set it up as a template unit. Happily, Syncthing’s documentation provides a unit file we can borrow, so we don’t need to do much thinking here. You’ll need to create a file called /writable/system-data/etc/systemd/system/syncthing@.service:
[Unit]
Description=Syncthing - Open Source Continuous File Synchronization for %I
Documentation=man:syncthing(1)
After=network.target

[Service]
User=%i
ExecStart=/usr/bin/snap run syncthing -no-browser -logflags=0
Restart=on-failure
SuccessExitStatus=3 4
RestartForceExitStatus=3 4

[Install]
WantedBy=multi-user.target
Note that our ExecStart line is a little different from theirs, since we need it to run Syncthing under the snap program.
$ sudo systemctl daemon-reload
$ sudo systemctl start syncthing@USERNAME.service
$ sudo systemctl enable syncthing@USERNAME.service
And there you have it, we have Syncthing! The web interface for the Ubuntu Core system is only accessible over your local network, but assuming you forwarded port 22000 on your router earlier, you should be able to sync with it from anywhere.
If you install the Syncthing desktop client (snap install syncthing in Ubuntu, dnf install syncthing-gtk in Fedora), you’ll be able to connect your other devices to each other. On each device that you connect to this one, make sure you set core.example.com as an Introducer. That way they will discover each other through it, which saves a bit of time.
Once your devices are all connected, it’s a good idea to go to Syncthing’s web interface at http://192.168.1.2:8384 and edit the settings for each device. You can enable “Auto Accept” so whenever a device shares a new folder with core.example.com, it will be accepted automatically.
Nextcloud + Syncthing
There is one last thing I did here. Syncthing and Nextcloud have some overlap, but I found myself using them for pretty different sorts of tasks. I use Nextcloud for media files and archives that I want to store on a single big hard drive, and occasionally stream over the network; and I use Syncthing for files that I want to have locally on every device.
Still, it would be nice if I could have Nextcloud’s web UI and sharing options with Syncthing’s files. In theory we could bind mount Syncthing’s data directory into Nextcloud’s data directory, but the Nextcloud and Syncthing services run as different users. So, that probably won’t go particularly well.
Instead, it works quite well to mount Syncthing’s data directory using SSH.
First, in Nextcloud, go to the Apps section and enable the “External storage support” app.
Now you need to go to Admin, and “External storages”, and allow users to mount external storage.
Finally, go to your Personal settings, choose “External storages”, add a folder named Syncthing, and tell it connect over SFTP. Give it the hostname of the system that has Syncthing (so, core.example.com), the username for the user that is running Syncthing (USERNAME), and the path to Syncthing’s data files (/home/USERNAME/syncthing). It will need an SSH key pair to authenticate.
When you click Generate keys it will create a key pair. You will need to copy and paste the public key (which appears in the text field) to /home/USERNAME/.ssh/authorized_keys.
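Getting the key into place is a copy-paste job; here is a minimal sketch, assuming it goes to the same USERNAME account from earlier (the PUBKEY value is a placeholder for the key Nextcloud shows you):

```shell
#!/bin/sh
# Append the public key Nextcloud generated to the Syncthing user's
# authorized_keys, creating ~/.ssh with safe permissions first.
# PUBKEY is a placeholder -- paste the key from Nextcloud's text field.
PUBKEY="ssh-rsa AAAA...placeholder... nextcloud-external-storage"
SSH_DIR="$HOME/.ssh"
AUTH_KEYS="$SSH_DIR/authorized_keys"

mkdir -p "$SSH_DIR"
chmod 700 "$SSH_DIR"
printf '%s\n' "$PUBKEY" >> "$AUTH_KEYS"
chmod 600 "$AUTH_KEYS"
echo "Appended key to $AUTH_KEYS"
```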
If you try the gear icon to the right, you’ll find an option to enable sharing for the external storage, which is very useful here. Now you can use Nextcloud to view, share, or edit your files from Syncthing.
Backups
I spun my wheels for a while with backups, but eventually I settled on Restic. It is fast, efficient, and encrypted. I’m really impressed with it.
Unfortunately, the snap for Restic doesn’t support strict confinement, which means it won’t work on Ubuntu Core. So I cheated. Let’s set this up under the root user.
You can find releases of Restic as prebuilt binaries. We’ll also need to install a snap that includes curl. (Or you can download the file on another system and transfer it with scp, but this blog post is too long already).
$ snap install demo-curl
$ snap run demo-curl.curl -L "https://github.com/restic/restic/releases/download/v0.8.3/restic_0.8.3_linux_amd64.bz2" | bunzip2 > restic
$ chmod +x restic
$ sudo mkdir /root/bin
$ sudo cp restic /root/bin
We need to figure out the environment variables we want for Restic. That depends on what kind of storage service you’re using. I created a file with those variables at /root/restic-MYACCOUNT.env. For Backblaze B2, mine looked like this:
#!/bin/sh
export RESTIC_REPOSITORY="b2:core-example-com--1"
export B2_ACCOUNT_ID="…"
export B2_ACCOUNT_KEY="…"
export RESTIC_PASSWORD="…"
Next, make a list of the files you’d like to back up in /root/backup-files.txt:
/media/data1/nextcloud/data/USERNAME/files
/media/data1/syncthing/USERNAME
/writable/system-data/
I added a couple of quick little helper scripts to handle the most common things you’ll be doing with Restic:
/root/bin/restic-MYACCOUNT.sh
#!/bin/sh
. /root/restic-MYACCOUNT.env
/root/bin/restic "$@"
Use this as a shortcut to run restic with the correct environment variables.
/root/bin/backups-push.sh
#!/bin/sh
RESTIC="/root/bin/restic-MYACCOUNT.sh"
RESTIC_ARGS="--cache-dir /root/.cache/restic"

${RESTIC} ${RESTIC_ARGS} backup \
    --files-from /root/backup-files.txt \
    --exclude ".stversions" \
    --exclude-if-present ".backup-ignore" \
    --exclude-caches
This will ignore any directory that contains a file named “.backup-ignore”. (So to stop a directory from being backed up, you can run touch /path/to/the/directory/.backup-ignore). This is a great way to save time if you have some big directories that don’t really need to be backed up, like a directory full of, um, Linux ISOs *shifty eyes*.
/root/bin/backups-clean.sh
#!/bin/sh
RESTIC="/root/bin/restic-MYACCOUNT.sh"
RESTIC_ARGS="--cache-dir /root/.cache/restic"

${RESTIC} ${RESTIC_ARGS} forget --keep-daily 7 --keep-weekly 8 --keep-monthly 12 --prune
${RESTIC} ${RESTIC_ARGS} check
This removes old snapshots according to the retention policy, prunes unused blocks, and then checks the repository for errors.
Make sure all of those scripts are executable:
$ sudo chmod +x /root/bin/restic-MYACCOUNT.sh
$ sudo chmod +x /root/bin/backups-push.sh
$ sudo chmod +x /root/bin/backups-clean.sh
We still need to add systemd stuff, but let’s try this thing first!
$ sudo /root/bin/restic-MYACCOUNT.sh init
$ sudo /root/bin/backups-push.sh
$ sudo /root/bin/restic-MYACCOUNT.sh snapshots
Have fun playing with Restic: try restoring some files, and note that you can list all the files in a snapshot and restore specific ones. It’s a really nice little backup tool.
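For instance, using the wrapper script from above, listing and restoring from the latest snapshot looks something like this (the --include path is just an illustration):

```
$ sudo /root/bin/restic-MYACCOUNT.sh ls latest
$ sudo /root/bin/restic-MYACCOUNT.sh restore latest --target /tmp/restore-test --include /media/data1/syncthing/USERNAME
```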
It’s pretty easy to get systemd helping here as well. First let’s add our service file. This is a different kind of system service because it isn’t a daemon. Instead, it is a oneshot service. We’ll save it as /writable/system-data/etc/systemd/system/backups-task.service.
[Unit]
Description=Regular system backups with Restic

[Service]
Type=oneshot
ExecStart=/bin/sh /root/bin/backups-push.sh
ExecStart=/bin/sh /root/bin/backups-clean.sh
Now we need to schedule it to run on a regular basis. Let’s create a systemd timer unit for that: /writable/system-data/etc/systemd/system/backups-task.timer.
[Unit]
Description=Run backups-task daily

[Timer]
OnCalendar=09:00 UTC
Persistent=true

[Install]
WantedBy=timers.target
One gotcha to notice here: with newer versions of systemd, you can use time zones like PDT or America/Vancouver for the OnCalendar entry, and you can test how that will work using systemd-analyze, like systemd-analyze calendar "09:00 America/Vancouver". Alas, that is not the case in Ubuntu Core 16, so you’ll probably have the best luck using UTC and calculating the time zone offset yourself.
Now that you have your timer and your service, you can test the service by starting it:
$ sudo systemctl start backups-task.service
$ sudo systemctl status backups-task.service
If all goes well, enable the timer:
$ sudo systemctl start backups-task.timer
$ sudo systemctl enable backups-task.timer
To see your timer, you can use systemctl list-timers:
$ sudo systemctl list-timers
…
Sat 2018-04-28 09:00:00 UTC  3h 30min left  Fri 2018-04-27 09:00:36 UTC  20h ago  backups-task.timer  backups-task.service
…
Some notes on security
Some people (understandably) dislike running this kind of web service on port 80. Nextcloud’s Apache instance listens on ports 80 and 443 by default, but you can change that using snap set nextcloud ports.http=80 ports.https=443. However, you may need to generate a self-signed SSL certificate in that case.
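The snap ships a helper for that case as well; if memory serves, the same enable-https script accepts a self-signed mode:

```
$ sudo snap run nextcloud.enable-https self-signed
```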
Nextcloud (like any daemon installed by Snappy) runs as root, but, as a snap, it is confined to a subset of the system. There is some official documentation about security and sandboxing in Ubuntu Core if you are interested. You can always run sudo snap run --shell nextcloud.occ to get an idea of what it has access to.
If you feel paranoid about how we gave Nextcloud access to all removable media, you can create a bind mount from /writable/system-data/var/snap/nextcloud/common/nextcloud to /media/data1/nextcloud, like we did for Syncthing, and snap disconnect nextcloud:removable-media. Now it only has access to those files on the other end of the bind mount.
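A sketch of what that unit could look like, saved as /writable/system-data/etc/systemd/system/var-snap-nextcloud-common-nextcloud.mount (remember that the unit name must match the mount path; I haven’t verified this exact layout against a live system):

```
[Unit]
Description=Bind mount for Nextcloud's data directory

[Mount]
What=/media/data1/nextcloud
Where=/var/snap/nextcloud/common/nextcloud
Type=none
Options=bind

[Install]
WantedBy=multi-user.target
```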
Conclusion
So that’s everything!
This definitely isn’t a tiny amount of setup. It took an afternoon. (And it’ll probably take two or three years to pay for itself). But I’m impressed by how smoothly it all went, and with a few exceptions where I was nudged into loopy workarounds, it feels simple and reproducible. If you’re looking at hosting more of your own files, I would happily recommend something like this.
Just curious – why bother with syncthing? Doesn’t the Nc client sync files just fine, and give you the ability to create public links, share with other users on the NC server and track file activity, get notified when you get files shared with you or receive a call and so on?
It definitely isn’t strictly needed :) I found that Syncthing seemed to perform better for lots of files and I liked the idea of the whole sync process being distributed across devices, even if I have a big one that has everything. That and I think I’m just used to Syncthing from playing with it already. No major issues with Nextcloud, though. It definitely has a much friendlier interface, and the Instant Upload feature in its Android app works beautifully.
I run both Nextcloud and Syncthing on my home server (Ubuntu 16.04 server). I love Nextcloud, but find Syncthing to be faster and like being able to do things such as set up a one-way “sync” to easily send files from my phone or laptop to my server.
Great guide! I settled on a completely fanless Msi Cubi N media pc, sufficiently powerful for the task and also neat looking for the living room.
I didn’t want to make off-site backups but naturally the internal SSD needs a mirror, so I put in a mechanical 2tb drive that only spins up at 4am to make a simple rsync. It also spins up temporarily whenever I make backups from any of my computers to it via Duplicati. Tempted to try Restic now too though!
That sounds nice! Thanks for sharing. A fanless media PC sounds fun :)
Hi there, I set up the same kind of stack. Using nextcloud as a front end for photo gallery and document editing. Syncthing is the tool to backup all my files across different locations seamlessly (my backups are a simple rpi+hdd+syncthing).
So if you want to cherry-pick files you use Nextcloud, otherwise use Syncthing.
I think you should not mention using plain HTTP at all. The only way to go is HTTPS, especially when the GUI you will be accessing does basic auth.
You can give hints toward the very easy-to-use Let’s Encrypt configuration tool.