GUADEC 2022
Wed, 10 Aug 2022

I spent a week at GUADEC 2022 in Guadalajara, Mexico. It was an excellent conference, with some good talks, good people, and a delightful hallway track. I think everyone was excited to see each other in person after so long, and for many attendees, this was closer to home than GUADEC has ever been.

For this event, I was sponsored by the GNOME Foundation, so many thanks to them as well as my employer the Endless OS Foundation for both encouraging me to submit a talk and for giving me the opportunity to take off and drink tequila for the week.

For me, the big themes this GUADEC were information resilience, scaling our community, and how these topics fit together.


Introductions

Stepping into the Guadalajara Connectory for the first time, I couldn’t help but feel a little out of place. Everyone was incredibly welcoming, but this was still my first GUADEC, and my first real in-person event with the desktop Linux community in ages.

So, I was happy to come across Jona Azizaj and Justin Flory’s series of thoughtful and inviting workshops on Wednesday morning. These were Icebreakers & Community Social, followed by Unconscious bias & imposter syndrome workshop. They eased my anxiety enough that I wandered off and missed the follow-up (Exploring privilege dynamics workshop), but it looked like a cool session. It was a brilliant idea to put these kinds of sessions right at the start.

The workshop about unconscious bias inspired me to consciously mix up who I was going out for lunch with throughout the week, as I realized how easy it is to create bubbles without thinking about it.

Beyond that, I attended quite a few interesting sessions. It is always fun hearing about bits of the software stack I’m unfamiliar with, so some standouts were Matthias Clasen’s Font rendering in GNOME (YouTube), and David King’s Cheese strings: Webcams, PipeWire and portals (YouTube). Both highly recommended if you are interested in those components, or in learning about some clever things!

But for the most part, this wasn’t a very code-oriented conference for me.

Accessibility, diversity, remote attendance

This was the first hybrid GUADEC after two years of running a virtual-only conference, and I think the format worked very well. The remote-related stuff was smoothly handled in the background. The volunteers in each room did a great job relaying questions from chat so remote attendees were represented during Q&As.

I did wish that those remote attendees — especially the Berlin Mini-GUADEC — were more visible in other contexts. If this format sticks, it would be nice to have a device or two set up so people in different venues can see and interact with each other during the event. After all, it is unlikely that in-person attendees will spend much time looking at chat rooms on their own.

But I definitely like how this looks. I think having good representation for remote attendees is important for accessibility. Pandemic or otherwise. So with that in mind, Robin Tafel’s Keynote: Peeling Vegetables and the Craft of (Software) Inclusivity (YouTube), struck a chord for me. She elegantly explains how making anything more accessible — from vegetable peelers to sidewalks to software — comes back to help all of us in a variety of ways: increased diversity, better designs in general, and — let’s face it — a huge number of people will need accessibility tools at some point in their lives.

“We are temporarily abled.”

Community, ecosystems, and offline content

I especially enjoyed Sri Ramkrishna’s thoughtful talk, GNOME and Sustainability – Ecosystem Management (YouTube). I came away from his session thinking how we don’t just need to recruit GNOME contributors; we need to connect free software ecosystems horizontally. Find those like-minded people in other projects and find places where we can collaborate, even if we aren’t all using GNOME as a desktop environment. For instance, I think we’re doing a great job of this across the freedesktop world, but it’s something we could think about more widely, too.

Who else benefits, or could benefit, from Meson, BuildStream, Flatpak, GJS, and the many other technologies GNOME champions? How can we advocate for these technologies in other communities and use those as bridges for each other’s benefit? How do we get their voices at events like GUADEC, and what stops us from lending our voices to theirs?

“We need to grow and feed our ecosystem, and build relations with other ecosystems.”

So I was pretty excited (mostly anxious, since I needed to use printed notes and there were no podiums, but also excited) to be doing a session with Manuel Quiñones a few hours later: Offline learning with GNOME and Kolibri (YouTube). I’ll write a more detailed blog post about it later on, but I didn’t anticipate quite how neatly our session would fit in with what other people were talking about.

At Endless, we have been working with offline content for a long time. We build custom Endless OS images designed for different contexts, with massive libraries of pre-installed educational resources. Resources like Wikipedia, books, educational games, and more: all selected to empower people with limited connectivity. The trick with offline content is it involves a whole lot of very large files, it needs to be possible to update it, and it needs to be easy to rapidly customize it for different deployments.

That becomes expensive to maintain, which is why we have started working with Kolibri.

Kolibri is an open source platform for offline-first teaching and learning, with a powerful local application and a huge library of freely licensed educational content. Like Endless OS, it is designed for difficult use cases. For example, a community with sporadic internet access can use Kolibri to share Khan Academy videos and exercises, as well as assignments for individual learners, between devices.

Using Kolibri instead of our older in-house solution means we can collaborate with an existing free software project that is dedicated to offline content. In turn, we are learning many interesting lessons as we build the Kolibri desktop app for GNOME. We hope those lessons will feed back into the Kolibri project to improve how it works on other platforms, too.

Giving our talk at GUADEC made me think about how there is a lot to gain when we bring these types of projects together.

The hallway track

Like I wrote earlier, this wasn’t a particularly code-oriented conference for me. I did sit down and poke at Break Timer for a while — in particular, reviving a branch with a GTK 4 port — and I had some nice chats about various other projects people are doing. (GNOME Crosswords was the silent star of the show). But I didn’t find many opportunities to actively collaborate on things. Something to aim for with my next GUADEC.

I wonder if the early 3pm stop each day was a bit of a contributing factor there, but it did make for some excellent outings, so I’m not complaining. The pictures say a lot!

Everyone here is amazing, humble and kind. I really cannot recommend enough, if you are interested in GNOME, check out GUADEC, or LAS, or another such event. It was tremendously valuable to be here and meet such a wide range of GNOME users and contributors. I came away with a better understanding of what I can do to contribute, and a renewed appreciation for this community.

My tiny file server with Ubuntu Core, Nextcloud and Syncthing
Sun, 06 May 2018

My annual Dropbox renewal date was coming up, and I thought to myself “I’m working with servers all the time. I shouldn’t need to pay someone else for this.” I was also knee deep in a math course, so I felt like procrastinating.

I’m really happy with the result, so I thought I would explain it for anyone else who wants to do the same. Here’s what I was aiming for:

  • Safe, convenient archiving for big files.
  • Instant sync between devices for stuff I’m working on.
  • Access over LAN from home, and over the Internet from anywhere else.
  • Regular, encrypted offsite backups.
  • Compact, low power hardware that I can stick in a closet and forget about.
  • Some semblance of security, at least so a compromised service won’t put the rest of the system at risk.

The hardware

I dabbled with a BeagleBoard that I used for an embedded systems course, and I pondered a Raspberry Pi with a case. I decided against both of those, because I wanted something with a bit more wiggle room. And besides, I like having a BeagleBoard free to mess around with now and then.

In the end, I picked out an Intel NUC, and I threw in an old SSD and a stick of RAM:

It’s tiny, it’s quiet, and it looks okay too! (Just find somewhere to hide the power brick). My only real complaint is the wifi hardware doesn’t work with older Linux kernels, but that wasn’t a big deal for my needs and I’m sure it will work in the future.

The software

I installed Ubuntu Core 16, which is delightful. Installing it is a bit surprising for the uninitiated because there isn’t really an install process: you just clone the image to the drive you want to boot from and you’re done. It’s easier if you do this while the drive is connected to another computer. (I didn’t feel like switching around SATA cables in my desktop, so I needed to write a different OS to a flash drive, boot from that on the NUC, transfer the Ubuntu Core image to there, then dd that image to the SSD. Kind of weird for this use case).

Now that I figured out how to run it, I’ve been enjoying how this system is designed to minimize the time you need to spend with your device connected to a screen and keyboard like some kind of savage. There’s a simple setup process (configure networking, log in to your Ubuntu One account), and that’s it. You can bury the thing somewhere and SSH to it from now on. In fact, you’re pretty much forced to: you don’t even get a login prompt. Chances are you won’t need to SSH to the system anyway since it keeps itself up to date. As someone who obsesses over loose threads, I’m finding this all very satisfying.

Although, with that in mind, one important thing: if you haven’t played with Ubuntu for a while, head over to login.ubuntu.com and make sure your SSH keys are up to date. The first time I set it up, I realized I had a bunch of obsolete SSH keys in my account and I had no way to reach the system from the laptop I was using. Fixing that meant changing Ubuntu Core’s writable files from another operating system. (I would love to know if there is a better way).
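
For reference, the workaround looks roughly like this — assuming the SSD is plugged into another Linux machine and the Ubuntu Core data partition is labelled “writable”; the exact home directory path under that partition (user-data/USERNAME below) may differ on your image, so treat this as a sketch rather than a recipe:

# With the drive attached to another Linux machine:
$ sudo mount /dev/disk/by-label/writable /mnt
$ cat ~/.ssh/id_rsa.pub | sudo tee -a /mnt/user-data/USERNAME/.ssh/authorized_keys
$ sudo umount /mnt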

The other software

Okay, using Ubuntu Core is probably a bit weird when I want to run all these servers and I’m probably a little picky, but it’s so elegant! And, happily, there are Snap packages for both Nextcloud and Syncthing. I ended up using both.

I really like how files you can edit are tucked away in /writable. For this guide,  I always refer to things by their full paths under /writable. I found thinking like that spared me getting lost in files that I couldn’t change, and it helped to emphasize the nature of this system.

DNS

Before I get to the fun stuff, there were some networking conundrums I needed to solve.

First, public DNS. My router has some buttons if you want to use a dynamic DNS service, but I just rolled my own thing. To start off, I added some additional DNS records pointing at my home IP address. My web host has an API for editing DNS rules, so I set up a dynamic DNS after everything else was working, but I will get to that further along.
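
The general shape of that kind of update script is roughly the following — the endpoint, record name and token here are entirely made up, since every host’s API is different:

#!/bin/sh
# Hypothetical dynamic DNS updater, run from cron or a systemd timer.
# Replace the URL, record name and API_TOKEN with whatever your DNS host expects.
CURRENT_IP=$(curl -s https://icanhazip.com)
curl -s -X PUT "https://dns-api.example.com/zones/example.com/records/core" \
  -H "Authorization: Bearer $API_TOKEN" \
  -d "{\"type\": \"A\", \"content\": \"$CURRENT_IP\"}"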

Next, my router didn’t support hairpinning (or NAT Loopback), so requests to core.example.com were still resolving to my public IP address, which means way too many hops for sending data around. My ridiculous solution: I’ll run my own DNS server, darnit.

To get started, check the network configuration in /writable/system-data/etc/netplan/00-snapd-config.yaml. You’ll want to make sure the system requests a static IP address (I used 192.168.1.2) and specifies its upstream nameservers explicitly. Mine looks like this:

network:
  ethernets:
    eth0:
      dhcp4: false
      dhcp6: false
      addresses: [192.168.1.2/24, '2001:1::2/64']
      gateway4: 192.168.1.1
      nameservers:
        addresses: [8.8.8.8, 8.8.4.4]
  version: 2

After changing the Netplan configuration, use sudo netplan generate to update the system. (You may also need sudo netplan apply, or a reboot, before the new settings take effect).

For the actual DNS server, we can install an unofficial snap that provides dnsmasq:

$ snap install dnsmasq-escoand

You’ll want to edit /writable/system-data/etc/hosts so the service’s domains resolve to the device’s local IP address:

127.0.0.1 localhost.localdomain localhost
::1 localhost6.localdomain6 localhost6

192.168.1.2 core.example.com
fe80::96c6:91ff:fe1a:6581 core.example.com

Now it’s safe to go into your router’s configuration, reserve an IP address for this device, and set it as your DNS server:

And that solved it.

To check, run tracepath from another computer on your network and the result should be something simple like this:

$ tracepath core.example.com
 1?: [LOCALHOST] pmtu 1500
 1: core.example.com 0.789ms reached
 1: core.example.com 0.816ms reached
 Resume: pmtu 1500 hops 1 back 1

While you’re looking at the router, you may as well forward some ports, too. By default you need TCP ports 80 and 443 for Nextcloud, and 22000 for Syncthing.

Nextcloud

The Nextcloud snap is fantastic. It already works out of the box: it adds a system service for its copy of Apache on port 80, and it comes with a bunch of scripts for setting up common things like SSL certificates. I wanted to use an external hard drive for its data store, so I needed to configure the mount point for that and grant the necessary permissions for the snap to access removable media.

Let’s set up that mount point first. These are configured with Systemd mount units, so we’ll want to create a file like /writable/system-data/etc/systemd/system/media-data1.mount. You need to tell it how to identify the storage device. (I always give them nice volume labels when I format them so it’s easy to use that). Note that the name of the unit file must correspond to the full name of the mount point:

[Unit]
Description=Mount unit for data1

[Mount]
What=/dev/disk/by-label/data1
Where=/media/data1
Type=ext4

[Install]
WantedBy=multi-user.target

One super cool thing here is you can start and stop the mount unit just like any other system service:

$ sudo systemctl daemon-reload
$ sudo systemctl start media-data1.mount
$ sudo systemctl enable media-data1.mount

Now let’s set up Nextcloud. The code repository for the Nextcloud snap has lots of documentation if you need.

$ snap install nextcloud
$ snap connect nextcloud:removable-media :removable-media
$ sudo snap run nextcloud.manual-install USERNAME PASSWORD
$ snap stop nextcloud

Before we do anything else we need to tell Nextcloud to store its data in /media/data1/nextcloud/, and allow access through the public domain from earlier. To do that, edit /writable/system-data/var/snap/nextcloud/current/nextcloud/config/config.php:

<?php
$CONFIG = array (
  'apps_paths' =>
  array (
    …
  ),
  …
  'trusted_domains' =>
  array (
    0 => 'localhost',
    1 => 'core.example.com'
  ),
  'datadirectory' => '/media/data1/nextcloud/data',
  …
);

Move the existing data directory to the new location, and restart the service:

$ snap stop nextcloud
$ sudo mkdir /media/data1/nextcloud
$ sudo mv /writable/system-data/var/snap/nextcloud/common/nextcloud/data /media/data1/nextcloud/
$ snap start nextcloud

Now you can enable HTTPS. There is a lets-encrypt option (for letsencrypt.org), which is very convenient. The -d flag in the first command does a dry run, so you can make sure everything works before requesting a real certificate:

$ sudo snap run nextcloud.enable-https lets-encrypt -d
$ sudo snap run nextcloud.enable-https lets-encrypt

At this point you should be able to reach Nextcloud from another computer on your network, or remotely, using the same domain.

Syncthing

If you aren’t me, you can probably stop here and use Nextcloud, but I decided Nextcloud wasn’t quite right for all of my files, so I added Syncthing to the mix. It’s like a peer to peer Dropbox, with a somewhat more geeky interface. You can link your devices by globally unique IDs, and they’ll find the best way to connect to each other and automatically sync files between your shared folders. It’s very elegant, but I wasn’t sure about using it without some kind of central repository. This way my systems will sync between each other when they can, but there’s one central device that is always there, ready to send or receive the newest versions of everything.

Syncthing has a snap, but it is a bit different from Nextcloud, so the package needed a few extra steps. Syncthing, like Dropbox, runs one instance for each user, instead of a monolithic service that serves many users. So, it doesn’t install a system service of its own, and we’ll need to figure that out. First, let’s install the package:

$ snap install syncthing
$ snap connect syncthing:home :home
$ snap run syncthing

Once you’re satisfied, you can stop syncthing. That isn’t very useful yet, but we needed to run it once to create a configuration file.

So, first, we need to give syncthing a place to put its data, replacing “USERNAME” with your system username:

$ sudo mkdir /media/data1/syncthing
$ sudo chown USERNAME:USERNAME /media/data1/syncthing

Unfortunately, you’ll find that the syncthing application doesn’t have access to /media/data1, and its snap doesn’t support the removable-media interface, so it’s limited to your home folder. But that’s okay, we can solve this by creating a bind mount. Let’s create a mount unit in /writable/system-data/etc/systemd/system/home-USERNAME-syncthing.mount:

[Unit]
Description=Mount unit for USERNAME-syncthing

[Mount]
What=/media/data1/syncthing/USERNAME
Where=/home/USERNAME/syncthing
Type=none
Options=bind

[Install]
WantedBy=multi-user.target

(If you’re wondering, yes, systemd figures out that it needs to mount media-data1 before it can create this bind mount, so don’t worry about that).

$ sudo systemctl daemon-reload
$ sudo systemctl start home-USERNAME-syncthing.mount
$ sudo systemctl enable home-USERNAME-syncthing.mount

Now update Syncthing’s configuration and tell it to put all of its shared folders in that directory. Open /home/USERNAME/snap/syncthing/common/syncthing/config.xml in your favourite editor, and make sure you have something like this:

<configuration version="27">
  <folder id="default" label="Default Folder" path="/home/USERNAME/syncthing/Sync" type="readwrite" rescanIntervalS="60" fsWatcherEnabled="false" fsWatcherDelayS="10" ignorePerms="false" autoNormalize="true">
    …
  </folder>
  <device id="…" name="core.example.com" compression="metadata" introducer="false" skipIntroductionRemovals="false" introducedBy="">
    <address>dynamic</address>
    <paused>false</paused>
    <autoAcceptFolders>false</autoAcceptFolders>
  </device>
  <gui enabled="true" tls="false" debugging="false">
    <address>192.168.1.2:8384</address>
    …
  </gui>
  <options>
    <defaultFolderPath>/home/USERNAME/syncthing</defaultFolderPath>
  </options>
</configuration>

With those changes, Syncthing will create new folders inside /home/USERNAME/syncthing, you can move the default “Sync” folder there as well, and its web interface will be accessible over your local network at http://192.168.1.2:8384. (I’m not enabling TLS here, for two reasons: it’s just the local network, and Nextcloud enables HSTS for the core.example.com domain, so things get confusing when you try to access it like that).

You can try snap run syncthing again, just to be sure.

Now we need to add a service file so Syncthing runs automatically. We could create a service that has the User field filled in and it always runs as a certain user, but for this type of service it doesn’t hurt to set it up as a template unit. Happily, Syncthing’s documentation provides a unit file we can borrow, so we don’t need to do much thinking here. You’ll need to create a file called /writable/system-data/etc/systemd/system/syncthing@.service:

[Unit]
Description=Syncthing - Open Source Continuous File Synchronization for %I
Documentation=man:syncthing(1)
After=network.target

[Service]
User=%i
ExecStart=/usr/bin/snap run syncthing -no-browser -logflags=0
Restart=on-failure
SuccessExitStatus=3 4
RestartForceExitStatus=3 4

[Install]
WantedBy=multi-user.target

Note that our ExecStart line is a little different from theirs, since we need it to run syncthing under the snap program.

$ sudo systemctl daemon-reload
$ sudo systemctl start syncthing@USERNAME.service
$ sudo systemctl enable syncthing@USERNAME.service

And there you have it, we have Syncthing! The web interface for the Ubuntu Core system is only accessible over your local network, but assuming you forwarded port 22000 on your router earlier, you should be able to sync with it from anywhere.

If you install the Syncthing desktop client (snap install syncthing in Ubuntu, dnf install syncthing-gtk in Fedora), you’ll be able to connect your other devices to each other. On each device that you connect to this one, make sure you set core.example.com as an Introducer. That way they will discover each other through it, which saves a bit of time.

Once your devices are all connected, it’s a good idea to go to Syncthing’s web interface at http://192.168.1.2:8384 and edit the settings for each device. You can enable “Auto Accept” so whenever a device shares a new folder with core.example.com, it will be accepted automatically.

Nextcloud + Syncthing

There is one last thing I did here. Syncthing and Nextcloud have some overlap, but I found myself using them for pretty different sorts of tasks. I use Nextcloud for media files and archives that I want to store on a single big hard drive, and occasionally stream over the network; and I use Syncthing for files that I want to have locally on every device.

Still, it would be nice if I could have Nextcloud’s web UI and sharing options with Syncthing’s files. In theory we could bind mount Syncthing’s data directory into Nextcloud’s data directory, but the Nextcloud and Syncthing services run as different users. So, that probably won’t go particularly well.

Instead, it works quite well to mount Syncthing’s data directory using SSH.

First, in Nextcloud, go to the Apps section and enable the “External storage support” app.

Now you need to go to Admin, and “External storages”, and allow users to mount external storage.

Finally, go to your Personal settings, choose “External storages”, add a folder named Syncthing, and tell it to connect over SFTP. Give it the hostname of the system that has Syncthing (so, core.example.com), the username for the user that is running Syncthing (USERNAME), and the path to Syncthing’s data files (/home/USERNAME/syncthing). It will need an SSH key pair to authenticate.

When you click Generate keys it will create a key pair. You will need to copy and paste the public key (which appears in the text field) to /home/USERNAME/.ssh/authorized_keys.

If you try the gear icon to the right, you’ll find an option to enable sharing for the external storage, which is very useful here. Now you can use Nextcloud to view, share, or edit your files from Syncthing.

Backups

I spun tires for a while with backups, but eventually I settled on Restic. It is fast, efficient, and encrypted. I’m really impressed with it.

Unfortunately, the snap for Restic doesn’t support strict confinement, which means it won’t work on Ubuntu Core.  So I cheated. Let’s set this up under the root user.

You can find releases of Restic as prebuilt binaries. We’ll also need to install a snap that includes curl. (Or you can download the file on another system and transfer it with scp, but this blog post is too long already).

$ snap install demo-curl
$ snap run demo-curl.curl -L "https://github.com/restic/restic/releases/download/v0.8.3/restic_0.8.3_linux_amd64.bz2" | bunzip2 > restic
$ chmod +x restic
$ sudo mkdir /root/bin
$ sudo cp restic /root/bin

We need to figure out the environment variables we want for Restic. That depends on what kind of storage service you’re using. I created a file with those variables at /root/restic-MYACCOUNT.env. For Backblaze B2, mine looked like this:

#!/bin/sh

export RESTIC_REPOSITORY="b2:core-example-com--1"
export B2_ACCOUNT_ID="…"
export B2_ACCOUNT_KEY="…"
export RESTIC_PASSWORD="…"

Next, make a list of the files you’d like to back up in /root/backup-files.txt:

/media/data1/nextcloud/data/USERNAME/files
/media/data1/syncthing/USERNAME
/writable/system-data/

I added a couple of quick little helper scripts to handle the most common things you’ll be doing with Restic:

/root/bin/restic-MYACCOUNT.sh

#!/bin/sh

. /root/restic-MYACCOUNT.env
/root/bin/restic "$@"

Use this as a shortcut to run restic with the correct environment variables.

/root/bin/backups-push.sh

#!/bin/sh

RESTIC="/root/bin/restic-MYACCOUNT.sh"
RESTIC_ARGS="--cache-dir /root/.cache/restic"

${RESTIC} ${RESTIC_ARGS} backup --files-from /root/backup-files.txt --exclude ".stversions" --exclude-if-present ".backup-ignore" --exclude-caches

This will ignore any directory that contains a file named “.backup-ignore”. (So to stop a directory from being backed up, you can run touch /path/to/the/directory/.backup-ignore). This is a great way to save time if you have some big directories that don’t really need to be backed up, like a directory full of, um, Linux ISOs (shifty eyes).

/root/bin/backups-clean.sh

#!/bin/sh

RESTIC="/root/bin/restic-MYACCOUNT.sh"
RESTIC_ARGS="--cache-dir /root/.cache/restic"

${RESTIC} ${RESTIC_ARGS} forget --keep-daily 7 --keep-weekly 8 --keep-monthly 12 --prune
${RESTIC} ${RESTIC_ARGS} check

This will periodically remove old snapshots, prune unused blocks, and then check for errors.

Make sure all of those scripts are executable:

$ sudo chmod +x /root/bin/restic-MYACCOUNT.sh
$ sudo chmod +x /root/bin/backups-push.sh
$ sudo chmod +x /root/bin/backups-clean.sh

We still need to add systemd stuff, but let’s try this thing first!

$ sudo /root/bin/restic-MYACCOUNT.sh init
$ sudo /root/bin/backups-push.sh
$ sudo /root/bin/restic-MYACCOUNT.sh snapshots

Have fun playing with Restic, try restoring some files, note that you can list all the files in a snapshot and restore specific ones. It’s a really nice little backup tool.
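
For example, using the wrapper script from above (the paths here are just placeholders), listing a snapshot’s contents and restoring a single file looks something like this:

$ sudo /root/bin/restic-MYACCOUNT.sh ls latest
$ sudo /root/bin/restic-MYACCOUNT.sh restore latest --target /tmp/restored --include /media/data1/syncthing/USERNAME/some-file.txt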

It’s pretty easy to get systemd helping here as well. First let’s add our service file. This is a different kind of system service because it isn’t a daemon. Instead, it is a oneshot service. We’ll save it as /writable/system-data/etc/systemd/system/backups-task.service.

[Unit]
Description=Regular system backups with Restic

[Service]
Type=oneshot
ExecStart=/bin/sh /root/bin/backups-push.sh
ExecStart=/bin/sh /root/bin/backups-clean.sh

Now we need to schedule it to run on a regular basis. Let’s create a systemd timer unit for that: /writable/system-data/etc/systemd/system/backups-task.timer.

[Unit]
Description=Run backups-task daily

[Timer]
OnCalendar=09:00 UTC 
Persistent=true

[Install]
WantedBy=timers.target

One gotcha to notice here: with newer versions of systemd, you can use time zones like PDT or America/Vancouver for the OnCalendar entry, and you can test how that will work using systemd-analyze, like systemd-analyze calendar "09:00 America/Vancouver". Alas, that is not the case in Ubuntu Core 16, so you’ll probably have the best luck using UTC and calculating timezones yourself.

Now that you have your timer and your service, you can test the service by starting it:

$ sudo systemctl start backups-task.service
$ sudo systemctl status backups-task.service

If all goes well, enable the timer:

$ sudo systemctl start backups-task.timer
$ sudo systemctl enable backups-task.timer

To see your timer, you can use systemctl list-timers:

$ sudo systemctl list-timers
…
Sat 2018-04-28 09:00:00 UTC 3h 30min left Fri 2018-04-27 09:00:36 UTC 20h ago backups-task.timer backups-task.service
…

Some notes on security

Some people (understandably) dislike running this kind of web service on port 80. Nextcloud’s Apache instance runs on port 80 and port 443 by default, but you can change that using, for example, snap set nextcloud ports.http=81 ports.https=444. However, you may need to generate a self-signed SSL certificate in that case.

Nextcloud (like any daemon installed by Snappy) runs as root, but, as a snap, it is confined to a subset of the system. There is some official documentation about security and sandboxing in Ubuntu Core if you are interested. You can always run sudo snap run --shell nextcloud.occ to get an idea of what it has access to.

If you feel paranoid about how we gave Nextcloud access to all removable media, you can create a bind mount from /writable/system-data/var/snap/nextcloud/common/nextcloud to /media/data1/nextcloud, like we did for Syncthing, and snap disconnect nextcloud:removable-media. Now it only has access to those files on the other end of the bind mount.
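
For what it’s worth, that unit would look roughly like the Syncthing one — I haven’t tested this exact variant, so double-check the paths against your own setup. Since the unit file name must match the mount point, save it as /writable/system-data/etc/systemd/system/var-snap-nextcloud-common-nextcloud.mount:

[Unit]
Description=Bind mount Nextcloud data from the external drive

[Mount]
What=/media/data1/nextcloud
Where=/var/snap/nextcloud/common/nextcloud
Type=none
Options=bind

[Install]
WantedBy=multi-user.target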

Conclusion

So that’s everything!

This definitely isn’t a tiny amount of setup. It took an afternoon. (And it’ll probably take two or three years to pay for itself). But I’m impressed by how smoothly it all went, and with a few exceptions where I was nudged into loopy workarounds,  it feels simple and reproducible. If you’re looking at hosting more of your own files, I would happily recommend something like this.

GNOME Break Timer: Final report
Mon, 23 Sep 2013

Well, it’s September, so I guess it’s time to call it quits with that whole “summer” thing. This has been a really nice few months. I’m very grateful that I could participate in Google Summer of Code this year with my project to build a shiny new Break Timer application for GNOME 3.

This was meant to be a picture of Fall’s first day of torrential wind and rain, but the rain stopped as soon as I went outside and this is all I got. Stupid rain.

So, where am I leaving you? With GNOME Break Timer 1.1, of course! (And I’m not leaving). I think my project over the summer has been successful. At times I have had the unmistakeable feeling that I was trying to spread too little butter over too much bread, but we always found something interesting to work on (including a nifty and GNOMEy side project that I’ll talk about really soon, but mostly on Break Timer itself) and I think we have some good quality code as a result — and a lovely little application, too!

GNOME Break Timer 1.1

Well, it has an About dialog

Don’t worry, that icon is a quick placeholder, and I realize it looks confusingly similar to either a normal clock, a speaker or a power button. If you feel strongly about it, I will be eternally grateful if you check out the art request for a new icon.

Here’s what I did this summer, in summary…

  • Cleaned up a lot of old code, fixing bugs and removing oodles of unwanted complexity.
  • Adopted a “normal” build system and fought off my intense fear of Automake. (I now simply dislike Automake. That feels like progress).
  • Made a cute little application to get started with Break Timer, view the current break status, and set a break schedule. I think it’s pretty cool.
  • Improved the activity tracking code so it’ll be way easier to adapt to changes in the input stack. I still need to take a close look at how this will work under Wayland, but I’m less worried, at least.
  • Polished up the “take a break” notifications and added automatic screen locking, as well as better awareness of the system in general.
  • Implemented really awesome state saving between sessions.
  • Investigated per-application defaults for notification appearance. (Didn’t go brilliantly, but I’m going to try again. More on that later).
  • Wrote lots of tests. I didn’t get to write any UI tests, and I was hoping to find out about testing timeouts and timers but I’ll need to save that for another day. Probably a rainy one. Still, it should be very hard for someone to (unknowingly) break any of the more fiddly parts of the application. I’m sure that will pay off in the long run.
  • Learned all about GObject, Vala, Cairo, unit tests, GNOME, and wonderful new things in GTK!
  • And I wrote a blog post for each of those things.

All sorts of people have helped me with my project over the summer. Thanks, Jasper and Allan for being so patient with me :) And thanks, GNOME! You folks are all brilliant. I’m definitely going to keep going with this project and I’m excited to work with you all in the future.

GNOME Break Timer: Week 13
Sat, 14 Sep 2013

I’m nearing the end of a very busy few weeks, and getting very close to that soft pencils down date! With school starting up again this hasn’t been my most productive week on the GNOME Break Timer front, but I’m pretty happy with what’s been done.

First, most importantly, Jasper and the GNOME admins helped me get set up on gnome.org’s infrastructure! This is really exciting to me, because hosting and bug tracking looked like a crazy jumble throughout my project, and they got it all sorted out very efficiently. This feels a lot more real now, somehow, and I feel like I’m in a better position to continue maintaining this for a long time.

So, here are the important links:

Incidentally: l10n.gnome.org, you’re awesome. I noticed a bunch of translations committed before I even knew gnome-break-timer was up there, and I was blown away.

What else is new? Unit tests, bits and pieces for maintainability (including code format and documentation), and some visual fun for the status panel.

GNOME Clocks has a really cool new widget for its countdowns and timers. I went ahead and borrowed that design to replace the very boring (and repetitive) icons we had before. I think this helps to quickly get across what’s going on. Also, I’m just a fan of common UI elements.

We show how close a break is like how GNOME Clocks shows a countdown

I also added some arrows, like the ones in Allan’s early mockups. This was all really fun: I hadn’t really explored Cairo before, and I was very impressed with how easy it was to get nice looking curves drawing on the screen. It took a bit of tweaking to get that arrow arranged neatly, without overlapping the text (ever), but I think I got it where I needed it. I guess I’ll wait and see if anyone manages to break it.

The arrows probably aren’t exactly necessary (I hope), but they add a bit of whimsy that I find quite appealing
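
For a sense of what that involves, here is a toy version of the kind of drawing call I mean — this isn’t Break Timer’s actual drawing code, and the coordinates are arbitrary:

private void draw_curved_arrow(Cairo.Context cr) {
	cr.set_source_rgba(0.4, 0.4, 0.4, 1.0);
	cr.set_line_width(2);
	cr.move_to(10, 80);
	// a single cubic Bézier segment: two control points, then the end point
	cr.curve_to(40, 10, 110, 10, 140, 80);
	cr.stroke();
}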

Over the weekend I’m going to be busy with yet more stuff my past self went and volunteered me for, but I’ll be back soon with some more progress. (Also, I promise one of those distractions is a really awesome charity web project that I’m very excited about. I’ll be able to show it off in the start of November, and I honestly can’t wait). I’m down to “nice to have” features at this point, and the next one is collecting some basic statistics like how many breaks you ignored (or didn’t ignore) last week. Of course, this isn’t so the application can label anyone a bad person. Instead, I’m hoping this will make way for simple, positive and helpful messages in the status dialog. A lot of that could use extra design work, but at least having the data in place will be a nice start – and I’ll certainly be giving it a shot anyway.

Other than that, I’m going to be improving the experience for translators with notes for some of the weirder strings in the application. A few more unit tests, some documentation, a placeholder icon, and a 1.0 release. Hooray!

Since I’ve been working on a lot of cleanup already, and the last few weeks were a bit slow, I’ll be working on code past Monday the 16th’s soft pencils down date (at risk of some panic near the end). Of course, that isn’t terribly important: I look forward to maintaining and improving this well into the future, too.

GNOME Break Timer: Week 10
Mon, 26 Aug 2013

Earlier, when I mentioned that I was going to work on polish, I was thinking of a rather unfortunate line in my original roadmap that honestly just said “polish.” I started this chunk of my GSoC project with a new list of things I have been putting off for said polish phase, and it turned out to be quite substantial. I have several cool new things this week.

First, I added a simple, custom container widget that allocates space for all of its children – whether or not they are visible – and distributes that space among the visible children. This way, our Break Schedule dialog is always the same shape, even though its contents change. This is sort of like using a GtkStack with the homogeneous property set to true. Once I wrapped my head around size requests and allocations and requisitions, it was really easy to implement. I’m not sure if it’s correct, strictly speaking, but it works for me.

class FixedSizeGrid : Gtk.Grid {
	public FixedSizeGrid() {
		Object();
	}

	public override void adjust_size_request (Gtk.Orientation orientation, ref int minimum_size, ref int natural_size) {
		foreach (Gtk.Widget widget in this.get_hidden_children()) {
			int widget_allocated_size = 0;

			if (orientation == Gtk.Orientation.VERTICAL && this.orientation == Gtk.Orientation.VERTICAL) {
				widget_allocated_size = widget.get_allocated_height();
			} else if (orientation == Gtk.Orientation.HORIZONTAL && this.orientation == Gtk.Orientation.HORIZONTAL) {
				widget_allocated_size = widget.get_allocated_width();
			}

			minimum_size += widget_allocated_size;
			natural_size += widget_allocated_size;

			widget.adjust_size_request(orientation, ref minimum_size, ref natural_size);
		}

		base.adjust_size_request(orientation, ref minimum_size, ref natural_size);
	}

	private List<Gtk.Widget> get_hidden_children() {
		var hidden_children = new List<Gtk.Widget>();
		foreach (Gtk.Widget widget in this.get_children()) {
			if (! widget.is_visible()) hidden_children.append(widget);
		}
		return hidden_children;
	}
}

The big thing I worked on over the last two weeks was making the break timer service remember its state between sessions. Break Timer will remember if you need to take a break after you log out or restart the computer, and in the future it will be able to keep track of interesting statistics very easily. This also makes it much, much harder to “cheat,” by accident or intentionally, because it will always pick up where it left off.

I decided to store the program’s state in the user’s cache folder using a json file, since we can work with json-glib quite nicely from Vala code. That happens when the process is (cleanly) interrupted or when gnome-session says it’s time to stop. (Thanks to Guilhem Bonnefille on gnome-devel-list for pointing me in the right direction about gnome-session). When the application starts, it reads that cache file and updates all its timers and counters accordingly. The way I implemented this leaves the door wide open for keeping breaks in sync between systems, over the network, which would be really cool.
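
As a rough illustration of the approach (this is not the actual Break Timer code, and the file layout and field names are made up), saving a bit of state with json-glib from Vala looks something like this:

void save_state(int64 elapsed_seconds) {
	var builder = new Json.Builder();
	builder.begin_object();
	builder.set_member_name("elapsed_seconds");
	builder.add_int_value(elapsed_seconds);
	builder.end_object();

	var generator = new Json.Generator();
	generator.set_root(builder.get_root());

	// Keep the state file in the user's cache directory
	string path = Path.build_filename(Environment.get_user_cache_dir(), "gnome-break-timer", "state.json");
	DirUtils.create_with_parents(Path.get_dirname(path), 0755);
	try {
		generator.to_file(path);
	} catch (Error e) {
		GLib.warning("Could not save state: %s", e.message);
	}
}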

I ran into some finicky problems working on this; I guess I don’t usually have to work with quite as many moving parts. I was very glad that several years of people extolling the virtues of test-driven development (most recently David Cameron’s awesome Quality Assurance course at SFU) finally rubbed off on me: as soon as I wrote a test case for the problem I was working on, it all got way easier. And I got a (slightly clunky) test case for free! I must remember to do more of that.

GNOME Break Timer: Week 8
Wed, 14 Aug 2013

Seriously regretting my boring choice of titles for these blog posts, but it’s too late to change it now.

Why, hello there! The last two weeks haven’t been the brilliantest for my work on GNOME Break Timer – partly because all my other unrelated projects, which I’ve been mostly ignoring in favour of Break Timer, have suddenly flared up and demanded attention – but I still got some nice stuff done. And I passed my statistics course and almost finished a cool charity website. (More on that soon, I hope?).

Most importantly, I decided that it’s time to add some tests. I added a test folder to the build system, and after bracing myself for a nightmare I was really impressed with how easy it was to get going. Further evidence that my fear of tests is entirely irrational.

With all the GNOME project automake stuff set up, I just needed to build a test runner using GLib’s test framework and list it in the TEST_PROGS variable. make check gets figured out all on its own. So, that was lovely!

While everyone was off having fun at GUADEC, I fooled around writing unit tests. Of course, I did run into some trickiness: most of my time was spent making the application more testable (cursing Automake some more, putting everything in noinst_LTLIBRARIES), and fixing bugs that I encountered in the process of writing unit tests. This is all for a good cause, though: I am feeling more and more confident that the bus factor for this project can increase beyond 1.
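
Incidentally, the tests/Makefile.am from all of that ends up looking roughly like this — the library name is simplified here, and the TEST_PROGS machinery comes from the shared Makefile.decl rules, so treat it as a sketch rather than something to paste in:

include $(top_srcdir)/Makefile.decl

TEST_PROGS += test-runner
noinst_PROGRAMS = $(TEST_PROGS)

test_runner_SOURCES = tests.vala test-example.vala
# Link against the application code, which now lives in a noinst convenience library
test_runner_LDADD = $(top_builddir)/common/libbreak-timer.la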

Of course, I didn’t write unit tests for everything. That would be lovely, but it could also take quite a long time. (Quicker now, of course, since all the kinks have been worked out). Instead, I focused on parts of the application that I have broken by accident in the past: monitoring activity, and triggering breaks. The application uses many global things like system time, as well as timers and timeouts. Those can all be rather troublesome to test, unfortunately, but I found my way around them. I created a custom g_get_real_time function that will either return the actual time or a time set by the test suite, so we can rigorously test how certain objects behave as the time changes.
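
The shape of that is roughly the following — not the project’s exact code, and the names are made up for illustration:

namespace TestableTime {
	private bool frozen = false;
	private int64 frozen_time = 0;

	// Application code calls this instead of GLib.get_real_time() directly
	public int64 get_real_time() {
		return frozen ? frozen_time : GLib.get_real_time();
	}

	// The test suite pins the clock to a known value (in microseconds)…
	public void freeze_time(int64 microseconds) {
		frozen = true;
		frozen_time = microseconds;
	}

	// …and releases it again afterwards
	public void unfreeze_time() {
		frozen = false;
	}
}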

Most of this is quite boring, but I’m happy with it anyway. I wasn’t thrilled with GLib’s syntax for writing a test suite – I’m used to wrapping these things in objects – so I borrowed the TestSuite and TestCase classes from libgee’s test suite, adding some extra twists.

Here is tests.vala, which is used by all of the test runners:

// GLib's TestSuite and TestCase are compact classes, so we wrap them in real GLib.Objects for convenience
// This base code is partly borrowed from libgee's test suite, at https://git.gnome.org/browse/libgee

public abstract class SimpleTestSuite : Object {
	private GLib.TestSuite g_test_suite;
	private Adaptor[] adaptors = new Adaptor[0];

	private class Adaptor {
		private SimpleTestSuite test_suite;
		private SimpleTestCase test;

		public Adaptor(SimpleTestSuite test_suite, owned SimpleTestCase test) {
			this.test_suite = test_suite;
			this.test = (owned)test;
		}

		private string get_short_name() {
			string base_name = this.test_suite.get_name();
			string test_full_name = this.test.get_name();
			if (test_full_name.has_prefix(base_name)) {
				return test_full_name.splice(0, base_name.length);
			} else {
				return test_full_name;
			}
		}

		private void setup(void *fixture) {
			this.test_suite.setup();
		}

		private void run(void *fixture) {
			this.test.run(this.test_suite);
		}

		private void teardown(void *fixture) {
			this.test_suite.teardown();
		}

		public GLib.TestCase get_g_test_case() {
			return new GLib.TestCase(
				this.get_short_name(),
				(TestFixtureFunc)this.setup,
				(TestFixtureFunc)this.run,
				(TestFixtureFunc)this.teardown
			);
		}
	}

	public SimpleTestSuite() {
		var name = this.get_name();
		this.g_test_suite = new GLib.TestSuite(name);
	}

	public void add_to(GLib.TestSuite parent) {
		parent.add_suite(this.g_test_suite);
	}

	public GLib.TestSuite get_g_test_suite() {
		return this.g_test_suite;
	}

	public string get_name() {
		return this.get_type().name();
	}

	public void add_test(owned SimpleTestCase test) {
		var adaptor = new Adaptor(this, (owned)test);
		this.adaptors += adaptor;
		this.g_test_suite.add(adaptor.get_g_test_case());
	}

	public virtual void setup() {
	}

	public virtual void teardown() {
	}
}

public interface SimpleTestCase<T> : Object {
	public abstract void run(T context);

	public void add_to(SimpleTestSuite test_suite) {
		test_suite.add_test(this);
	}

	public string get_name() {
		return this.get_type().name();
	}
}

class TestRunner : Object {
	private GLib.TestSuite root_suite;

	private File tmp_dir;
	const string SCHEMA_FILE_NAME = "org.gnome.break-timer.gschema.xml";

	public TestRunner(ref unowned string[] args, GLib.TestSuite? root_suite = null) {
		GLib.Test.init(ref args);
		if (root_suite == null) {
			this.root_suite = GLib.TestSuite.get_root();
		} else {
			this.root_suite = root_suite;
		}
	}

	public void add(SimpleTestSuite suite) {
		suite.add_to(this.root_suite);
	}

	public virtual void global_setup() {
		try {
			var tmp_path = DirUtils.make_tmp("gnome-break-timer-test-XXXXXX");
			tmp_dir = File.new_for_path(tmp_path);
		} catch (Error e) {
			GLib.warning("Error creating temporary directory for test files: %s".printf(e.message));
		}

		string target_data_path = Path.build_filename(tmp_dir.get_path(), "share");
		string target_schema_path = Path.build_filename(tmp_dir.get_path(), "share", "glib-2.0", "schemas");

		Environment.set_variable("GSETTINGS_BACKEND", "memory", true);

		var original_data_dirs = Environment.get_variable("XDG_DATA_DIRS");
		Environment.set_variable("XDG_DATA_DIRS", "%s:%s".printf(target_data_path, original_data_dirs), true);

		File source_schema_file = File.new_for_path(
			Path.build_filename(get_top_builddir(), "data", SCHEMA_FILE_NAME)
		);

		File target_schema_dir = File.new_for_path(target_schema_path);
		try {
			target_schema_dir.make_directory_with_parents();
		} catch (Error e) {
			GLib.warning("Error creating directory for schema files: %s", e.message);
		}

		File target_schema_file = File.new_for_path(
			Path.build_filename(target_schema_dir.get_path(), SCHEMA_FILE_NAME)
		);

		try {
			source_schema_file.copy(target_schema_file, FileCopyFlags.OVERWRITE);
		} catch (Error e) {
			GLib.warning("Error copying schema file: %s", e.message);
		}

		int compile_schemas_result = Posix.system("glib-compile-schemas %s".printf(target_schema_path));
		if (compile_schemas_result != 0) {
			GLib.warning("Could not compile schemas in %s", target_schema_path);
		}
	}

	public virtual void global_teardown() {
		if (tmp_dir != null) {
			var tmp_dir_path = tmp_dir.get_path();
			int delete_tmp_result = Posix.system("rm -rf %s".printf(tmp_dir_path));
			if (delete_tmp_result != 0) {
				GLib.warning("Could not delete temporary files in %s", tmp_dir_path);
			}
		}
	}

	public int run() {
		this.global_setup();
		GLib.Test.run();
		this.global_teardown();
		return 0;
	}

	private static string get_top_builddir() {
		var builddir = Environment.get_variable("top_builddir");
		if (builddir == null) builddir = "..";
		return builddir;
	}
}

And here’s a really simple test suite and test runner:

public class test_Example : SimpleTestSuite {
	public string? foo;

	public test_Example() {
		new test_example_foo_is_bar().add_to(this);
	}

	public override void setup() {
		this.foo = "bar";
	}
}

class test_example_foo_is_bar : Object, SimpleTestCase<test_Example> {
	public void run(test_Example context) {
		assert(context.foo == "bar");
	}
}

public static int main(string[] args) {
	var runner = new TestRunner(ref args);
	runner.add(new test_Example());
	return runner.run();
}

Of course, you might note that this still isn’t thread-safe since we’re passing the same test_Example instance as the parameter for all of our SimpleTestCases, and the syntax is slightly unusual, but I’m quite fond of the extra brevity. One nice bit is this figures out the name of each test based on GObject type information, so we never need to write it explicitly. The test runner ultimately says that a test named “/test_Example/test_example_foo_is_bar” has passed, and it can deal with all sorts of stuff well away from the test code. It’s worked well so far, anyway.

So, that’s about it for the last two weeks. I also submitted an art request for some new icons, and I’m going to try following up on that where I can. This is definitely in the “polish” phase – just with a lot yet to be polished.
