How do we build disk images for Screenly?

When you download a disk image, such as Raspbian for your Raspberry Pi, or a firmware update for your router, you probably don’t spend much time thinking about how it was built. Neither did I until I started working on Screenly.

I was recently in a workgroup for IoT Mark, an initiative working to create a certification mark for trustworthy IoT devices. One of the areas IoT Mark sets out to certify is device security. This, of course, is not all that strange given the sad state of many IoT devices. As we were having these discussions, we started talking about the pipeline for creating firmware, and I learned a lot about how many companies produce it. Firmware is often built on a developer’s desktop, with files copied back and forth without proper version control. While I am sure this practice is on the decline, it is still very common among companies today. Additionally, few companies talk openly about how they actually build their firmware.

We want to be transparent about how we operate and use technology at Screenly, and that philosophy extends to how we build our disk images and what we do to build and ship secure disk images to our customers.

Before we dive into our current process, let’s quickly review where we started. In the beginning, our process was rather primitive and far from ideal. Here’s what we did for our early images for Screenly Pro:

  • Download the latest Raspbian image (and verify the download with its MD5 sum).
  • Flash it to an SD card and boot the Raspberry Pi.
  • Copy in and run a set of Bash scripts to customize the image (harden security, etc.).
  • Install the Screenly Pro application stack and verify that it worked.
  • Shut down the device.
  • Mount the SD card on a computer and run a script that made some final changes that couldn’t be done on the Raspberry Pi.
  • Run another script to copy the SD card contents to disk and compress them. This generated an ‘official disk image’.
  • Test the disk image on a different SD card.
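The checksum-verification step in the list above can be sketched in shell. The file here is just a stand-in for a real Raspbian download; normally the vendor publishes the `.md5` file alongside the image:

```shell
# Stand-in for a downloaded Raspbian image (illustrative only).
echo "pretend-raspbian-image" > raspbian.img

# Normally the vendor publishes this checksum file; we recreate it here.
md5sum raspbian.img > raspbian.img.md5

# Verify the download; prints 'raspbian.img: OK' on success and
# exits non-zero on a mismatch.
md5sum -c raspbian.img.md5
```

Note that MD5 only guards against accidental corruption; for protection against tampering, a stronger hash such as SHA-256, ideally with a signature, is preferable.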

Since I’ve been involved with the Raspberry Pi community since the very first days, I can justifiably say that this isn’t all that rare. Many disk images are built like this for various projects. So, what’s wrong with this process?

It’s error prone

Every time a build process involves human interaction, there’s a decent chance something will go wrong. It’s easy to miss a step, and, if you don’t have the proper tests in place, a broken disk image can make it all the way to a customer. While all our scripts were version controlled in a Git repository, it was still possible to forget to pull down the latest changes before building.

Moreover, if the SD card you used got corrupted in the process, you would likely end up with a corrupt disk image that produced strange errors.

It’s slow

If you look at the steps above, the process requires a physical device and both writing to and reading from an SD card. A single build could take 30-60 minutes. Just compressing a 4GB disk image after copying it from the SD card takes a long time, even on a modern computer.

It’s hard to track and control changes

Let’s say your previous version was using Raspbian from 2016-06-15 and your new image is 2016-08-12. What changed? Sure, you can probably dig out the changes somewhere, but they’re hardly easy to track.

We somewhat mitigated this for subsequent updates by setting up our own apt mirror, where we were able to lock down package versions. This, of course, came with the downside of having to manage and test changes there too.
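For reference, locking a device to a private mirror can be done with an apt preferences file. The hostname below is a hypothetical mirror, not Screenly’s actual infrastructure:

```
# /etc/apt/preferences.d/01-internal-mirror  (hostname is illustrative)
Package: *
Pin: origin "apt-mirror.example.com"
Pin-Priority: 1001
```

A priority above 1000 makes apt prefer the mirror’s versions even when other sources offer newer packages.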

So, what’s a better solution?

The best solution is to run the whole process in a fully automated build system. Assuming you have a separate pipeline for the application itself, the build flow would look something like this:

  • Trigger a build on the Continuous Integration (CI) system.
  • Create a disk image from scratch that includes the application build.
  • Copy the disk image, along with its hashes (e.g. MD5), to remote storage such as Amazon S3 or Google Cloud Storage.
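The publishing step of that flow might look like the following sketch. The image name is a placeholder, and the upload commands are commented out because bucket names and credentials are deployment-specific:

```shell
IMAGE=screenly.img                     # placeholder artifact name

# Stand-in for the image produced by the build step.
echo "pretend-disk-image" > "$IMAGE"

# Generate checksums to publish alongside the artifact.
md5sum "$IMAGE"    > "$IMAGE.md5"
sha256sum "$IMAGE" > "$IMAGE.sha256"

# Upload the artifact and its checksums (bucket name is hypothetical):
# aws s3 cp "$IMAGE"        s3://example-bucket/images/
# aws s3 cp "$IMAGE.md5"    s3://example-bucket/images/
# aws s3 cp "$IMAGE.sha256" s3://example-bucket/images/
```

Publishing the checksums next to the image lets anyone who downloads it repeat the same verification the CI system performed.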

(In the early days, the process for building a Raspbian image was more or less undocumented. This has recently changed, and it’s now a lot easier. However, the tooling is still not as mature as that of Ubuntu Core, Yocto or resin.io.)

When we started work on Screenly 2, one of our goals was to build these disk images in a fully automated fashion. This is possible thanks to the ubuntu-image tool, which is part of Ubuntu Core. As a result, there is no longer any manual intervention in the build process.
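With ubuntu-image, a build boils down to a single command driven by a signed model assertion. The file names below are hypothetical, and the exact flags vary between ubuntu-image versions (newer releases use a `snap` subcommand):

```shell
# Build an Ubuntu Core image from a signed model assertion.
# 'screenly.model' and 'screenly.img' are illustrative names.
ubuntu-image -o screenly.img screenly.model
```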

After we build a disk image, we of course test it in order to ensure that it behaves as expected prior to “promoting” it for public use.

Where do we go from here?

We’re pretty happy with our pipeline at the moment. While it still takes some time to build these images, the process is now fully automated, everything is derived from version-controlled software, and no untracked changes can make it into the disk image.

In the near future, we intend to improve this workflow further by running automatic vulnerability scans on these disk images as part of the build pipeline, in order to proactively catch vulnerabilities in the software supply chain.

It should also be said that, since we are using Ubuntu Core, devices with the disk image installed will be automatically and remotely updated whenever there is a new release.

What about Screenly OSE?

The production of disk images for Screenly OSE is unfortunately a bit less sophisticated at the moment; it is still very much a manual process. The process for creating a disk image is outlined here. Once the image has been created, there are separate steps for creating a NOOBS image.

If you or someone you know wants to help us improve our build flow for Screenly OSE images, let us know. We would love some community help.