Last weekend was spent at one of the world’s biggest open source conferences, FOSDEM. You can check out last year’s review to get an idea of the scale of the event. Since there’s no registration process, it’s difficult to estimate how many people attend but given how many rooms there are, and how full they are, it’s easily several thousand. I was impressed last year at how smoothly things went and the same was true this year.
The main reason to attend this time was to run a demo of MirageOS from an ARM board — one of the main advances since the previous conference. I looked over all the things we’d achieved since last year and put together a demo that showcases some of the capabilities as well as being fun.
The demo was to serve the 2048 game from a Unikernel running on a Cubieboard2 with its own access point. When people join the wifi network, they get served a static page and can begin playing the game immediately.
The components I needed for the demo were:
Code for making a static website — Since the game is completely self-contained (one HTML file and one JS file), I only needed to convert a static website into a unikernel. That’s trivial and many people have done it before.
A Cubieboard with a wifi access point — There are pre-built images on the MirageOS website, which make part of this easy. However, getting the wifi access point up involves a few more steps.
The first piece was straightforward and indeed, I had a working unikernel serving the 2048 game within minutes (the Unix version on my laptop). The additional factors around the ARM deployment were where things became a little more involved. Although this was technically straightforward to set up, it still took a while to get all the pieces together. A more detailed description of the steps is in my fosdemo repository; in essence, it revolves around configuring the wifi access point and setting up a bridge (thanks to Mindy, Magnus and David for getting this working).
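To give a flavour of what configuring the access point and bridge involves (the fosdemo repository has the real steps), a Debian-style setup typically pairs a bridge definition with a hostapd configuration along these lines. The interface names, addresses and SSID below are illustrative assumptions, not the exact demo values:

```
# /etc/network/interfaces (sketch): define a bridge for the unikernel to sit on
auto br0
iface br0 inet static
    address 10.0.0.1
    netmask 255.255.255.0
    bridge_ports eth0

# /etc/hostapd/hostapd.conf (sketch): a simple open access point on that bridge
interface=wlan0
bridge=br0
ssid=mirage-demo
hw_mode=g
channel=6
```

With something like this in place, clients joining the wifi network land on the bridge and can reach the unikernel directly.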
Once this was all up and running, it was a simple matter to configure the board to boot the unikernel on startup, so that no manual intervention would be required to set things up at the booth.
I gave the demo at the Xen booth and it went very well. There was a small crowd throughout my time at the booth and I’m convinced that the draw of a board with glowing LEDs should not be underestimated. Many people were happy to connect to the access point and download the game to their browser, but there were two main things I learnt.
Firstly, demos involving games will work if people actually know the game. This is obvious, but I’d assumed that most people had already played 2048 — especially the crowd I’d expect to meet at FOSDEM. It turned out that around a third of people had no idea what to do when the game loaded onto their browser. They stared blankly at it and then blankly at me. Of course, it was trivial to get them started and they were soon absorbed by it — but it still felt like some of the ‘cool-factor’ had been lost.
The second thing was that I tried to explain too much to people in much too short a time. This particular demo involved Xen unikernels, js_of_ocaml and a Cubieboard2 with a wifi access point. There’s a surprisingly large amount of technology there, which is difficult to explain to a complete newcomer within one or two minutes. When it was obvious someone hadn’t heard of unikernels, I focused on the approach of library operating systems and the benefits that Mirage brings. If a visitor was already familiar with the concept of unikernels, I could describe the rest of the demo in more detail.
Everything else did go well and next time I’d like to have a demo like this running with Jitsu. That way, I could configure it so that a unikernel would spin up, serve the static page and then spin down again. If we can figure out the timing, then providing stats in the page about the lifetime of that unikernel would also be great, but that’s for another time.
One of the things we’re keen to work towards is the idea of personal clouds. It’s not a stretch to imagine that a Cubieboard2, running the appropriate software, could act as one particular node in a network of your own devices. In this instance it’s just hosting a fun and simple game but more complex applications are also possible.
Of course, there was lots more going on than just my demo and I had a great time attending the talks. Some in particular that stood out to me were those in the open source design room, which was a new addition this year. It was great to learn that there are design people out there who would like to contribute to open source (get in touch, if that’s you!). I also had a chance to meet (and thank!) Mike McQuaid in his Homebrew talk. FOSDEM is one of those great events where you can meet in person all those folks you’ve only interacted with online.
Overall, it was a great trip and I thoroughly recommend it if you’ve never been before!
The mission of Nymote is to enable the creation of resilient decentralised systems that incorporate privacy from the ground up, so that users retain control of their networks and data. To achieve this, we reconsider all the old assumptions about how software is created in light of the problems of the modern, networked environment. Problems that will become even more pronounced as more devices and sensors find their way into our lives.
We want to make it simple for anyone to be able to run a piece of the cloud for their own purposes and the first three applications Nymote targets are Mail, Contacts and Calendars, but to get there, we first have to create solid foundations.
In order to create applications that work for the user, we first have to create a robust and reliable software stack that takes care of fundamental problems for us. In other words, to be able to assemble the applications we desire, we must first construct the correct building blocks.
We’ve taken a clean-slate approach so that we can build long-lasting solutions with all the benefits of hindsight but none of the baggage. As mentioned in earlier posts, there are three main components of the stack, which are: Mirage (OS for the Cloud/IoT), Irmin (distributed datastore) and Signpost (identity and connectivity) - all built using the OCaml programming language.
As you’ve already noticed, there’s a useful acronym for the above tools — MISO. Each of the projects mentioned is a serious undertaking in its own right and each is likely to be impactful as a stand-alone concept. However, when used together we have the opportunity to create applications and services with high levels of security, scalability and stability, which are not easy to achieve using other means.
In other words, MISO is the toolstack that we’re using to build Nymote — Nymote is the decentralised system that works for its users.
Each of the projects is at a different phase but they have all made great strides over the last year.
Mirage — a library operating system that constructs unikernels — is the most mature part of the stack. I previously wrote about the Mirage 1.0 release and only six months later we had an impressive 2.0 release, with continuing advances throughout the year. We achieved major milestones such as the ability to deploy unikernels to ARM-based devices, as well as a clean-slate implementation of the transport layer security (TLS) protocol.
In addition to the development efforts, there have also been many presentations to audiences, ranging from small groups of startups all the way to prestigious keynotes with 1000+ attendees. Ever since we’ve had ARM support, the talks themselves have been delivered from unikernels running on Cubieboards and you can see the growing collection of slides at decks.openmirage.org.
All of these activities have led to a tremendous increase in public awareness of unikernels and the value they can bring to developing robust, modern software as well as the promise of immutable infrastructure. As more people look to get involved and contribute to the codebase, we’ve also begun curating a set of Pioneer Projects, which are suitable for a range of skill-levels.
You can find much more information on all the activities of 2014 in the comprehensive Mirage review post. As it’s the most mature component of the MISO stack, anyone interested in the development of code towards Nymote should join the Mirage mailing list.
Irmin — a library to persist and synchronize distributed data structures — made significant progress last year. It’s based on the principles of Git, the distributed version control system, and allows developers to choose the appropriate combination of consistency, availability and partition tolerance for their needs.
Early last year Irmin was released as an alpha with the ability to speak ‘fluent Git’ and by the summer, it was supporting user-defined merge operations and fast in-memory views. A couple of summer projects improved the merge strategies and synchronisation strategies, while an external project — Xenstore — used Irmin to add fault-tolerance.
More recent work has involved a big clean-up in the user-facing API (with nice developer documentation) and a cleaner high-level REST API. Upcoming work includes proper documentation of the REST API, which means Irmin can more easily be used in non-OCaml projects, and full integration with Mirage projects.
Signpost will be a collection of libraries that aims to provide identity and connectivity between devices. Forming efficient connections between end-points is becoming ever more important as the number of devices we own increases. These devices need to be able to recognise and reach each other, regardless of their location on the network or the obstacles in between.
This is very much a nascent project and it involves a lot of work on underlying libraries to ensure that security aspects are properly considered. As such, we must take great care in how we implement things and be clear about any trade-offs we make. Our thoughts are beginning to converge on a design we think will work and that we would entrust with our own data, but we’re treating this as a case of ‘Here Be Dragons’. This is a critical piece of the stack and we’ll share what we learn as we chart our way towards it.
Even though we’re at the design stage of Signpost, we did substantial work last year to create the libraries we might use for implementation. A particularly exciting one is Jitsu — which stands for Just In Time Summoning of Unikernels. This is a DNS server that spawns unikernels in response to DNS requests and boots them in real-time with no perceptible lag to the end user. In other words, it makes much more efficient use of resources and significantly reduces latency of services for end-users — services are only run when they need to be, in the places they need to be.
There’s also been lots of effort on other libraries that will help us iterate towards a complete solution. Initially, we will use pre-existing implementations but in time we can take what we’ve learned and create more robust alternatives. Some of the libraries are listed below (but note the friendly disclaimers!).
OCaml is a mature, powerful and highly pragmatic language. It’s proven ideal for creating robust systems applications and many others also recognise this. We’re using it to create all the tools you’ve read about so far and we’re also helping to improve the ecosystem around it.
One of the major things we’ve been involved with is the coordination of the OCaml Platform, which combines the OCaml compiler with a coherent set of tools and workflows to be more productive in the language and speed up development time. We presented the first major release of these efforts at OCaml 2014 and you can read the abstract or watch the video.
There’s more to come, as we continue to improve the tooling and also support the community in other ways.
Building blocks are important but we also need to push towards working applications. There are different approaches we’ve taken to this, which include building prototypes, wireframing use-cases and implementing features with other toolstacks. Some of this work is also part of a larger EU funded project* and below are brief summaries of the things we’ve done so far. We’ll expand on them as we do more over time.
Mail - As mentioned above, a prototype IMAP server exists (IMAPlet) which uses Irmin to store data. This is already able to connect to a client to serve mail. The important feature is that it’s an IMAP server which is version controlled in the backend and can expose a REST API from the mailstore quite easily.
Contacts - We first made wireframe mockups of the features we might like in a contacts app (to follow in later post) and then built a draft implementation. To get here, code was first written in OCaml and then put through the js_of_ocaml compiler. This is valuable as it takes us closer to a point where we can build networks using our address books and have the system take care of sharing details in a privacy-conscious manner and with minimal maintenance. The summary post has more detail.
Calendar - This use-case was approached in a completely different way as part of a hackathon last year. A rough but functional prototype was built over one weekend, with a team formed at the event. It was centralised but it tested the idea that a service which integrates intimately with your life (to the point of being very invasive) can provide disproportionate benefits. The experience report describes the weekend and our app — Clarity — won first place. This was great validation that the features are desirable so we need to work towards a decentralised, privacy-conscious version.
The coming year represents the best time to be working on the MISO stack and using it to make Nymote a reality. All source code is publicly available and the projects are varied enough that there is something for everyone. Browse through issues, consider the projects or simply write online and share with us the things you’d like to see. This promises to be an exciting year!
Sign up to the mailing list to keep up to date!
* NB The research leading to these results has received funding from the European Union’s Seventh Framework Programme FP7/2007-2013 under the UCN project, grant agreement no 611001.
Many people have now set up unikernels for blogs, documenting their experiences for others to follow. Even more important is that people are going beyond static sites to build unikernels that provide more complicated services and solve real-world problems.
To help newcomers get started, there are now even more posts that use different tools and target different deployment methods. Below are summaries of some of the posts I found interesting and that will make it easier for you to try out different ways of creating and deploying your unikernels.
Mindy picked up where the first set of instructions finished and described her work to get an Octopress blog running on Amazon EC2. As one of the first people outside the core team to work on this, she had a lot of interesting experiences — which included getting into the Mirage networking stack to debug an issue and submit a bugfix! More recently, she also wrote a couple of excellent posts on why she uses a unikernel for her blog. These posts cover the security concerns (and responsibility) of running networked services on today’s Internet and the importance of owning your content — both ideas are at the heart of the work behind Nymote and are well worth reading.
Ian took a different path to AWS deployment by using Vagrant and Test Kitchen to get his static site together and build his unikernel, and then Packer to create the images for deployment to EC2. All succinctly explained with code available on GitHub for others to try out!
Toby wanted to put together a blog that was a little more complicated than a traditional static site, with specific features like subdomains based on tags and the ability to set future dates for posts. He also pulled in some other libraries so he can use Mustache for server-side rendering, where his blog posts and metadata are stored as JSON and rendered on request.
Chris saw others working to get unikernel blogs on EC2 and decided he’d try getting his up and running on Linode instead. He is the first person to deploy his unikernel to Linode and he provided a great walkthrough with helpful screenshots, as well as brief notes about the handful of differences compared with EC2. Chris also wrote about the issue he had with clean URLs (i.e. serving /about/index.html when a user visits /about/) — he describes the things he tried out until he was finally able to fix it.
Phil focused on getting unikernels running on Cubieboards, which are ARM-based development boards, similar to the Raspberry Pi. He starts by taking Mirage’s pre-built Cubieboard images — which make it easy to get Xen and an OCaml environment set up on the board — and installing them on the Cubieboard. He also noted the issues he came across along with the simple tweaks he made to fix them and finally serves a Mirage hello world page.
Static sites have become the new ‘hello world’ app. They’re simple to manage, low-risk and provide lots of opportunities to experience something new. These aspects make them ideal for discovering the benefits (and trade-offs) of the unikernel approach and I look forward to seeing what variations people come up with — for instance, there aren’t any public instructions for deploying to Rackspace so it would be great to read about someone’s experiences there. However, there are many other applications that also fit the above criteria of simplicity, low risk and plentiful learning opportunities.
Thomas Leonard decided to create a unikernel for a simple REST service for queuing package uploads for 0install. His post takes you from the very beginning, with a simple hello world program running on Xen, all the way through to creating his REST service. Along the way there are lots of code snippets and explanations of the libraries being used and what they’re doing. This is a great use-case for unikernels and there are a lot of interesting things to take from this post, for example the ease with which Thomas was able to find and fix bugs using regular tools. There’s also lots of information on performance testing and optimising the unikernel, which he covers in a follow-up post, and he even built tools to visualise the traces.
Of course, there’s much more activity out there than described in this post as people continually propose ideas on the Mirage mailing list — both for things they would like to try out and issues they came up against. In my last post, I pointed out that the workflow is applicable to any type of unikernel and as Thomas showed, with a bit of effort it’s already possible to create useful, real-world services using the many libraries that already exist. There’s also a lot of scaffolding in the mirage-skeleton repo that you can build on which makes it even easier to get involved. If you want to dive deeper into the libraries and perhaps learn OCaml, there are lots of resources online and projects to get involved with too.
Now is a great time to try building a unikernel for yourself and as you can see from the posts above, shared experiences help other people progress further and branch out into new areas. When you’ve had a chance to try something out please do share your experiences online!
Mirage has reached a point where it’s possible to easily set up end-to-end toolchains to build unikernels! My first use-case is to be able to generate a unikernel which can serve my personal static site but to do it with as much automation as possible. It turns out this is possible with less than 50 lines of code.
I use Jekyll and GitHub Pages at the moment so I wanted a workflow that’s as easy to use, though I’m happy to spend some time up front to set up and configure things. The tools for achieving what I want are in good shape so this post takes the example of a Jekyll site and goes through the steps to produce a unikernel on Travis CI (a continuous integration service) which can later be deployed. Many of these instructions already exist in various forms but they’re collated here to aid this use-case.
I will take you, dear reader, through the process and when we’re finished, the workflow will be as follows:
To achieve this, we’ll first check that we can build a unikernel VM locally, then we’ll set up a continuous integration service to automatically build them for us and finally we’ll adapt the CI service to also deploy the built VM. Although the amount of code required is small, each of these steps is covered below in some detail. For simplicity, I’ll assume you already have OCaml and Opam installed – if not, you can find out how via the Real World OCaml install instructions.
To ensure that the build actually works, you should run things locally at least once before pushing to Travis. It’s worth noting that the mirage-skeleton repo contains a lot of useful, public domain examples and helpfully, the specific code we need is in mirage-skeleton/static_website. Copy both the config.ml and dispatch.ml files from that folder into a new _mirage folder in your repository, then edit config.ml so that the two mentions of ./htdocs are replaced with ../_site. This is the only change you’ll need to make and you should now be able to build the unikernel with the Unix backend. Make sure you have the mirage package installed by running $ opam install mirage and then run:
$ cd _mirage
$ mirage configure --unix
$ make depend   # needed as of mirage 1.2
$ mirage build
$ cd ..
That’s all it takes! In a few minutes there will be a unikernel built on
your system (symlinked as
_mirage/mir-www). If there are any errors, make
sure that Opam is up to date and that you have the latest version of the
static_website files from mirage-skeleton.
If you’d like to see this site locally, you can do so from within the _mirage folder by running the unikernel you just built. There’s more information about the details of this on the Mirage docs site but the quick instructions are:
$ cd _mirage
$ sudo mirage run

# in another terminal window
$ sudo ifconfig tap0 10.0.0.1 255.255.255.0
You can now point your browser at http://10.0.0.2/ and see your site!
Once you’re finished browsing, $ mirage clean will clear up all the generated files.
Since the build is working locally, we can set up a continuous integration system to perform the builds for us.
We’ll be using the Travis CI service, which is free for open-source projects (so this assumes you’re using a public repo). The benefit of using Travis is that you can build a unikernel without needing a local OCaml environment, but it’s always quicker to debug things locally.
Log in to Travis using your GitHub ID which will then trigger a scan of your repositories. When this is complete, go to your Travis accounts page and find the repo you’ll be building the unikernel from. Switch it ‘on’ and Travis will automatically set your GitHub post-commit hook and token for you. That’s all you need to do on the website.
When you next make a push to your repository, GitHub will inform Travis,
which will then look for a YAML file in the root of the repo called
.travis.yml. That file describes what Travis should do and what the build
matrix is. Since OCaml is not one of the supported languages, we’ll be
writing our build script manually (this is actually easier than it sounds).
First, let’s set up the YAML file and then we’ll examine the build script.
The Travis CI environment is based on Ubuntu 12.04, with a
number of things pre-installed (e.g. Git, networking tools, etc.). Travis
doesn’t support OCaml (yet) so we’ll use the
c environment to get the
packages we need, specifically, the OCaml compiler, Opam and Mirage. Once
those are set up, our build should run pretty much the same as it did locally.
For now, let’s keep things simple and only focus on the latest releases
(OCaml 4.01.0 and Opam 1.1.1), which means our build matrix is very simple.
The build instructions will be in the file
_mirage/travis.sh, which we
will move to and trigger from the
.travis.yml file. This means our YAML
file should look like:
language: c
before_script: cd _mirage
script: bash -ex travis.sh
env:
  matrix:
  - MIRAGE_BACKEND=xen DEPLOY=0
  - MIRAGE_BACKEND=unix
The matrix enables us to have parallel builds for different environments and this one is very simple as it’s only building two unikernels. One worker will build for the Xen backend and another will build for the Unix backend. A glance at the _mirage/travis.sh script will clarify what each of these environments translates to. We’ll come back to the DEPLOY flag later on (it’s not necessary yet). Now that this file is set up, we can work on the build script itself.
To save time, we’ll be using an Ubuntu PPA to quickly get pre-packaged versions of the OCaml compiler and Opam, so the first thing to do is define which PPAs each line of the build matrix corresponds to. Since we’re keeping things simple, we only need one PPA that has the most recent releases of OCaml and Opam.
#!/usr/bin/env bash
ppa=avsm/ocaml41+opam11
echo "yes" | sudo add-apt-repository ppa:$ppa
sudo apt-get update -qq
sudo apt-get install -qq ocaml ocaml-native-compilers camlp4-extra opam
[NB: There are many other PPAs for different combinations of OCaml/Opam which are useful for testing]. Once the appropriate PPAs have been set up it’s time to initialise Opam and install Mirage.
export OPAMYES=1
opam init
opam install mirage
eval `opam config env`
We set OPAMYES=1 to get non-interactive use of Opam (it defaults to ‘yes’ for any user input) and if we want full build logs, we could also set OPAMVERBOSE=1 (I haven’t in this example).
The rest should be straightforward and you’ll end up with an
Ubuntu machine with OCaml, Opam and the Mirage package installed. It’s now
trivial to do the next step of actually building the unikernel!
mirage configure --$MIRAGE_BACKEND
mirage build
You can see how we’ve used the environment variable from the Travis file and
this is where our two parallel builds begin to diverge. When you’ve saved
this file, you’ll need to change permissions to make it executable by doing
$ chmod +x _mirage/travis.sh.
That’s all you need to build the unikernel on Travis! You should now commit both the YAML file and the build script to the repo and push the changes to GitHub. Travis should automatically start your first build and you can watch the console output online to check that both the Xen and Unix backends complete properly. If you notice any errors, you should go back over your build script and fix it before the next step.
[NB: If you have a larger site, you may have to use a different file system option for the configuration. Specifically, $ FS=fat mirage configure --$MIRAGE_BACKEND, which will create a disk image of the website content using the FAT file system format (e.g. fat1.img). This means you’ll also need to keep track of the disk image for the deployment stage as your unikernel VM will connect to it. Look at the nymote build script for an example.]
When Travis has finished its builds it will simply destroy the worker and all its contents, including the unikernels we just built. This is perfectly fine for testing but if we want to also deploy a unikernel, we need to get it out of the Travis worker after it’s built. In this case, we want to extract the Xen-based unikernel so that we can later start it on a Xen-based machine (e.g. Amazon, Rackspace or - in our case - a machine on Bytemark).
Since the unikernel VMs are small (only tens of MB), our method for exporting will be to commit the Xen unikernel into a repository on GitHub. It can be retrieved and started later on and keeping the VMs in version control gives us very effective snapshots (we can roll back the site without having to rebuild). This is something that would be much more challenging if we were using the ‘standard’ web toolstack.
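To make the snapshot idea concrete, here is a small sketch (with made-up paths and commit names) of the layout such a deployment repository ends up with: one directory per source commit holding the built VM, plus a latest file naming the most recent build, so rolling the site back is just a change to that pointer.

```shell
# Build a toy deploy-repo layout: one directory per commit, each holding
# the built VM image, plus a 'latest' file naming the newest snapshot.
repo=$(mktemp -d)
for commit in aaa111 bbb222; do
  mkdir -p "$repo/$commit"
  echo "unikernel built from $commit" > "$repo/$commit/mir-www.xen"
  echo "$commit" > "$repo/latest"
done

# The VM to serve is whatever 'latest' points at.
current=$(cat "$repo/latest")
cat "$repo/$current/mir-www.xen"

# Rolling the site back is a one-line change to the pointer.
echo aaa111 > "$repo/latest"
```

Every snapshot stays in version control, so rebuilding is never needed just to restore an older version of the site.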
The deployment step is a little more complex as we have to send the Travis worker a private SSH key, which will give it push access to a GitHub repository. Of course, we don’t want to expose that key by simply adding it to the Travis file so we have to encrypt it somehow.
Travis supports encrypted environment variables. Each repository has its own public key and the Travis gem uses this public key to encrypt data, which you then add to your .travis.yml file for decryption by the worker. This is meant for sending things like private API tokens and other small amounts of data. Trying to encrypt an SSH key isn’t going to work as it’s too large. Instead we’ll use travis-senv, which encodes, encrypts and chunks up the key into smaller pieces and then reassembles those pieces on the Travis worker. We still use the Travis gem to encrypt the pieces and add them to the .travis.yml file.
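The chunking trick is easy to illustrate. The sketch below is not travis-senv itself, just a re-implementation of the idea with standard tools: encode the key, split it into pieces small enough to treat as individual environment variables, then reassemble and decode them on the other side.

```shell
# Create a stand-in for the private key (the real one would be a file
# like ~/.ssh/travis-deploy_dsa; this is fake data for illustration).
key=$(mktemp)
printf '%s\n' "-----BEGIN FAKE KEY-----" \
  "$(head -c 600 /dev/urandom | base64)" \
  "-----END FAKE KEY-----" > "$key"

# "Encrypt side": base64-encode, then split into 100-byte chunks
# (chunk_aa, chunk_ab, ...), each small enough for one env variable.
workdir=$(mktemp -d)
base64 < "$key" | tr -d '\n' > "$workdir/encoded"
( cd "$workdir" && split -b 100 encoded chunk_ )

# "Worker side": concatenate the chunks in order and decode.
cat "$workdir"/chunk_* | base64 -d > "$workdir/reassembled"

# The round trip must be lossless.
cmp -s "$key" "$workdir/reassembled" && echo "round-trip OK"
```

The real tool additionally encrypts each piece with the repository's Travis public key; the split-and-reassemble step is what gets a large key past the size limit on individual encrypted variables.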
While you could give Travis a key that accesses your whole GitHub account, my preference is to create a new deploy key, which will only be used for deployment to one repository.
# make a key pair on your local machine
$ cd ~/.ssh/
$ ssh-keygen -t dsa -C "travis.deploy" -f travis-deploy_dsa
$ cd -
Note that this is a 1024-bit key; if you decide to use a 2048-bit key instead, be aware that Travis sometimes has issues. Now that we have a key, we can encrypt it and add it to the Travis file.
# on your local machine
# install the necessary components
$ gem install travis
$ opam install travis-senv

# chunk the key, add to yml file and rm the intermediate
$ travis-senv encrypt ~/.ssh/travis-deploy_dsa _travis_env
$ cat _travis_env | travis encrypt -ps --add
$ rm _travis_env
travis-senv encrypts and chunks the key locally on your machine, placing its output in a file you decide (_travis_env). We then take that output file and pipe it to the travis ruby gem, asking it to encrypt the input, treating each line as separate and to be appended (-ps) and then actually adding that to the Travis file (--add). You can run $ travis encrypt -h to understand these options. Once you’ve run the above commands, .travis.yml will look as follows.
language: c
before_script: cd _mirage
script: bash -ex travis.sh
env:
  matrix:
  - MIRAGE_BACKEND=xen DEPLOY=0
  - MIRAGE_BACKEND=unix
  global:
  - secure: ".... encrypted data ...."
  - secure: ".... encrypted data ...."
  - secure: ".... encrypted data ...."
  ...
The number of secure variables added depends on the type and size of the key you had to chunk, so it could vary from 8 up to 29. We’ll commit these additions later on, alongside additions to the build script.
At this point, we also need to make a repository on GitHub and add the public deploy key so that Travis can push to it. Once you’ve created your repo and added a README, follow GitHub’s instructions on adding deploy keys and paste in the public key (i.e. the content of travis-deploy_dsa.pub).
Now that we can securely pass a private SSH key to the worker and have a repo that the worker can push to, we need to make additions to the build script.
Since we can set DEPLOY=1 in the YAML file, we only need to make additions to the build script. Specifically, we want to ensure that only the Xen backend is deployed and that only pushes to the repo result in deployments, not pull requests (we do still want builds for pull requests).
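As a stand-alone illustration of that gating logic (the variable values here are made up for the example), the check amounts to a three-way conjunction over the environment variables Travis provides:

```shell
# Deploy only when all three conditions hold: Xen backend, the DEPLOY
# flag set in the build matrix, and a branch push rather than a pull request.
should_deploy () {
  [ "$MIRAGE_BACKEND" = "xen" ] &&
  [ "$DEPLOY" = "1" ] &&
  [ "$TRAVIS_PULL_REQUEST" = "false" ]
}

# A Xen build of a pushed commit deploys...
MIRAGE_BACKEND=xen DEPLOY=1 TRAVIS_PULL_REQUEST=false
should_deploy && echo "xen push: deploy"

# ...but the Unix worker (and any pull request) is build-only.
MIRAGE_BACKEND=unix
should_deploy || echo "unix worker: build only"
```

The Unix worker in the build matrix never sets DEPLOY, so it can run the very same script and simply skip the deployment section.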
In the build script (_mirage/travis.sh), which is being run by the worker, we’ll have to reconstruct the SSH key and configure Git. In addition, Travis gives us a set of useful environment variables, so we’ll use the latest commit hash ($TRAVIS_COMMIT) to name the VM (which also helps us trace which commit it was built from).
It’s easier to consider this section of code at once so I’ve explained the details in the comments. This section is what you need to add at the end of your existing build script (i.e. straight after mirage build).
# Only deploy if the following conditions are met.
if [ "$MIRAGE_BACKEND" = "xen" \
     -a "$DEPLOY" = "1" \
     -a "$TRAVIS_PULL_REQUEST" = "false" ]; then

    # The Travis worker will already have access to the chunks
    # passed in via the yaml file. Now we need to reconstruct
    # the GitHub SSH key from those and set up the config file.
    opam install travis-senv
    mkdir -p ~/.ssh
    travis-senv decrypt > ~/.ssh/id_dsa  # This doesn't expose it
    chmod 600 ~/.ssh/id_dsa              # Owner can read and write
    echo "Host some_user github.com" >> ~/.ssh/config
    echo "  Hostname github.com" >> ~/.ssh/config
    echo "  StrictHostKeyChecking no" >> ~/.ssh/config
    echo "  CheckHostIP no" >> ~/.ssh/config
    echo "  UserKnownHostsFile=/dev/null" >> ~/.ssh/config

    # Configure the worker's git details
    # otherwise git actions will fail.
    git config --global user.email "firstname.lastname@example.org"
    git config --global user.name "Travis Build Bot"

    # Do the actual work for deployment.
    # Clone the deployment repo. Notice the user,
    # which is the same as in the ~/.ssh/config file.
    git clone git@some_user:amirmc/www-test-deploy
    cd www-test-deploy

    # Make a folder named for the commit.
    # If we're rebuilding a VM from a previous
    # commit, then we need to clear the old one.
    # Then copy in both the config file and VM.
    rm -rf $TRAVIS_COMMIT
    mkdir -p $TRAVIS_COMMIT
    cp ../mir-www.xen ../config.ml $TRAVIS_COMMIT

    # Compress the VM and add a text file to note
    # the commit of the most recently built VM.
    bzip2 -9 $TRAVIS_COMMIT/mir-www.xen
    git pull --rebase
    echo $TRAVIS_COMMIT > latest  # update ref to most recent

    # Add, commit and push the changes!
    git add $TRAVIS_COMMIT latest
    git commit -m "adding $TRAVIS_COMMIT built for $MIRAGE_BACKEND"
    git push origin master
    # Go out and enjoy the Sun!
fi
At this point you should commit the changes to
.travis.yml (don’t forget
the deploy flag) and
travis.sh, and push the changes to GitHub.
Everything else will take place automatically and in a few minutes you will
have a unikernel ready to deploy on top of Xen!
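For reference, the deploy flag lives in the build matrix of `.travis.yml`. A minimal sketch (the matrix entries and script name are illustrative, not copied from a real repo):

```yaml
language: c
env:
  global:
    # secure variables holding the travis-senv key chunks go here
  matrix:
    - MIRAGE_BACKEND=unix DEPLOY=0
    - MIRAGE_BACKEND=xen  DEPLOY=1   # only this entry deploys
script: bash -ex travis.sh
```

Only the Xen build with `DEPLOY=1` will pass the guard at the top of the deployment snippet above; the Unix build still runs as a plain test.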
[Pro-tip: if you add
[skip ci] anywhere in your
commit message, Travis will skip the build for that commit.
This is very useful if you’re making minor changes, like updating a README.]
Since I’m still using Jekyll for my website, I made a short script in my
jekyll repository
(deploy-unikernel.sh) that builds the site, commits the
_site folder and pushes to GitHub. I simply run this after I’ve
committed a new blog post and the rest takes care of itself.
```shell
#!/usr/bin/env bash
jekyll build
git add _site
git commit -m 'update _site'
git push origin master
```
Congratulations! You now have an end-to-end workflow that will produce a
unikernel VM from your Jekyll-based site and push it to a repo. If you
strip out all the comments, you’ll see that we’ve written less than 50 lines
of code! Admittedly, I’m not counting the 80 or so lines that came for free
in the *.ml files, but that’s still pretty impressive.
Of course, we still need a machine to take that VM and run it, but that’s a topic for another post. For the time being, I’m still using GitHub Pages, but once the VM is hosted somewhere I’ll make the switch.
Although all the tools already exist to switch now, I’m taking my time so that I can easily maintain the code I end up using.
You may have noticed that the examples here are not very flexible or extensible but that was a deliberate choice to keep them readable. It’s possible to do much more with the build matrix and script, as you can see from the Travis files on my website repo, which were based on those of the Mirage site and Mort’s site. Specifically, you can note the use of more environment variables and case statements to decide which PPAs to grab. Once you’ve got your builds working, it’s worth improving your scripts to make them more maintainable and cover the test cases you feel are important.
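The scripts on those repos use environment variables and case statements along these lines; a minimal sketch, where the variable and PPA names are illustrative rather than taken from any particular repo:

```shell
# Pick an OCaml/OPAM PPA based on a build-matrix variable.
# OCAML_VERSION and the PPA names here are illustrative.
OCAML_VERSION=${OCAML_VERSION:-4.01}
case "$OCAML_VERSION" in
  4.00) ppa=avsm/ocaml40+opam11 ;;
  4.01) ppa=avsm/ocaml41+opam11 ;;
  *)    echo "Unknown OCAML_VERSION: $OCAML_VERSION" >&2; exit 1 ;;
esac
echo "Using ppa:$ppa"
```

Each matrix entry in `.travis.yml` then just sets `OCAML_VERSION`, and the script stays in one place.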
You might have noticed that in very few places in the toolchain above have I
mentioned anything specific to static sites per se. The workflow is simply
(1) do some stuff locally, (2) push to a continuous integration service
which then (3) builds and deploys a Xen-based unikernel. Apart from the
convenient folder structure, the specific work to treat this as a static
site lives in the
*.ml files, which I’ve skipped over for this post.
As such, the GitHub+Travis workflow we’ve developed here is quite general and will apply to almost any unikernel we may want to construct. I encourage you to explore the examples in the mirage-skeleton repo and keep your build scripts maintainable. We’ll be using this workflow again the next time we build unikernel devices.
Acknowledgements: There were lots of things I read over while writing this post but there were a few particularly useful things that you should look up. Anil’s posts on Testing with Travis and Travis for secure deployments are quite succinct (and were themselves prompted by Mike Lin’s Travis post several months earlier). Looking over Mort’s build script and that of mirage-www helped me figure out the deployment steps as well as improve my own script. Special thanks also to Daniel, Leo and Anil for commenting on an earlier draft of this post.
This post was previously published on my personal site.
It’s been just over a week since FOSDEM ended and it was even more hectic than we imagined: thousands of open source developers, dozens of rooms and speakers, and lots of delicious waffles. I’m still in awe that this is a completely volunteer-organised event and that everything appeared to run smoothly, especially since this has to be the only conference I’ve been at where the wifi was usable (and ubiquitous).
The most interesting aspect was how crowded some of the rooms became, and how quickly. For example, the configuration management track was pretty much full throughout the day, with a crowd of people trying to get in. I heard that the Mozilla track was equally busy, as were some other devrooms. This may be an indication of relative popularity, but also of the sheer scale this annual event has reached. It may be outgrowing ULB. Thankfully, videos will be available this year, so I hope I can catch up with the sessions I couldn’t get to! One that I was particularly interested in is the Xen/ARM talk in the Automotive track. Since cars are now getting smarter and Xen works on embedded devices, it would be an excellent use case for Mirage to ensure that the software running in vehicles is safe and does only what it’s supposed to. There were many other Xen talks too, and you can catch up with them on the Xen blog.
The Mail track talks were crowded, and during the Postfix talk we were treated to an interesting review of spam around the globe, which was followed by the Mailpile team announcing their alpha release on stage! The Internet of Things devroom had a number of interesting talks, but there need to be more people thinking about the underlying infrastructure before we can begin building resilient, decentralised networks.
For the Mirage talk, Mort and Anil gave a great demo by building unikernels on stage to show the process in action. They continued the demos at the Xen stall to a number of people including some surprisingly young FOSDEM attendees. We’ll soon be moving our personal websites to become self-hosted unikernels, and from there we can build out more of the Nymote toolstack.
We’ve captured some of the interesting tweets and pictures below and hopefully next year we’ll be speaking at FOSDEM about how we’re using the Nymote toolstack.