Posted on February 18, 2020 by Rickard Nilsson

Exactly one month ago, I announced nixbuild.net. Since then, there has been a lot of work on the functionality, performance and stability of the service. As of today, nixbuild.net is exiting alpha and entering its private beta phase. If you want to try it out, just send me an email.

Today, I’m also launching this blog, which is intended as an outlet for anything related to the service: announcements, demos, technical articles and various tips and tricks. We’ll start out with a proper introduction of nixbuild.net: why it was built, what it can help you with and what the long-term goals are.


Nix has great built-in support for distributing builds to remote machines. You just need to set up a standard Nix environment on your build machines and make sure they are accessible via SSH. Just like that, you can offload your heavy builds to a couple of beefy build servers, saving your poor laptop’s fan from spinning up.
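For reference, a conventional remote-builder entry in /etc/nix/machines might look like this (the hostname, user and key path are illustrative):

```
ssh://builder@build1.example.com x86_64-linux /root/.ssh/builder_key 8 1 big-parallel
```

The fields are the SSH URI, the platform, the SSH private key, the maximum number of parallel builds, the machine’s speed factor, and its supported system features.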

However, just when you’ve tasted those sweet distributed builds you very likely run into the issue of scaling.

What if you need a really big server to run your builds, but only really need it once or twice per day? You’ll be wasting a lot of money keeping that build server available.

And what if you occasionally have lots and lots of builds to run, or if your whole development team wants to share the build servers? Then you probably need to add more build servers, which means more wasted money when they are not used.

So, you start looking into auto-scaling your build servers. This is quite easy to do if you use a cloud provider like AWS, Azure or GCP. But this is where Nix stops cooperating with you. It is really tricky to get Nix to work nicely with an auto-scaled set of remote build machines. Nix has only a very coarse view of the “current load” of a build machine and therefore cannot make informed decisions about exactly how to distribute the builds. If multiple Nix instances (one for each developer in your team) are fighting for the same resources, things get even trickier. It is really easy to end up in a situation where a bunch of heavy builds compete for CPU time on the same build server while the other servers sit idle or run lightweight build jobs.

If you use Hydra, the continuous build system for Nix, you can find scripts for using auto-scaled AWS instances, but it is still tricky to set up. And in the end, it doesn’t work perfectly, since Nix/Hydra has no notion of “consumable” CPU/memory resources, so the build scheduling is somewhat hit-and-miss.

Even if you manage to come up with a solution that can handle your workload in an acceptable manner, you now have a new job: maintaining uniquely configured build servers. Possibly for your whole company.

Through my consulting company, Immutable Solutions, I’ve done a lot of work on Nix-based deployments, and I’ve always struggled with half-baked solutions to the Nix build farm problem. This is how the idea of the service was born — a service that can fill in the missing pieces of the Nix distributed build puzzle and package it as a simple, no-maintenance, cost-effective service.

Who are We?

nixbuild.net is developed and operated by me (Rickard Nilsson) and my colleague David Waern. We both have extensive experience in building Nix-based solutions, for ourselves and for various clients.

We’re bootstrapping, and we are committed to developing and operating the service for the long term. Today, nixbuild.net can be productively used for its main purpose — running Nix builds in a scalable and cost-effective way — but there are lots of things that can (and will) be built on top of and around that core. Read more about this below.

What does nixbuild.net Look Like?

To the end-user, a person or team using Nix for building software, nixbuild.net behaves just like any other remote build machine. As such, you can add it as an entry in your /etc/nix/machines file:

eu.nixbuild.net x86_64-linux - 100 1 big-parallel,benchmark

The big-parallel,benchmark part assigns what Nix calls system features. You can use these as a primitive scheduling strategy if you have multiple remote machines: Nix will only submit builds that have been marked as requiring a specific system feature to machines that are assigned that feature.
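A derivation opts in to such a feature through the requiredSystemFeatures attribute. This sketch (the name and build script are illustrative) would only be dispatched to machines that advertise big-parallel:

```nix
let pkgs = import <nixpkgs> { system = "x86_64-linux"; };

in pkgs.runCommand "heavy-build"
  { requiredSystemFeatures = [ "big-parallel" ]; } ''
    # Some CPU-hungry work would go here
    echo done > $out
  ''
```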

The number 100 in the file above tells Nix that it is allowed to submit up to 100 simultaneous builds to nixbuild.net. Usually, you use this property to balance builds between remote machines, and to make sure that a machine doesn’t run too many builds at the same time. This works OK when you have rather homogeneous builds and only a single Nix client using a set of build servers. If multiple Nix clients use the same set of build servers, this simplistic scheduling breaks down, since a given Nix client loses track of how many builds are really running on a server.

However, when you’re using nixbuild.net, you can set this number to anything, really, since nixbuild.net takes care of the scheduling and scaling on its own, and it will not let multiple Nix clients step on each other’s toes. In fact, each build that runs is securely isolated from other builds and by default gets exclusive access to the resources (CPU and memory) it has been assigned.

Apart from setting up the distributed Nix machines, you need to configure SSH. When you register an account on nixbuild.net, you’ll provide us with a public SSH key. The corresponding private key is used for connecting to nixbuild.net. This private key needs to be readable by the user that runs the Nix build. This is usually the root user, if you have a standard Nix setup where the nix-daemon process runs as the root user.
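A minimal SSH configuration for the root user could look like this (the key path is illustrative):

```
# /root/.ssh/config
Host eu.nixbuild.net
  IdentityFile /root/.ssh/nixbuild_key
```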

That’s all there is to it; now we can run builds using nixbuild.net!

Let’s try building the following silly build, just so we can see some action:

let pkgs = import <nixpkgs> { system = "x86_64-linux"; };

in pkgs.runCommand "silly" {} ''
  n=0
  while ((n < 12)); do
    date | tee -a $out
    sleep 10
    n=$((n + 1))
  done
''
This build will run for 2 minutes and output the current date every ten seconds:

$ nix-build silly.nix
these derivations will be built:
  /nix/store/khvphdj3q7nyim46jk97fjp174damrik-silly.drv
building '/nix/store/khvphdj3q7nyim46jk97fjp174damrik-silly.drv' on 'ssh://eu.nixbuild.net'...
Mon Feb 17 20:53:47 UTC 2020
Mon Feb 17 20:53:57 UTC 2020
Mon Feb 17 20:54:07 UTC 2020

You can see that Nix is telling us that the build is running on nixbuild.net!

The nixbuild.net Shell

nixbuild.net supports a simple shell interface that you can access through SSH. This shell allows you to retrieve information about your builds on the service.

For example, we can list the currently running builds:

$ ssh eu.nixbuild.net shell
nixbuild.net> list builds --running
10524 2020-02-17 21:05:20Z [40.95s] [Running]

We can also get information about any derivation or Nix store path that has been built:

nixbuild.net> show drv /nix/store/khvphdj3q7nyim46jk97fjp174damrik-silly.drv
  path = /nix/store/khvphdj3q7nyim46jk97fjp174damrik-silly.drv
  builds = 1
  successful builds = 1

  out -> /nix/store/8c7sndr3npwmskj9zzp4347cnqh5p8q0-silly

  10524 2020-02-17 21:05:20Z [02:01] [Built]

This shell is under development, and new features are added continuously. A web-based frontend will also be implemented.

The Road Ahead

To finish up this short introduction to nixbuild.net, let’s talk a bit about our long-term goals for the service.

The core purpose of nixbuild.net is to provide Nix users with pay-per-use distributed builds that are simple to set up and integrate into any workflow. The build execution should be performant and secure.

There are a number of features that are basically just nice side effects of the design of nixbuild.net:

  • Building a large number of variants of the same derivation (a build matrix or some sort of parameter sweep) will take the same time as running a single build, since nixbuild.net can run all builds in parallel.

  • Running repeated builds to find issues related to non-determinism/reproducibility will not take longer than running a single build.

  • A whole team/company can share the same nixbuild.net account, letting builds be shared in a cost-effective way. If everyone in a team delegates builds to nixbuild.net, the same derivation never has to be built twice. This is similar to having a shared Nix cache, but avoids having to configure a cache and perform network uploads for each build artifact. Of course, nixbuild.net can be combined with a Nix cache too, if desired.
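The build-matrix point above can be sketched in Nix; the names and the parameter being swept are illustrative. Each variant is an independent derivation, so with enough remote capacity they can all build at once:

```nix
let
  pkgs = import <nixpkgs> { system = "x86_64-linux"; };

  # One derivation per optimization level in the sweep
  variant = opt: pkgs.runCommand "matrix-${opt}" {} ''
    echo "built with -${opt}" > $out
  '';
in map variant [ "O0" "O1" "O2" "O3" ]
```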

Beyond the above, we have lots of thoughts on where we want to take nixbuild.net. I’m not going to enumerate possible directions here and now, but one big area that is particularly suited for nixbuild.net is advanced build analysis and visualisation. The sandbox that has been developed to securely isolate builds from each other also gives us a unique way to analyze exactly how a build behaves. One can imagine being able to give very detailed feedback to users about build bottlenecks, performance regressions, unused dependencies and so on.

With that said, our primary focus right now is to make nixbuild.net a robust workhorse for your Nix builds, enabling you to fully embrace Nix without being limited by local compute resources. Please get in touch if you want to try out nixbuild.net, or if you have any questions or comments!