Faster and cheaper NixOS integration tests with containers

📆 Fri Mar 13 2026 by Jacek Galowicz
(6 min. reading time)

The NixOS integration test framework is one of the killer features of the Nix ecosystem. As we explored in part 1 and part 2 of our test driver blog series, it allows developers to declaratively spin up entire virtual networks, run complex test scripts in Python, and verify system behavior reliably in a strictly isolated environment - considerably faster than other test frameworks and much easier to set up.

However, until now, this framework has relied exclusively on QEMU virtual machines. QEMU provides excellent isolation, but hardware virtualization comes with an undeniable cost: Booting many VMs incurs the runtime and memory overhead of virtualizing an entire guest system with its own kernel, hardware models, and so on, and it needs hardware virtualization support to run at acceptable speed. This puts a strain on the build infrastructure that runs these tests at scale and slows down CI pipelines. On top of that, tests typically don't even need tight VM isolation (at least not for security): virtualization makes tests slower without buying them anything, and it constrains our Nix builders to machines with KVM support, which increases infrastructure cost.

Today, we are thrilled to announce a lightweight, blazing-fast alternative built directly into the NixOS test driver: The new systemd-nspawn-based container backend for integration tests! Want to see how to run tests with multiple NixOS hosts within just a few seconds? Keep reading.

Why Container Tests Change the Game

We already had a look at running NixOS from any Linux distro in systemd-nspawn containers in an earlier blog post. For testing, containers have significant advantages over full virtual machines:

  • Faster & lighter on memory: Container tests boot significantly faster. We have seen up to 25% improvement in test execution speed (complex tests muddy the picture, and the VMs could be optimized further, but this number held up across many tests). Containers also require less memory, which makes it feasible to run hundreds of them at the same time.
  • Cheaper infrastructure: Because containers use the host kernel and avoid virtualization overhead, you can run massive test suites on cheap VMs. Before, we needed KVM support, which implies more expensive bare-metal machines or specialized VM instances with nested virtualization support.
  • Simplified hardware passthrough (e.g., CUDA/GPUs): PCI devices like GPUs can be passed through to virtual machines, but such passthrough gives the VM exclusive access to the device. Containers instead allow direct bind-mounting of host device nodes, sharing them between the host and multiple containers. This simplifies testing hardware-dependent code like CUDA and other GPU workloads.

From a Customer Need to a Community Feature

This new backend was driven by a very specific, real-world requirement. One of our customers needed to run NixOS integration tests for their production services that rely on GPUs for AI workloads. Because traditional PCI-slot-based hardware passthrough to QEMU virtual machines did not fit their requirements, testing these GPU-accelerated services within the standard framework was a major roadblock.

Together, we discovered that extending the NixOS test driver with container support would be the ideal solution. Since containers can easily bind-mount host device nodes, this approach would grant the test environment direct access to the required GPU hardware.

Recognizing that this would not only move their internal projects forward but also be highly useful to the broader community, the customer contracted us to build and upstream the feature. We implemented the systemd-nspawn backend and guided it through the upstreaming process into nixpkgs. The customer was able to seamlessly test their GPU services without the burden of maintaining a custom fork, and the wider Nix ecosystem gained a powerful, lightweight testing alternative.

Writing a container test

Note: To try the container backend, please use our nixpkgs branch nixos-test-containers in https://github.com/applicative-systems/nixpkgs. We expect these changes to be merged very soon; this note will disappear once they land.

Migrating your existing tests to use containers is simple. You do not have to rewrite your test logic; you just flip a switch in the declarative infrastructure part of your Nix code. Instead of using nodes, use containers:

{
  name = "ping test";

  # We're using `containers` instead of `nodes` here!
  # Nothing else changes
  containers = {
    machine1 = { };
    machine2 = { };
  };

  testScript = ''
    start_all()

    machine1.succeed("ping -c 1 machine2")
    machine2.succeed("ping -c 1 machine1")
  '';
}

Check out this test in action, completing almost instantly (you can also view the full git repository of this ping example):

As you can see, this test boots two containers, lets them ping each other, and tears everything down again within roughly three seconds!
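If you want to run such a test from your own repository, one way is to wrap the test attribute set in a flake check via nixpkgs' runNixOSTest helper. The sketch below assumes the container backend plugs into this standard entry point and pins the branch mentioned above; adjust the input URL once the feature is merged into nixpkgs proper:

{
  # Assumption: the branch name below matches the note above and will
  # eventually be replaced by a regular nixpkgs channel.
  inputs.nixpkgs.url = "github:applicative-systems/nixpkgs/nixos-test-containers";

  outputs = { self, nixpkgs }:
    let
      pkgs = nixpkgs.legacyPackages.x86_64-linux;
    in
    {
      checks.x86_64-linux.ping = pkgs.testers.runNixOSTest {
        name = "ping-test";

        # Containers instead of nodes - the rest is a regular NixOS test.
        containers = {
          machine1 = { };
          machine2 = { };
        };

        testScript = ''
          start_all()
          machine1.succeed("ping -c 1 machine2")
          machine2.succeed("ping -c 1 machine1")
        '';
      };
    };
}

With this in place, nix flake check builds and runs the test like any other check.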

Configuring the Nix daemon for container tests

Because containers utilize Linux namespaces instead of full virtual machines, your host machine’s Nix daemon needs permission to allocate user IDs and manage cgroups. Before running a container test, ensure your nix.conf or NixOS configuration includes the following settings:

{
  nix.settings = {
    auto-allocate-uids = true;
    experimental-features = [ "auto-allocate-uids" "cgroups" ];
    extra-system-features = [ "uid-range" ];

    # Only needed for networking between VMs and containers
    sandbox-paths = [ "/dev/net" ];
  };
}
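If you manage the daemon through a plain /etc/nix/nix.conf rather than a NixOS module, the equivalent settings look like this (a sketch; keep any experimental features you already enable, since the line below replaces the whole list):

auto-allocate-uids = true
experimental-features = auto-allocate-uids cgroups
extra-system-features = uid-range

# Only needed for networking between VMs and containers
sandbox-paths = /dev/net

Restart the Nix daemon after changing the file so the new settings take effect.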

Try it yourself

Want to see exactly how this was built under the hood, or take it for a spin? You can review the complete implementation and community discussion in the official container PR on GitHub, as well as the accompanying documentation PR.

Limitations

The systemd-nspawn-based backend is not a silver bullet that completely replaces QEMU. Containers use the host kernel, which means isolation relies on Linux namespaces rather than a hypervisor, and that comes with specific security boundaries. Use containers for most of your fast networking and service tests, and fall back to QEMU VMs when you hit these constraints:

  • You cannot test kernel-specific changes (e.g., kernel modules) in container tests.
  • Containers cannot run SUID binaries. (These are increasingly frowned upon and may disappear in the future anyway.)
  • Container tests do not support graphical applications (or taking screenshots of them) out of the box.
  • Containers running in the Nix sandbox don’t support many of the systemd hardening options (ProtectSystem=, etc.) used by many NixOS modules.
  • Containers running in the sandbox have limited access to /dev, making it necessary to pass in needed paths explicitly, e.g., --option sandbox-paths /dev/net for VPN tests that create /dev/net/tun.
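The sandbox-paths comment in the daemon configuration above hints that VMs and containers can share a test network. Assuming both backends can indeed be combined in a single test (an assumption based on that note, not something the source spells out), a hybrid test could keep a QEMU VM only for the one machine that hits a container limitation, for example one that loads a kernel module:

{
  name = "hybrid test";

  # Full QEMU VM for the machine that needs its own kernel.
  nodes.server = { ... }: {
    boot.kernelModules = [ "wireguard" ];
    services.nginx.enable = true;
    networking.firewall.allowedTCPPorts = [ 80 ];
  };

  # Lightweight container for the plain HTTP client.
  containers.client = { pkgs, ... }: {
    environment.systemPackages = [ pkgs.curl ];
  };

  testScript = ''
    start_all()
    server.wait_for_unit("nginx.service")
    client.succeed("curl --fail http://server")
  '';
}

This way only the kernel-dependent machine pays the virtualization cost, while the client stays fast and cheap.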

Summary

The new systemd-nspawn container backend brings a new level of speed and flexibility to the NixOS integration test driver. By avoiding the overhead of full virtualization, test suites run significantly faster, consume fewer CI resources, and can effortlessly access host hardware like GPUs for advanced testing scenarios. In the next post, we will demonstrate how to run CUDA workloads in the test driver.

Special thanks go to Nixcademy trainer @jfly and Applicative Systems devops consultant @kmein for implementing and upstreaming the new container backend, community member @Ma27 for reviewing our pull requests and pointing out further optimization potential, and the Clan project for their earlier version.

Beyond the technical achievements, this project highlights a fantastic synergy: We are incredibly proud of how well specific industry requirements can align with and enrich the open-source ecosystem. At Applicative Systems/Nixcademy, we love being the bridge that connects these two worlds, turning practical business challenges into robust upstream solutions that the entire community can rely on.


About Jacek Galowicz

Jacek is the founder of Nixcademy and is interested in functional programming, controlling complexity, and spreading Nix and NixOS all over the world. He also wrote a book about C++ and has given university lectures on software quality.
