Immutable A/B system partitions with NixOS for over-the-air updates

📆 Mon Nov 03 2025 by Jacek Galowicz
(11 min. reading time)

This is the third part of our four-part article series in which we take a closer look at how easy it is to create minimal, immutable, self-updating GNU/Linux appliance systems with NixOS.

In the different parts of the article series, we cover the following topics step by step:

  1. NixOS appliance images with systemd-repart
  2. Minimizing NixOS images
  3. Immutable A/B system partitions with NixOS for over-the-air updates (👈 this article)
  4. Cross-compiling the image for other platforms (TBD)

Self-updating NixOS appliances with systemd-sysupdate

After creating a small appliance image with a desktop in the last article, we will now transform it into a self-updating system using systemd-sysupdate. This happens in essentially two steps, which we also described in the NixCon 2025 lightning talk about NixOS appliances with OTA updates via systemd-repart and systemd-sysupdate:

Title Slide of the NixCon 2025 lightning talk about NixOS appliances with systemd-repart and systemd-sysupdate

Everything revolves around a changed partition layout. In part 1 of this blog series, we built a system that consists only of a boot partition, a read-only Nix store partition, and the user data partitions that automatically fill up the rest of the available disk space:

Partitions before and after the first boot in our article series part 1

In this part 3 of the series, we extend the partitioning layout as shown in the lightning talk slide:

The change of the partitioning scheme between first boot and updates

Let’s go through the changes step by step. The full code is in this repository: https://github.com/applicative-systems/nixos-appliance-ota-update

Preparing systemd-repart for the new partitioning scheme

Let’s look at the image.repart attribute set excerpt of the full image.nix configuration file that describes all new partitions. The inline comments highlight the changes compared to the first article in the series.

image.repart =
  let
    inherit (pkgs.stdenv.hostPlatform) efiArch;
    size = "2G";
  in
  {
    name = config.system.image.id;
    split = true; # New, explained later

    partitions = {
      esp = {
        contents = {
          "/EFI/BOOT/BOOT${lib.toUpper efiArch}.EFI".source =
            "${pkgs.systemd}/lib/systemd/boot/efi/systemd-boot${efiArch}.efi";

          "/EFI/Linux/${config.system.boot.loader.ukiFile}".source =
            "${config.system.build.uki}/${config.system.boot.loader.ukiFile}";

          # New: Not necessary, but w/o timeout there would be no select screen
          "/loader/loader.conf".source = builtins.toFile "loader.conf" ''
            timeout 20
          '';
        };
        repartConfig = {
          Type = "esp";
          Label = "boot";
          Format = "vfat";
          SizeMinBytes = "200M";
          SplitName = "-";
        };
      };
      nix-store = {
        storePaths = [ config.system.build.toplevel ];
        stripNixStorePrefix = true;
        repartConfig = {
          Type = "linux-generic";
          # New: Image version in partition label
          Label = "nix-store_${config.system.image.version}";
          # New: Bigger than necessary to prepare for
          #      growth in future updates
          Minimize = "off";
          SizeMinBytes = size;
          SizeMaxBytes = size;
          Format = "squashfs";
          ReadOnly = "yes";
          # New: Split name, explained later
          SplitName = "nix-store";
        };
      };

      # Prepared empty space for system partition B
      empty.repartConfig = {
        Type = "linux-generic";
        Label = "_empty";
        Minimize = "off";
        SizeMinBytes = size;
        SizeMaxBytes = size;
        SplitName = "-";
      };

      # Not self-growing this time, for simplicity reasons
      # See part 1 to re-enable
      root.repartConfig = {
        Type = "root";
        Format = "ext4";
        Label = "root";
        Minimize = "off";

        SizeMinBytes = "5G";
        SizeMaxBytes = "5G";
        SplitName = "-";
      };
    };
  };

There are significant changes compared to the partition description in part 1:

  • split = true additionally emits every partition as a separate image file.
  • The ESP gets a loader.conf with a timeout so that systemd-boot shows a boot selection screen.
  • The Nix store partition label now carries the image version (nix-store_1, nix-store_2, and so on).
  • The Nix store partition has a fixed size of 2G to leave headroom for future, larger updates.
  • An equally sized empty partition is reserved as the B system partition.
  • The root partition no longer grows automatically, for simplicity (see part 1 to re-enable this).

What does image splitting mean? Let’s have a look at the output of the image build target in our project:

$ nix build .#image-v1

$ du -sh result/*
372M    result/appliance_1.nix-store.raw
421M    result/appliance_1.raw
4.0K    result/repart-output.json

The appliance_1.raw image file contains the full image and the appliance_1.nix-store.raw file only contains the Nix store part. For updates, we would only provide the smaller Nix store part and the kernel image files!

To bootstrap a system, we would use the full appliance_1.raw image, which is bootable as it is.
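
To write it to real hardware, something like the following would do (a sketch; /dev/sdX is a placeholder for the target disk, so double-check the device path before running):

$ sudo dd if=result/appliance_1.raw of=/dev/sdX bs=4M status=progress conv=fsync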

Preparing update packages

For updates, we need the appliance_1.nix-store.raw file together with the new kernel image. Then, we instruct systemd-sysupdate to follow a three-step procedure for each update:

  1. Fill the empty space of the B partition (or reuse the A/B partition of an old version) with the content of the newer appliance_2.nix-store.raw image.
  2. Copy the latest UKI kernel file into the ESP partition (and remove old kernels).
  3. Reboot.

The new Nix store partition and UKI kernel file will be served via HTTP(S) later. To build these artifacts as part of the system, we simply add a new NixOS module called update-package.nix to our system description:

{ config, pkgs, ... }:

let
  inherit (config.system) build;
  inherit (config.system.image) version id;
in

{
  config.system.build.sysupdate-package =
    pkgs.runCommand "sysupdate-package-${config.system.image.version}" { }
      ''
        mkdir $out
        cp ${build.uki}/${config.system.boot.loader.ukiFile} $out/
        cp ${build.image}/${id}_${version}.nix-store.raw $out/
        cd $out
        sha256sum * > SHA256SUMS
      '';
}

This is an output target that we can now build from our flake like this:

$ ls $(nix build .#nixosConfigurations.appliance.config.system.build.sysupdate-package --print-out-paths)
appliance_1.efi  appliance_1.nix-store.raw  SHA256SUMS

$ cat result/SHA256SUMS
8a8729db693e57ade3aca923348114286b1bc76288f9034da598f8d6b388afd2  appliance_1.efi
dc3a23d1a5f56b258606718dfbe27b23a2fb5cced909b25bc2423588cd8cc14d  appliance_1.nix-store.raw

The SHA256SUMS file is necessary because systemd-sysupdate looks for this file on the update server. We can also see that the _1 suffix in the file name carries the image version. This is also part of the interface that systemd-sysupdate expects: this way, we can host multiple versions in the same folder and have the update daemon itself determine whether there are updates newer than the currently running system.
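
An update folder on the server hosting two versions could therefore look like this (a hypothetical layout following the naming scheme above):

updates/
├── SHA256SUMS
├── appliance_1.efi
├── appliance_1.nix-store.raw
├── appliance_2.efi
└── appliance_2.nix-store.raw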

Configuring the update mechanism

Now that we have a bootable system image and a comfortable way to create the Nix store partition and UKI image for the next system updates, we need to educate our system on what updates look like.

For this purpose, we add a new NixOS module update.nix:

{ pkgs, config, ... }:
{
  systemd.sysupdate = {
    enable = true;

    transfers =
      let
        commonSource = {
          # For demo purposes, the VM will serve its own updates.
          # In reality, this would be "https://updates.company.org"
          Path = "http://localhost/";
          Type = "url-file";
        };
      in
      {
        "10-nix-store" = {
          Source = commonSource // {
            # The @v pattern matches on the version numbers
            MatchPattern = [ "${config.system.image.id}_@v.nix-store.raw" ];
          };

          Target = {
            # We only have an A and a B partition
            InstancesMax = 2;

            Path = "auto";
            # detect and create partitions with this versioned label
            MatchPattern = "nix-store_@v";
            Type = "partition";
            ReadOnly = "yes";
          };

          # This is about signature verification with GPG.
          # Disabled in this example.
          Transfer.Verify = "no";
        };

        "20-boot-image" = {
          Source = commonSource // {
            MatchPattern = [ "${config.boot.uki.name}_@v.efi" ];
          };
          Target = {
            # only keep 2 kernel images in the ESP partition
            InstancesMax = 2;
            MatchPattern = [ "${config.boot.uki.name}_@v.efi" ];

            Mode = "0444";
            # new kernels will be added to this folder,
            # old kernels removed
            Path = "/EFI/Linux";
            PathRelativeTo = "boot";

            Type = "regular-file";
          };

          Transfer.Verify = "no";
        };
      };
  };
}

There are many systemd-sysupdate-specific options here, which are all explained in more detail in the systemd-sysupdate.d documentation.

This is everything we need to describe to systemd-sysupdate: where to download updates from (the Source sections), how the @v pattern in file and partition names encodes the version, and where each artifact ends up on the device (the Target sections).
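
Under the hood, the NixOS options above are rendered into plain systemd-sysupdate transfer definitions under /etc/sysupdate.d/. As a sketch, the first transfer would come out roughly like this, given that our image id is appliance:

[Source]
Type=url-file
Path=http://localhost/
MatchPattern=appliance_@v.nix-store.raw

[Target]
Type=partition
Path=auto
MatchPattern=nix-store_@v
ReadOnly=yes
InstancesMax=2

[Transfer]
Verify=no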

Serving updates

Normally, we would set up a web server somewhere and serve the files from there.

To make this guide more standalone, we let the demo VM serve its own updates by adding this snippet to its configuration. In the example repository, this happens here:

services.lighttpd = {
  enable = true;
  document-root = inputs.self.packages.${system}.update-v2;
};

The update-v2 package is version 2 of our demo image. It contains a folder with the second version of the Nix store image, the UKI file, and the SHA256SUMS file, so we can point the web server's document root directly at it.

A production server would, of course, serve multiple versions at the same time. Serving more images simply means accumulating more versions in the same folder and providing a SHA256SUMS file that covers all of them.
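
Regenerating the combined checksum file on the server could then look like this (assuming a hypothetical serving folder /srv/updates):

$ cd /srv/updates
$ sha256sum *.efi *.raw > SHA256SUMS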

Seeing updates in action

To test the update process, we can run our VM from the repository like this:

nix run

The VM will look like this after boot:

The home screen of our appliance demo system

To make the effect of updates more obvious, we encoded the system version in the background image (implemented here).

First, let's have a look at the initial partition table:

The appliance's initial partition table after first boot

We can see that the B partition is still empty. This way, we have space to write the first update’s content into this partition.
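
Thanks to the partition labels from our repart configuration, this is easy to inspect on the running system. An lsblk query would show something roughly like this (sizes and device names are illustrative):

$ lsblk -o NAME,SIZE,LABEL
NAME     SIZE LABEL
vda      9.5G
├─vda1   200M boot
├─vda2     2G nix-store_1
├─vda3     2G _empty
└─vda4     5G root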

Normally, updates would happen in the background via the systemd-sysupdate service timer:

systemd-sysupdate service timer status

This timer triggers daily (the interval is freely configurable, of course) and downloads the latest update in the background. Automatic updates would typically happen at night (also configurable, see options).
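
If we wanted to pin automatic updates to a specific time of night, one way would be a drop-in override of the timer unit (a sketch using the generic NixOS systemd timer options; the empty string first clears the upstream OnCalendar default):

systemd.timers."systemd-sysupdate".timerConfig = {
  # Reset the upstream default, then trigger daily at 03:00
  OnCalendar = [ "" "03:00" ];
};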

To try out and test the update procedure right now, we can kick off the update process using these commands:

# updatectl           # show current version
# updatectl check     # check if there are updates on the download server
# updatectl update    # Perform the update

The manual updatectl update output

The update takes a short moment, and after that, we can see how the partition table changed:

The partition table after the update

We can now reboot the machine and select version 1 or 2 of our appliance system:

The boot selection with 2 available system versions

After booting into version 2, we can even remove version 1 with the vacuum subcommand.
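
In the same root shell as before, that is simply:

# updatectl vacuum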

After vacuuming the old update we get the empty space back

Note that this step is typically not necessary: a future version 3 would simply be written over version 1, so there is no need to keep one system partition empty.

Summary and outlook

Great, our project is now able to:

  • build a complete, bootable appliance image with an A/B partition layout,
  • build small update packages consisting of only the Nix store partition and the UKI kernel,
  • serve these updates over HTTP(S), and
  • download and apply them automatically with systemd-sysupdate, keeping the old version selectable at boot.

And all this can be tested in a VM quickly with short iteration cycles before we put everything on real embedded hardware.

The final repository is here on GitHub: https://github.com/applicative-systems/nixos-appliance-ota-update/

What to do from here?

What to do from here with our NixOS appliance

In production, we would of course want cryptographically signed images, which systemd-sysupdate supports out of the box. We would also want to use hardware capabilities to enroll secure file system encryption for all user partitions at first boot, which systemd also supports out of the box. To guarantee file system integrity, we would additionally use verity partitions - again, supported by systemd out of the box. The last thing that should not be missing in production is automatic boot assessment, which gives us a robust failover mechanism in case an update does not boot.
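
As a first step in that direction, we would flip the Verify switch that we disabled in update.nix back on; systemd-sysupdate then additionally fetches a detached GPG signature for the SHA256SUMS file and refuses unsigned updates (a sketch, applied per transfer):

# in update.nix, for each transfer:
Transfer.Verify = "yes";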

In the next and last part of this series, we are going to demonstrate how to cross-compile our appliance for different CPU architectures. The code is already available in the project, and we will provide a deep dive about how it works! Stay tuned!

We help many customers transition from other Linux-based solutions to NixOS or improve their existing NixOS-based solutions. From that experience, we can help you copy the successful patterns of winning organizations and avoid the patterns that have not worked well elsewhere, instead of having to make this experience from scratch. Whether you just need a quick consultation on how to build something with Nix or we can help you by lending developer time, schedule a quick call with us or e-mail us.


About Jacek Galowicz

Jacek is the founder of the Nixcademy and interested in functional programming, controlling complexity, and spreading Nix and NixOS all over the world. He also wrote a book about C++ and gave university lectures about software quality.
