!oNSIfazDqEcwhcOjSL:matrix.org

disko

361 Members · 93 Servers
disko - declarative disk partitioning - https://github.com/nix-community/disko

2 Jul 2025
@steeringwheelrules:tchncs.de joined the room. 11:38:42
3 Jul 2025
Brisingr (@brisingr05:matrix.org) joined the room. 12:35:47
4 Jul 2025
Daniel Ramos (@dramosac:matrix.org) joined the room. 19:30:48
5 Jul 2025
Daniel Ramos (@dramosac:matrix.org)

Hi Nix and Disko community,

I'm new to the Nix ecosystem, and I have a lot of questions, so I really appreciate your patience while I learn 😊.

I'm trying to set up a home NAS server and was planning to use Disko to manage the disks. My setup is simple:

  • Use an SSD for the operating system (ESP, swap, and root).
  • Use two 10TB HDDs to create a Btrfs RAID1 pool for long-term storage.

My concern is whether Disko is designed for this type of scenario. From what I understand in the documentation, Disko can format and prepare the two HDDs with Btrfs during the initial installation, but if I later add a third 10TB disk, I cannot use Disko to add it to the existing pool and rebalance declaratively, and I would need to do this manually using btrfs device add and btrfs balance.

Is this understanding correct?

Is there any tool that allows fully declarative management and expansion of Btrfs arrays, or is Disko primarily intended for the installation and reprovisioning phase rather than for post-installation expansion?

Thanks in advance!

07:42:39
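
For what it's worth, here is a rough, untested sketch of what that initial layout could look like in Disko. Device paths, partition sizes, and the /srv/storage mountpoint are placeholders, and the RAID1 profile is simply passed through to mkfs.btrfs via extraArgs; treat it as one possible wiring rather than an officially supported multi-device Btrfs pattern:

  disko.devices.disk = {
    ssd = {
      type = "disk";
      device = "/dev/sda"; # placeholder: the OS SSD
      content = {
        type = "gpt";
        partitions = {
          ESP = {
            priority = 1;
            size = "512M";
            type = "EF00";
            content = {
              type = "filesystem";
              format = "vfat";
              mountpoint = "/boot";
            };
          };
          swap = {
            priority = 2;
            size = "16G"; # placeholder
            content = { type = "swap"; };
          };
          root = {
            priority = 3;
            size = "100%";
            content = {
              type = "filesystem";
              format = "ext4";
              mountpoint = "/";
            };
          };
        };
      };
    };
    hdd1 = {
      type = "disk";
      device = "/dev/sdb"; # placeholder: first 10TB HDD, partition only
      content = {
        type = "gpt";
        partitions.data.size = "100%";
      };
    };
    hdd2 = {
      type = "disk";
      device = "/dev/sdc"; # placeholder: second 10TB HDD
      content = {
        type = "gpt";
        partitions.data = {
          size = "100%";
          content = {
            type = "btrfs";
            # hand the other HDD's partition to mkfs.btrfs so the pool is
            # created as RAID1 across both disks; hdd1 is declared first so
            # its partition should already exist at this point (untested)
            extraArgs = [ "-d" "raid1" "-m" "raid1" "/dev/sdb1" ];
            mountpoint = "/srv/storage"; # placeholder
          };
        };
      };
    };
  };

Adding a third disk later would still be the manual btrfs device add / btrfs balance route you describe; as the replies below confirm, Disko has no declarative expansion or rebalance.
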
6 Jul 2025
shelvacu (@s:consortium.chat)

In reply to @dramosac:matrix.org
I cannot use Disko to add it to the existing pool and rebalance declaratively

I believe that's correct. I don't know of any tool that can do that; maybe you should make one!

06:54:45
magic_rb (@magic_rb:matrix.redalder.org)
Yeah, disko can't do that, sadly. Such a tool would be extremely difficult to sell people on, IMO; one mistake and your whole pool goes poof.
10:36:10
SigmaSquadron (@sigmasquadron:matrix.org)
I would appreciate an append-only tool that synced a disko configuration with the current partitioning layout. If you added a new disk to a pool, it'd handle the necessary setup for you, but if you removed a disk from the config, it wouldn't do anything.
10:43:49
lassulus (@lassulus:lassul.us)
In theory, format does that: you can run it multiple times and it will perform some of the actions, like adding a new disk if there is an empty new disk, adding a new dataset to a ZFS pool, or adding an LV to an LVM VG.
11:01:45
lassulus (@lassulus:lassul.us)
Although that is a bit experimental, and I didn't have the time to set up a full test suite yet.
11:02:04
SigmaSquadron (@sigmasquadron:matrix.org)
Running disko format on a working system is not something that screams "this is safe, don't worry, wink wink" to anyone.
11:03:48
lassulus (@lassulus:lassul.us)
It shouldn't :D (yet)
11:05:16
lassulus (@lassulus:lassul.us)
But if someone has time and motivation to work on it, I can give guidance; or just wait until I'm done with the other things on my questlog :)
11:06:09
musicmatze (@musicmatze:beyermatthi.as)
I have the same issue with ZFS, datasets, and so on. What I do is use Disko to generate the shell scripts that create new datasets, and then apply them manually. If I added another disk, I would do the same steps.
14:21:40
musicmatze (@musicmatze:beyermatthi.as)
So far that has worked nicely.
14:21:47
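
Concretely, that workflow amounts to extending the existing pool definition and then letting the generated script, a re-run of the experimental format mode, or a manual zfs create catch the real pool up. A hypothetical sketch (pool and dataset names are made up):

  disko.devices.zpool.tank.datasets = {
    # ... existing datasets ...

    # newly added dataset; re-running the (experimental) format step, or the
    # generated creation script, should only create what is missing
    backups = {
      type = "zfs_fs";
      mountpoint = "/backups";
      options.mountpoint = "legacy";
    };
  };
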
Derpy (they/any) (@reflux1291:catgirl.cloud) changed their display name from Any (they/any) to Derpy (they/any). 17:00:19
7 Jul 2025
Zempashi (@julien:ligi.fr) joined the room. 09:49:56
Zempashi (@julien:ligi.fr) changed their display name from Julien Girardin to Zempashi. 11:07:13
Vladislav Grechannik (@vladexa:matrix.org) joined the room. 17:28:14
Vladislav Grechannik (@vladexa:matrix.org)
Does anybody run multiple bcachefs subvolumes with encryption? It bothers me that it asks for a password each time on boot (in my case I have to enter my password 3 times, once for each subvolume).
17:31:33
@steeringwheelrules:tchncs.de
Are there any examples of a ZFS setup that works with hibernation? As far as I'm aware, it only needs to not be used for the swap partition, but the rest of the drives (including NixOS) can be ZFS, right?
22:31:26
magic_rb (@magic_rb:matrix.redalder.org)
No, ZFS shouldn't be used for hibernation, period.
22:33:15
8 Jul 2025
shelvacu (@s:consortium.chat)
In reply to @steeringwheelrules:tchncs.de
are there any examples of zfs setup that works with hibernation? as far as I'm aware it only needs to not be used for the swap partition, but the rest of the drives (including nixos) can be zfs, right?
My understanding is that for hibernation, the hiberfile is loaded in initrd. If the ZFS pool is also loaded/mounted for any reason during that initrd, then after the memory gets restored, ZFS will have inconsistent information about the pool. I wouldn't risk it.
02:43:17
magic_rb (@magic_rb:matrix.redalder.org)
Yeah, ZFS has no safeguards against this case, so theoretically, if you avoid the scenario above, it may be safe. But even then it's a very untested code path; though to be honest, so is suspend, and that seems to be working completely fine.
06:02:53
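
In practical terms, "avoiding the scenario above" means keeping the hibernation image off ZFS and not force-importing pools in the initrd. A minimal, hypothetical NixOS sketch (the by-partlabel path is a placeholder):

  # Keep swap, and thus the hibernation image, on a plain non-ZFS device.
  swapDevices = [ { device = "/dev/disk/by-partlabel/swap"; } ];
  boot.resumeDevice = "/dev/disk/by-partlabel/swap";

  # Never force-import pools in the initrd; a resumed kernel must not see a
  # pool that was imported and modified behind its back.
  boot.zfs.forceImportRoot = false;
  boot.zfs.forceImportAll = false;
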
Zempashi (@julien:ligi.fr) removed their profile picture. 15:54:47
9 Jul 2025
jonhermansen (@jonhermansen:matrix.org) joined the room. 01:17:27
@znaniye:matrix.org joined the room. 14:04:06
10 Jul 2025
bbigras (@bbigras:matrix.org) joined the room. 18:20:56
[0x4A6F] (@0x4a6f:matrix.org)

This might work for hibernation:

  disko.devices = {
    disk = {
      nvme0n1 = {
        device = "/dev/nvme0n1";
        type = "disk";
        content = {
          type = "gpt";
          partitions = {
            boot = {
              priority = 1;
              start = "34s";
              end = "2047s";
              type = "EF02"; # for grub MBR
            };
            ESP = {
              priority = 2;
              start = "2048s";
              end = "2099199s";
              type = "EF00";
              content = {
                type = "filesystem";
                format = "vfat";
                mountpoint = "/boot";
              };
            };
            luks = {
              priority = 3;
              start = "2099200s";
              end = "100%";
              type = "8309";
              content = {
                type = "luks";
                name = "crypted-nvme0n1";
                settings = {
                  allowDiscards = true;
                };
                content = {
                  type = "lvm_pv";
                  vg = "nvme0n1";
                };
              };
            };
          };
        };
      };
    };
    lvm_vg = {
      nvme0n1 = {
        type = "lvm_vg";
        lvs = {
          # LVs are created in alphabetic order for some reason
          # Prefixing them with a letter like this ensures desired creation order
          a_swap = {
            size = "64GiB"; # equal size to maximum available RAM
            content = {
              type = "swap";
            };
          };
          z_data = {
            size = "100%FREE";
            content = {
              type = "zfs";
              pool = "root";
            };
          };
        };
      };
    };
    zpool = {
      root = {
        type = "zpool";
        rootFsOptions = {
          compression = "lz4";
          "com.sun:auto-snapshot" = "false";
          mountpoint = "none";
          reservation = "16G";
        };
        postCreateHook = ''
          zfs snapshot root@blank;
          zfs snapshot root/nixos@blank;
          zfs snapshot root/nix@blank;
          zfs snapshot root/home@blank;
          zfs snapshot root/persist@blank
        '';

        datasets = {
          nixos = {
            type = "zfs_fs";
            mountpoint = "/";
            options.mountpoint = "legacy";
          };
          nix = {
            type = "zfs_fs";
            mountpoint = "/nix";
            options.mountpoint = "legacy";
          };
          home = {
            type = "zfs_fs";
            mountpoint = "/home";
            options.mountpoint = "legacy";
            options."com.sun:auto-snapshot" = "true";
          };
          persist = {
            type = "zfs_fs";
            mountpoint = "/persist";
            options.mountpoint = "legacy";
            options."com.sun:auto-snapshot" = "true";
          };
        };
      };
    };
  };

But be aware of:

  • zfs#12842 - make Linux hibernation (suspend-to-disk) more robust
  • nixpkgs#208037 - nixos/zfs: mitigate data loss issues when resuming from hibernate

You might also need these settings to enter that danger territory (make yourself familiar with a zpool recovery strategy first):

  boot.zfs.allowHibernation = true;
  boot.zfs.forceImportRoot = false;
  boot.zfs.forceImportAll = false;
21:17:01
@steeringwheelrules:tchncs.de
Thank you! I will try and play with this.
23:28:11
11 Jul 2025
Cathal (@cathal_mullan:matrix.org) joined the room. 19:53:53
