!oNSIfazDqEcwhcOjSL:matrix.org

disko

356 Members · 89 Servers
disko - declarative disk partitioning - https://github.com/nix-community/disko



8 Jun 2025
phaer (@phaer:matrix.org): If you don't set a static mountpoint in your disko config, I don't think it's possible for the general case during eval time. I think if you leave it empty the default should just be "/${dataset.name}"? During run time you could just do "zfs get mountpoint dataset". [12:31:07]
musicmatze (@musicmatze:beyermatthi.as): So, from what I found during research, I should use options.mountpoint = "legacy"; and then declare my fileSystems with the filesystems that should be mounted automatically; for the others I must find a better way, like hard-coding stuff maybe. [13:07:10]
musicmatze (@musicmatze:beyermatthi.as): So... can I just write my own systemd mount unit for datasets I do not want to mount automatically? And if yes, how can I tell systemd that the dataset encryption password has to be retrieved from the user? I suppose there are ways... because with a unit, I can make other units depend on it (like starting navidrome only if the dataset with the music is mounted). [13:13:34]
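A mount unit along those lines can be declared from NixOS config. A minimal sketch, assuming a legacy-mountpoint dataset named tank/music and the navidrome service; pool, dataset, and paths are hypothetical, and key loading for encrypted datasets is not shown:

```nix
# Sketch only: assumes "tank/music" was created with mountpoint=legacy,
# so ZFS itself never auto-mounts it.
{
  # Explicit mount unit; systemd derives the unit name from the
  # mount path, so this becomes srv-music.mount.
  systemd.mounts = [{
    what = "tank/music";
    where = "/srv/music";
    type = "zfs";
    options = "noauto"; # not mounted at boot, only when pulled in
  }];

  # Other units can then depend on the mount.
  systemd.services.navidrome = {
    requires = [ "srv-music.mount" ];
    after = [ "srv-music.mount" ];
  };
}
```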
9 Jun 2025
Austin (@austinbutler:matrix.org) joined the room. [03:28:26]
schuelermine (@schuelermine:matrix.org) joined the room. [20:13:58]
schuelermine (@schuelermine:matrix.org): Is there a way to configure a swap file using Disko? [20:14:09]
schuelermine (@schuelermine:matrix.org): Specifically, a swap file on an ext4-in-LUKS partition. [20:14:21]
10 Jun 2025
caraiiwala (@caraiiwala:beeper.com) joined the room. [00:26:37]
caraiiwala (@caraiiwala:beeper.com):

I'm new to ZFS and trying to convert my RAID setup to Disko. The deployment with nixos-anywhere was successful, but once rebooted, the system failed to boot. Here is my disko configuration:

      {
        disko.devices = {
          disk = {
            root = {
              type = "disk";
              device = "/dev/disk/by-id/...";
              content.type = "gpt";
              content.partitions = {
                ESP = {
                  type = "EF00";
                  size = "64M";
                  content = {
                    type = "filesystem";
                    format = "vfat";
                    mountpoint = "/boot";
                  };
                };
                zfs = {
                  size = "100%";
                  content.type = "zfs";
                  content.pool = "root";
                };
              };
            };
            raid-1 = {
              type = "disk";
              device = "/dev/disk/by-id/...";
              content.type = "gpt";
              content.partitions.zfs = {
                size = "100%";
                content.type = "zfs";
                content.pool = "raid";
              };
            };
            raid-2 = {
              type = "disk";
              device = "/dev/disk/by-id/...";
              content.type = "gpt";
              content.partitions.zfs = {
                size = "100%";
                content.type = "zfs";
                content.pool = "raid";
              };
            };
            raid-3 = {
              type = "disk";
              device = "/dev/disk/by-id/...";
              content.type = "gpt";
              content.partitions.zfs = {
                size = "100%";
                content.type = "zfs";
                content.pool = "raid";
              };
            };
          };
          zpool = {
            root = {
              type = "zpool";
              rootFsOptions.mountpoint = "none";
              datasets = {
                root.type = "zfs_fs";
                root.mountpoint = "/";
                home.type = "zfs_fs";
                home.mountpoint = "/home";
              };
            };
            raid = {
              type = "zpool";
              rootFsOptions.mountpoint = "none";
              mode.topology.type = "topology";
              mode.topology.vdev = [
                {
                  mode = "raidz1";
                  members = ["raid-1" "raid-2" "raid-3"];
                }
              ];
              datasets.raid = {
                type = "zfs_fs";
                mountpoint = "/mnt/raid";
              };
            };
          };
        };
      }
[00:45:00]
shelvacu (@s:consortium.chat): You need to either use the dataset option mountpoint=legacy and configure mountpoints with the usual fstab (which I recommend for the root fs; "legacy" is a misnomer), or tell ZFS to run an import in initrd. [01:20:28]
shelvacu (@s:consortium.chat): https://search.nixos.org/options?channel=25.05&show=boot.zfs.extraPools&from=0&size=50&sort=relevance&type=packages&query=boot+zfs [01:21:45]
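For reference, the legacy-mountpoint approach looks roughly like this in a disko config. A sketch modeled on disko's ZFS examples; pool and dataset names are illustrative:

```nix
{
  disko.devices.zpool.root = {
    type = "zpool";
    rootFsOptions.mountpoint = "none";
    datasets.root = {
      type = "zfs_fs";
      # ZFS itself will not auto-mount this dataset...
      options.mountpoint = "legacy";
      # ...instead disko generates a normal fileSystems/fstab entry.
      mountpoint = "/";
    };
  };
}
```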
caraiiwala (@caraiiwala:beeper.com):

Thanks for responding. I actually tried booting it with an even simpler non-ZFS config and I'm having the same issue:

          disko.devices = {
            disk = {
              root = {
                type = "disk";
                device = "/dev/disk/by-id/scsi-36b82a720cf60ce002fd94d2e2991b17e";
                content.type = "gpt";
                content.partitions = {
                  ESP = {
                    type = "EF00";
                    size = "64M";
                    content = {
                      type = "filesystem";
                      format = "vfat";
                      mountpoint = "/boot";
                      mountOptions = ["umask=077"];
                    };
                  };
                  root = {
                    size = "100%";
                    content.type = "filesystem";
                    content.format = "ext4";
                    content.mountpoint = "/";
                  };
                };
              };
            };
          };
[01:28:56]
caraiiwala (@caraiiwala:beeper.com): So now I'm very confused. [01:29:13]
caraiiwala (@caraiiwala:beeper.com): Well, I've just tried an identical simple config on another system and it worked fine. [02:03:10]
shelvacu (@s:consortium.chat): An easy check is to look at /etc/fstab in the final system and make sure it looks right. [02:03:16]
shelvacu (@s:consortium.chat), in reply to caraiiwala ("Well I've just tried an identical simple config on another system and it worked fine"): Ah, then likely not a disko issue. [02:03:32]
caraiiwala (@caraiiwala:beeper.com): So the main difference between these two systems is that the one this worked on has a hardware RAID controller that supports JBOD. The problematic one doesn't, so in an effort to try out ZFS I experimented with RAID0 passthrough. [02:06:44]
caraiiwala (@caraiiwala:beeper.com): Going to revert the RAID config and see if that fixes it. [02:07:28]
matthewcroughan (@matthewcroughan:defenestrate.it): Morgan (@numinit): [10:17:38]
matthewcroughan (@matthewcroughan:defenestrate.it): I stumbled upon https://github.com/util-linux/util-linux/issues/3495#issuecomment-2954264763 and I'm also affected. [10:17:48]
matthewcroughan (@matthewcroughan:defenestrate.it): lassulus: Mic92: https://github.com/util-linux/util-linux/issues/3613 [11:53:00]
matthewcroughan (@matthewcroughan:defenestrate.it): This details a fix in case other people experience the same issues. [11:53:14]
matthewcroughan (@matthewcroughan:defenestrate.it): The TL;DR is that your wiping process will not work with ZFS disks before util-linux 2.40. [11:53:52]
matthewcroughan (@matthewcroughan:defenestrate.it): So disko, and anything else that uses libblkid, will be leaving ZFS signatures behind on disks, which causes them to now be unmountable if using partlabel, which disko does. [11:54:20]
matthewcroughan (@matthewcroughan:defenestrate.it): wipefs/libblkid from 2.39.4 doesn't deal with ZFS signatures properly, but newer versions do, causing them to be visible, which confuses udev and systemd when populating /dev/disk/by-partlabel. [11:55:12]
matthewcroughan (@matthewcroughan:defenestrate.it): I doubt they'll fix it; it's pretty horrendous though. [11:55:28]
matthewcroughan (@matthewcroughan:defenestrate.it):

For me the experience was as follows:

  1. Use disko (util-linux <2.39.4) months ago to make a zfs root disk
  2. Use disko (util-linux <2.39.4) later to make a bcachefs root disk
  3. Everything is fine
  4. Upgrade to NixOS 25.05
  5. Timeouts on boot mounting because /dev/disk/by-partlabel for the bcachefs disk doesn't exist on NixOS 25.05, but does on 24.11

The reason turned out to be the above.

[11:57:56]
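Until a fixed util-linux is in place, one manual workaround is to clear the stale ZFS labels yourself before letting disko wipe the disk. This is a sketch only: the device path is a placeholder, and zpool labelclear is destructive to anything still using that device:

```shell
# Placeholder device; triple-check before running, this destroys data.
DEV=/dev/disk/by-id/CHANGE-ME-part2

# List the signatures libblkid can currently see on the partition.
wipefs "$DEV"

# Explicitly clear ZFS labels that wipefs from util-linux < 2.40
# fails to remove.
zpool labelclear -f "$DEV"

# Then remove any remaining filesystem signatures.
wipefs --all "$DEV"
```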



Room Version: 10