
disko

disko - declarative disk partitioning - https://github.com/nix-community/disko



8 Jun 2025
@pink_rocky:tchncs.de rocky ((λ💝.💝)💐) (she/they; ask before DM please): No, I’m using the command I saw in the README:
sudo nix --experimental-features "nix-command flakes" run github:nix-community/disko/latest -- --mode destroy,format,mount ./etc/nixos/disk-config.nix, which I run from inside /mnt with my configurations
00:26:16
@pink_rocky:tchncs.de rocky ((λ💝.💝)💐) (she/they; ask before DM please): I’ll work on that pastebin for you. 01:37:59
@musicmatze:beyermatthi.as musicmatze

Hello again.
If I use disko to set up my ZFS datasets, but do not mount them during boot (because they are encrypted and the machine must boot without attendance), how can I get the path where ZFS would mount a dataset if I manually run zfs mount -a (for example)?

Right now all I can see is (in nix repl): outputs.nixosConfigurations.myHost.config.disko.devices.zpool.zroot.datasets."myDataSet"._name, which looks like something internal that I should not base my configuration on, should I?
Because the dataset is not marked for automatic mounting, the attribute outputs.nixosConfigurations.myHost.config.disko.devices.zpool.zroot.datasets."myDataSet".mountpoint is null.

08:55:41
@phaer:matrix.org phaer: If you don't set a static mountpoint in your disko config, I don't think it's possible in the general case at eval time. If you leave it empty, I think the default should just be "/${dataset.name}"? At run time you could just do "zfs get mountpoint dataset". 12:31:07
@musicmatze:beyermatthi.as musicmatze: From what I found during research, I should use options.mountpoint = "legacy"; and then declare fileSystems entries for the filesystems that should be mounted automatically; for the others I must find a better way, like hard-coding stuff maybe. 13:07:10
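
A minimal sketch of that first route, with hypothetical pool/dataset names (zroot, music) and mount path (/srv/music): give the dataset a legacy mountpoint in disko, then mount it at boot through an ordinary fileSystems entry, which ends up in fstab.

    {
      disko.devices.zpool.zroot.datasets."music" = {
        type = "zfs_fs";
        # "legacy" hands mounting over to fstab/systemd instead of ZFS itself
        options.mountpoint = "legacy";
      };
      # mounted automatically at boot, like any other fstab entry
      fileSystems."/srv/music" = {
        device = "zroot/music";
        fsType = "zfs";
      };
    }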
@musicmatze:beyermatthi.as musicmatze: So... can I just write my own systemd mount unit for datasets I do not want to mount automatically? And if yes, how can I tell systemd that the dataset encryption password has to be retrieved from the user? I suppose there are ways... Because with a unit, I can make other units depend on it (like starting navidrome only if the dataset with the music is mounted). 13:13:34
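
A minimal, untested sketch of that idea, using the same hypothetical names as above: a noauto mount unit plus a service-level dependency that pulls it in. Loading the encryption key for the dataset still has to happen first (e.g. running zfs load-key interactively or over SSH) before the dependent service is started.

    {
      # a mount unit that is only started when something requires it
      systemd.mounts = [{
        what = "zroot/music";
        where = "/srv/music";
        type = "zfs";
        options = "noauto";
      }];
      # RequiresMountsFor adds Requires=/After= on the mount unit above, so
      # starting navidrome pulls the mount in (and fails if the mount fails)
      systemd.services.navidrome.unitConfig.RequiresMountsFor = "/srv/music";
    }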
9 Jun 2025
@austinbutler:matrix.org Austin joined the room. 03:28:26
@schuelermine:matrix.org schuelermine joined the room. 20:13:58
@schuelermine:matrix.org schuelermine: Is there a way to configure a swap file using Disko? 20:14:09
@schuelermine:matrix.org schuelermine: Specifically, a swap file on an ext4-in-LUKS partition. 20:14:21
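
As far as I know disko's filesystem type has no swap-file option, so one route is to let disko handle the LUKS + ext4 layer and have NixOS create the swap file via swapDevices. A minimal, untested sketch (device path and LUKS name are placeholders):

    {
      disko.devices.disk.main = {
        type = "disk";
        device = "/dev/disk/by-id/...";
        content.type = "gpt";
        content.partitions.luks = {
          size = "100%";
          content = {
            type = "luks";
            name = "cryptroot";
            content = {
              type = "filesystem";
              format = "ext4";
              mountpoint = "/";
            };
          };
        };
      };
      # NixOS creates /swapfile at the given size (in MiB) on activation,
      # inside the ext4 filesystem that lives on the LUKS volume
      swapDevices = [ { device = "/swapfile"; size = 8192; } ];
    }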
10 Jun 2025
@caraiiwala:beeper.com caraiiwala joined the room. 00:26:37
@caraiiwala:beeper.com caraiiwala

I'm new to ZFS and trying to convert my RAID setup to Disko. The deployment with nixos-anywhere was successful, but once rebooted, the system failed to boot. Here is my disko configuration:

      {
        disko.devices = {
          disk = {
            root = {
              type = "disk";
              device = "/dev/disk/by-id/...";
              content.type = "gpt";
              content.partitions = {
                ESP = {
                  type = "EF00";
                  size = "64M";
                  content = {
                    type = "filesystem";
                    format = "vfat";
                    mountpoint = "/boot";
                  };
                };
                zfs = {
                  size = "100%";
                  content.type = "zfs";
                  content.pool = "root";
                };
              };
            };
            raid-1 = {
              type = "disk";
              device = "/dev/disk/by-id/...";
              content.type = "gpt";
              content.partitions.zfs = {
                size = "100%";
                content.type = "zfs";
                content.pool = "raid";
              };
            };
            raid-2 = {
              type = "disk";
              device = "/dev/disk/by-id/...";
              content.type = "gpt";
              content.partitions.zfs = {
                size = "100%";
                content.type = "zfs";
                content.pool = "raid";
              };
            };
            raid-3 = {
              type = "disk";
              device = "/dev/disk/by-id/...";
              content.type = "gpt";
              content.partitions.zfs = {
                size = "100%";
                content.type = "zfs";
                content.pool = "raid";
              };
            };
          };
          zpool = {
            root = {
              type = "zpool";
              rootFsOptions.mountpoint = "none";
              datasets = {
                root.type = "zfs_fs";
                root.mountpoint = "/";
                home.type = "zfs_fs";
                home.mountpoint = "/home";
              };
            };
            raid = {
              type = "zpool";
              rootFsOptions.mountpoint = "none";
              mode.topology.type = "topology";
              mode.topology.vdev = [
                {
                  mode = "raidz1";
                  members = ["raid-1" "raid-2" "raid-3"];
                }
              ];
              datasets.raid = {
                type = "zfs_fs";
                mountpoint = "/mnt/raid";
              };
            };
          };
        };
      }
00:45:00
@s:consortium.chat shelvacu: You need to either use the dataset option mountpoint=legacy and configure mountpoints with the usual fstab (which I recommend for the root fs; "legacy" is a misnomer), or tell ZFS to run an import in the initrd. 01:20:28
@s:consortium.chat shelvacu: https://search.nixos.org/options?channel=25.05&show=boot.zfs.extraPools&from=0&size=50&sort=relevance&type=packages&query=boot+zfs 01:21:45
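
Applied to the config above, a minimal, untested sketch of both routes: a legacy mountpoint for the root dataset (disko then generates an ordinary fileSystems/fstab entry from mountpoint), and an explicit boot-time import for the secondary pool.

    {
      # route 1: legacy mountpoint for the root filesystem
      disko.devices.zpool.root.datasets.root = {
        type = "zfs_fs";
        options.mountpoint = "legacy";
        mountpoint = "/";
      };
      # route 2: import the "raid" pool at boot even though nothing in
      # fileSystems references it; its datasets then mount via zfs-mount
      boot.zfs.extraPools = [ "raid" ];
    }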
@caraiiwala:beeper.com caraiiwala

Thanks for responding. I actually tried booting it with an even simpler non-ZFS config and I'm having the same issue:

          disko.devices = {
            disk = {
              root = {
                type = "disk";
                device = "/dev/disk/by-id/scsi-36b82a720cf60ce002fd94d2e2991b17e";
                content.type = "gpt";
                content.partitions = {
                  ESP = {
                    type = "EF00";
                    size = "64M";
                    content = {
                      type = "filesystem";
                      format = "vfat";
                      mountpoint = "/boot";
                      mountOptions = ["umask=077"];
                    };
                  };
                  root = {
                    size = "100%";
                    content.type = "filesystem";
                    content.format = "ext4";
                    content.mountpoint = "/";
                  };
                };
              };
            };
          };
01:28:56
@caraiiwala:beeper.com caraiiwala: So now I'm very confused. 01:29:13
@caraiiwala:beeper.com caraiiwala: Well, I've just tried an identical simple config on another system and it worked fine. 02:03:10
@s:consortium.chat shelvacu: An easy check is to look at /etc/fstab in the final system and make sure it looks right. 02:03:16
@s:consortium.chat shelvacu
In reply to @caraiiwala:beeper.com
Well I've just tried an identical simple config on another system and it worked fine
ah, then likely not a disko issue
02:03:32
@caraiiwala:beeper.com caraiiwala: So the main difference between these two systems is that the one this worked on has a hardware RAID controller that supports JBOD. The problematic one doesn't, so in an effort to try out ZFS I experimented with RAID0 passthrough. 02:06:44
@caraiiwala:beeper.com caraiiwala: Going to revert the RAID config and see if that fixes it. 02:07:28
@matthewcroughan:defenestrate.it matthewcroughan: Morgan (@numinit): 10:17:38
@matthewcroughan:defenestrate.it matthewcroughan: I stumbled upon https://github.com/util-linux/util-linux/issues/3495#issuecomment-2954264763 and I'm also affected. 10:17:48
@matthewcroughan:defenestrate.it matthewcroughan: lassulus: Mic92 https://github.com/util-linux/util-linux/issues/3613 11:53:00
@matthewcroughan:defenestrate.it matthewcroughan: This details a fix in case other people experience the same issues. 11:53:14
@matthewcroughan:defenestrate.it matthewcroughan: The TL;DR is that your wiping process will not work with ZFS disks before util-linux 2.40. 11:53:52
@matthewcroughan:defenestrate.it matthewcroughan: So disko, and anything else that uses libblkid, will leave ZFS signatures behind on disks, which causes them to now be unmountable if using partlabel, which disko does. 11:54:20


