!oNSIfazDqEcwhcOjSL:matrix.org

disko

disko - declarative disk partitioning - https://github.com/nix-community/disko
11 Feb 2025
@projectinitiative:matrix.orgprojectinitiativeI will note I also tried with the zfs-with-vdevs example, adding it to the test machine config, and it also only has one disk listed under dependencies despite several being listed under zroot in the example file14:24:54
@lassulus:lassul.uslassulushmm, maybe there is indeed a bug14:25:56
@lassulus:lassul.uslassuluswhich didn't get triggered yet, but I thought the complex example would take care of that14:26:17
@projectinitiative:matrix.orgprojectinitiative

reproduction steps:

      nixosConfigurations.testmachine = lib.nixosSystem {
        system = "x86_64-linux";
        modules = [
          ./tests/disko-install/configuration.nix
          # ./example/hybrid.nix
          ./module.nix
          # ./example/bcachefs-multi-disk.nix
          # ./example/mdadm.nix
          ./example/zfs-with-vdevs.nix
        ];
      };

  • add different examples to load and play around with, toggling various configs
  • load the flake into nix repl
  • look at some of the _meta.deviceDependencies (see the repl sketch below)
14:27:29
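For reference, a minimal sketch of these repro steps in nix repl, assuming the repl is started from the root of the disko checkout (the :lf command needs flake support; the attribute path matches the one used later in this conversation):

$ nix repl
nix-repl> :lf .
nix-repl> :p nixosConfigurations.testmachine.config.disko.devices._meta.deviceDependencies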
@lassulus:lassul.uslassulusI will try to add that to my thaigersprint todo list tomorrow :D14:28:16
@projectinitiative:matrix.orgprojectinitiativeObligatory: I wish nix had better testing/debugging tools, this is taking way longer than it should to nail down my overall issue haha...14:30:10
@shift:c-base.orgshift
In reply to @lassulus:lassul.us
I will try to add that to my thaigersprint todo list tomorrow :D
You want issues for your sprint? Haha
20:09:11
12 Feb 2025
@lassulus:lassul.uslassulusYou have more? :D01:38:19
@projectinitiative:matrix.orgprojectinitiative
In reply to @lassulus:lassul.us
I will try to add that to my thaigersprint todo list tomorrow :D
I looked up what thaigersprint was, that seems unique. Talk about a cool experience
03:48:41
@lassulus:lassul.uslassulusok, deepMergeMap actually doesn't merge lists, since recursiveUpdate doesn't take care of that05:31:21
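An editorial illustration of that behaviour (the disks attribute name here is made up): nixpkgs' lib.recursiveUpdate only recurses into attribute sets, so lists are treated as leaf values and the right-hand list replaces the left-hand one instead of being merged:

lib.recursiveUpdate { disks = [ "vdb" ]; } { disks = [ "vdc" ]; }
# => { disks = [ "vdc" ]; }  (the "vdb" entry is dropped, not appended)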
@lassulus:lassul.uslassulus projectinitiative: https://github.com/nix-community/disko/pull/963 10:21:50
@projectinitiative:matrix.orgprojectinitiativelooks like this fixed the deviceDependencies list not showing up! I now just have to figure out why my type ordering is not working correctly.14:49:36
@lassulus:lassul.uslassulusIf you have a PR with the current state I could take a look tomorrowish15:17:11
@zvzg:matrix.orgSávioThanks for the comments on the binfmt PR. I'm still working on it17:27:57
@projectinitiative:matrix.orgprojectinitiativeCurrent state of my attempt: https://github.com/nix-community/disko/pull/96118:42:48
13 Feb 2025
@projectinitiative:matrix.orgprojectinitiative lassulus I think I discovered another bug. I fixed the issue by renaming my type to "dxcachefs" instead of "bcachefs". In the dependency sort list, if there are any types that sort alphabetically before the "disk" type, the sortDeviceDependencies function drops those at the top. So my pool object was getting collated as the very first entry in the _create functions. I don't think this was caught because mdadm and zpool always appear at the end of the device dependency list due to their position in the alphabet. I think the function will need a specific check to always put disk device types at the beginning of the list (see the sketch below) 22:30:01
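A hypothetical sketch of the check being proposed (prioritizeDisks and the sample entries are made up for illustration; this is not disko's actual sortDeviceDependencies): after sorting, any [ type name ] entry whose type is "disk" is moved to the front so disks are always created first:

let
  lib = (import <nixpkgs> { }).lib;
  # Move [ "disk" <name> ] entries to the front of an already-sorted list.
  prioritizeDisks = sorted:
    let byType = lib.partition (entry: builtins.head entry == "disk") sorted;
    in byType.right ++ byType.wrong;
in
prioritizeDisks [ [ "bcachefs" "pool1" ] [ "disk" "disk1" ] [ "zpool" "zroot" ] ]
# => [ [ "disk" "disk1" ] [ "bcachefs" "pool1" ] [ "zpool" "zroot" ] ]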
@projectinitiative:matrix.orgprojectinitiative
sorted = lib.sortDevicesByDependencies nixosConfigurations.testmachine.config.disko.devices._meta.deviceDependencies ({ bcachefs = nixosConfigurations.testmachine.config.disko.devices.bcachefs;  mdadm = nixosConfigurations.testmachine.config.disko.devices.mdadm; zpool = nixosConfigurations.testmachine.config.disko.devices.zpool; disk = nixosConfigurations.testmachine.config.disko.devices.disk; })

nix-repl> :p sorted
[
  [
    "bcachefs"
    "pool1"
  ]
  [
    "disk"
    "bcachefsdisk1"
  ]
  [
    "disk"
    "bcachefsdisk2"
  ]
  [
    "disk"
    "bcachefsmain"
  ]
  [
    "disk"
    "cache"
  ]
  [
    "disk"
    "data1"
  ]
  [
    "disk"
    "data2"
  ]
  [
    "disk"
    "data3"
  ]
  [
    "disk"
    "dedup1"
  ]
  [
    "disk"
    "dedup2"
  ]
  [
    "disk"
    "dedup3"
  ]
  [
    "disk"
    "disk1"
  ]
  [
    "disk"
    "disk2"
  ]
  [
    "disk"
    "log1"
  ]
  [
    "disk"
    "log2"
  ]
  [
    "disk"
    "log3"
  ]
  [
    "disk"
    "spare"
  ]
  [
    "disk"
    "special1"
  ]
  [
    "disk"
    "special2"
  ]
  [
    "disk"
    "special3"
  ]
  [
    "mdadm"
    "raid1"
  ]
  [
    "zpool"
    "zroot"
  ]
]
22:31:20
@projectinitiative:matrix.orgprojectinitiativeMade a PR with the function changes that fixed my other problem: https://github.com/nix-community/disko/pull/96822:45:49
14 Feb 2025
@lassulus:lassul.uslassulushmm, weird that this bug appears, I thought the bcachefs devices would depend on the disks and would be created afterwards automatically by the sorting03:56:01
@lassulus:lassul.uslassulusI guess I have to take a deeper look again after my hangover has passed03:56:11
@projectinitiative:matrix.orgprojectinitiative

Another oddity: I am not sure the GPT formatter is respecting the device paths. When running the test VM against the mdadm config, I would have assumed it would fail, since the example file doesn't have a valid device path (/dev/my-disk). Checking the logs, it looks like it just picks the next disk in the sequence:

machine # [   62.117159]  vdb: vdb1 vdb2
machine # [   62.244857] systemd[1]: Finished Virtual Console Setup.
machine # [   63.165182]  vdb: vdb1 vdb2
machine # + partprobe /dev/vdb
machine # + udevadm trigger --subsystem-match=block
machine # + udevadm settle
machine # + device=/dev/disk/by-partlabel/disk-disk1-mdadm
machine # + name=raid1
machine # + type=mdraid
machine # + echo /dev/disk/by-partlabel/disk-disk1-mdadm
machine # + cat /tmp/tmp.Z9w0tCq6Ai/raid_raid1
machine # + device=/dev/vdc
machine # + imageName=disk2
machine # + imageSize=2G
machine # + name=disk2
machine # + type=disk
machine # + device=/dev/vdc
machine # + efiGptPartitionFirst=1
machine # + type=gpt
machine # + blkid /dev/vdc
machine # /dev/vdc: PTUUID="c27c8535-3a4b-4d3d-99a6-bd5b541b52a1" PTTYPE="gpt"
{
  disko.devices = {
    disk = {
      disk1 = {
        type = "disk";
        device = "/dev/my-disk";
        content = {
          type = "gpt";
          partitions = {
            boot = {
              size = "1M";
              type = "EF02"; # for grub MBR
            };
            mdadm = {
              size = "100%";
              content = {
                type = "mdraid";
                name = "raid1";
              };
            };
          };
        };
      };
      disk2 = {
        type = "disk";
        device = "/dev/my-disk2";
        content = {
          type = "gpt";
          partitions = {
            boot = {
              size = "1M";
              type = "EF02"; # for grub MBR
            };
            mdadm = {
              size = "100%";
              content = {
                type = "mdraid";
                name = "raid1";
              };
            };
          };
        };
      };
    };
    mdadm = {
      raid1 = {
        type = "mdadm";
        level = 1;
        content = {
          type = "gpt";
          partitions = {
            primary = {
              size = "100%";
              content = {
                type = "filesystem";
                format = "ext4";
                mountpoint = "/";
              };
            };
          };
        };
      };
    };
  };
}

I tested this by changing the path to /dev/disk/by-path/virtio-pci-0000:00:0a.0, which maps to
lrwxrwxrwx 1 root root 9 Feb 14 05:33 virtio-pci-0000:00:0a.0 -> ../../vdc
yet it completely ignores that. Maybe I am not understanding this correctly O.o

05:49:13
@projectinitiative:matrix.orgprojectinitiative Figured out that the prepareDiskoConfig function in the lib/tests framework was overriding the device property for tests. Curious why that is a better option than just using the virtio path. It creates some confusion and technically isn't validating all aspects of the pathing 16:03:30
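A rough sketch of the kind of override being described (rewriteDisks and the device list are made up; this is not disko's actual prepareDiskoConfig): each disk's device attribute is replaced with a test VM disk while the rest of the config is preserved:

let
  lib = (import <nixpkgs> { }).lib;
  # Hypothetical virtio disks available inside the test VM.
  testDevices = [ "/dev/vdb" "/dev/vdc" "/dev/vdd" "/dev/vde" ];
  # Replace every disk's `device` with the next test VM disk,
  # leaving the rest of the disko config untouched.
  rewriteDisks = cfg:
    lib.recursiveUpdate cfg {
      disko.devices.disk = lib.listToAttrs (lib.imap0
        (i: name: lib.nameValuePair name { device = builtins.elemAt testDevices i; })
        (lib.attrNames cfg.disko.devices.disk));
    };
in
rewriteDisks { disko.devices.disk.disk1 = { type = "disk"; device = "/dev/my-disk"; }; }
# => disk1.device becomes "/dev/vdb"; type (and any content) is preserved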
@lassulus:lassul.uslassulusBecause we want to test real disko configs in VMs and not just configs crafted for VMs16:18:29
@projectinitiative:matrix.orgprojectinitiativeright, but if the config is mutated to fit a VM, isn't that essentially the same thing? 16:19:26
