
disko

358 Members · 92 Servers
disko - declarative disk partitioning - https://github.com/nix-community/disko



12 Feb 2025
@lassulus:lassul.uslassulusIf you have a PR with the current state I could take a look tomorrowish15:17:11
@zvzg:matrix.orgSávioThanks for the comments on the binfmt PR. I'm still working on it17:27:57
@projectinitiative:matrix.orgprojectinitiativeCurrent state of my attempt: https://github.com/nix-community/disko/pull/96118:42:48
13 Feb 2025
@projectinitiative:matrix.orgprojectinitiative lassulus I think I discovered another bug. I fixed the issue by renaming my type to "dxcachefs" instead of "bcachefs". In the dependency sort list, if there are any types that sort alphabetically before the "disk" type, the sortDeviceDependencies function places those at the top. So my pool object was getting sorted as the very first entry in the _create functions. I don't think this was caught before because mdadm and zpool always appear at the end of the device dependency list due to their position in the alphabet. I think the function will need a specific check to always put disk device types at the beginning of the list 22:30:01
@projectinitiative:matrix.orgprojectinitiative
sorted = lib.sortDevicesByDependencies nixosConfigurations.testmachine.config.disko.devices._meta.deviceDependencies ({ bcachefs = nixosConfigurations.testmachine.config.disko.devices.bcachefs;  mdadm = nixosConfigurations.testmachine.config.disko.devices.mdadm; zpool = nixosConfigurations.testmachine.config.disko.devices.zpool; disk = nixosConfigurations.testmachine.config.disko.devices.disk; })

nix-repl> :p sorted
[
  [
    "bcachefs"
    "pool1"
  ]
  [
    "disk"
    "bcachefsdisk1"
  ]
  [
    "disk"
    "bcachefsdisk2"
  ]
  [
    "disk"
    "bcachefsmain"
  ]
  [
    "disk"
    "cache"
  ]
  [
    "disk"
    "data1"
  ]
  [
    "disk"
    "data2"
  ]
  [
    "disk"
    "data3"
  ]
  [
    "disk"
    "dedup1"
  ]
  [
    "disk"
    "dedup2"
  ]
  [
    "disk"
    "dedup3"
  ]
  [
    "disk"
    "disk1"
  ]
  [
    "disk"
    "disk2"
  ]
  [
    "disk"
    "log1"
  ]
  [
    "disk"
    "log2"
  ]
  [
    "disk"
    "log3"
  ]
  [
    "disk"
    "spare"
  ]
  [
    "disk"
    "special1"
  ]
  [
    "disk"
    "special2"
  ]
  [
    "disk"
    "special3"
  ]
  [
    "mdadm"
    "raid1"
  ]
  [
    "zpool"
    "zroot"
  ]
]
22:31:20
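The failure mode described above can be modelled outside of Nix. The following is an illustrative Python sketch, not disko's actual sortDevicesByDependencies (all names here are hypothetical), assuming a Kahn-style topological sort that drains ready devices in plain alphabetical (type, name) order:

```python
from collections import defaultdict

# Illustrative model (not disko's Nix code): a dependency sort whose
# tie-break is alphabetical order on (type, name). "bcachefs" sorts before
# "disk", so if the pool's dependency edges on its member disks are missing,
# the pool is emitted first; "mdadm" and "zpool" mask the bug because they
# sort after "disk".
def sort_devices(devices, dependencies):
    indegree = {dev: 0 for dev in devices}
    dependents = defaultdict(list)
    for dev, deps in dependencies.items():
        for dep in deps:
            indegree[dev] += 1
            dependents[dep].append(dev)
    ready = sorted(dev for dev in devices if indegree[dev] == 0)
    ordered = []
    while ready:
        dev = ready.pop(0)   # naive alphabetical tie-break; the fix suggested
        ordered.append(dev)  # in the chat would prioritise the "disk" type here
        for nxt in dependents[dev]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                ready.append(nxt)
        ready.sort()
    return ordered

devices = [("bcachefs", "pool1"), ("disk", "disk1"), ("disk", "disk2"), ("mdadm", "raid1")]
deps = {("mdadm", "raid1"): [("disk", "disk1"), ("disk", "disk2")]}  # pool1's edges missing
print(sort_devices(devices, deps)[0])  # ('bcachefs', 'pool1') — before any disk
```

With the missing edges, the pool lands ahead of every disk, matching the `:p sorted` output above, while mdadm (whose edges exist) correctly sorts last.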
@projectinitiative:matrix.orgprojectinitiativeMade a PR with the function changes that fixed my other problem: https://github.com/nix-community/disko/pull/96822:45:49
14 Feb 2025
@lassulus:lassul.uslassulushmm, weird that this bug appears, I thought the bcachefs devices would depend on the disks and would be created afterwards automatically by the sorting03:56:01
@lassulus:lassul.uslassulusI guess I have to take a deeper look again after my hangover has passed03:56:11
@projectinitiative:matrix.orgprojectinitiative

Another weird oddity: I am not sure the GPT formatter is respecting the device paths. When running the test VM against the mdadm config, I would have assumed it would fail, since the example file doesn't have a valid device path (/dev/my-drive). Checking the logs, it just looks like it picks the next disk in the sequence:

machine # [   62.117159]  vdb: vdb1 vdb2
machine # [   62.244857] systemd[1]: Finished Virtual Console Setup.
machine # [   63.165182]  vdb: vdb1 vdb2
machine # + partprobe /dev/vdb
machine # + udevadm trigger --subsystem-match=block
machine # + udevadm settle
machine # + device=/dev/disk/by-partlabel/disk-disk1-mdadm
machine # + name=raid1
machine # + type=mdraid
machine # + echo /dev/disk/by-partlabel/disk-disk1-mdadm
machine # + cat /tmp/tmp.Z9w0tCq6Ai/raid_raid1
machine # + device=/dev/vdc
machine # + imageName=disk2
machine # + imageSize=2G
machine # + name=disk2
machine # + type=disk
machine # + device=/dev/vdc
machine # + efiGptPartitionFirst=1
machine # + type=gpt
machine # + blkid /dev/vdc
machine # /dev/vdc: PTUUID="c27c8535-3a4b-4d3d-99a6-bd5b541b52a1" PTTYPE="gpt"
{
  disko.devices = {
    disk = {
      disk1 = {
        type = "disk";
        device = "/dev/my-disk";
        content = {
          type = "gpt";
          partitions = {
            boot = {
              size = "1M";
              type = "EF02"; # for grub MBR
            };
            mdadm = {
              size = "100%";
              content = {
                type = "mdraid";
                name = "raid1";
              };
            };
          };
        };
      };
      disk2 = {
        type = "disk";
        device = "/dev/my-disk2";
        content = {
          type = "gpt";
          partitions = {
            boot = {
              size = "1M";
              type = "EF02"; # for grub MBR
            };
            mdadm = {
              size = "100%";
              content = {
                type = "mdraid";
                name = "raid1";
              };
            };
          };
        };
      };
    };
    mdadm = {
      raid1 = {
        type = "mdadm";
        level = 1;
        content = {
          type = "gpt";
          partitions = {
            primary = {
              size = "100%";
              content = {
                type = "filesystem";
                format = "ext4";
                mountpoint = "/";
              };
            };
          };
        };
      };
    };
  };
}

I tested this by changing the path to /dev/disk/by-path/virtio-pci-0000:00:0a.0, which maps to
lrwxrwxrwx 1 root root 9 Feb 14 05:33 virtio-pci-0000:00:0a.0 -> ../../vdc
yet it completely ignores that. Maybe I am not understanding this correctly O.o

05:49:13
@projectinitiative:matrix.orgprojectinitiative Figured out that the prepareDiskoConfig function in the lib/tests framework was overriding the device property for tests. Curious why that is a better option than just using the virtio path. It creates some confusion and technically isn't validating all aspects of the pathing 16:03:30
@lassulus:lassul.uslassulusBecause we want to test real disko configs in VMS and not just configs crafted for VMS16:18:29
@projectinitiative:matrix.orgprojectinitiativeright, but if the config is mutated internally to fit a VM, isn't that essentially the same thing? 16:19:26
@lassulus:lassul.uslassulusprobably? we mainly use this so we can have config.system.build.installTest or vmWithDisko on every machine that uses disko16:21:26
@projectinitiative:matrix.orgprojectinitiativeI say this because I ran into this issue because of the mutation: the default device appears empty, so the default value in the gpt module tries to set the device path to a partition label based on the disk config's top-level name, but that won't always exist, especially if the device names are mapped to /dev/vd* 16:23:42
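The interaction being described might be pictured with a small sketch, with the caveat that these helpers are hypothetical illustrations, not disko's actual code: a test harness rewrites each disk's `device` to sequential /dev/vd* nodes, while a gpt-style default derives the partition path from the disk's top-level attribute name (the disk-&lt;disk&gt;-&lt;partition&gt; partlabel scheme is the one visible in the machine logs above):

```python
# Hypothetical illustration of the mismatch described above (not disko code).

def prepare_test_config(disks):
    """Rewrite each disk's device to sequential virtio nodes, as a test
    harness might do so that real configs run unmodified in a VM."""
    letters = "abcdefghijklmnopqrstuvwxyz"
    return {name: {**cfg, "device": f"/dev/vd{letters[i]}"}
            for i, (name, cfg) in enumerate(sorted(disks.items()))}

def default_partition_device(disk_name, part_name):
    """gpt-module-style default: derive the partition path from the disk's
    top-level attribute name via a partlabel (scheme taken from the logs:
    /dev/disk/by-partlabel/disk-<disk>-<partition>)."""
    return f"/dev/disk/by-partlabel/disk-{disk_name}-{part_name}"

disks = {"disk1": {"device": "/dev/my-disk"}, "disk2": {"device": "/dev/my-disk2"}}
prepared = prepare_test_config(disks)
print(prepared["disk2"]["device"])                 # /dev/vdb
print(default_partition_device("disk1", "mdadm"))  # /dev/disk/by-partlabel/disk-disk1-mdadm
```

The partlabel default only resolves if the label was actually written to the (rewritten) device, which is where the two mechanisms can drift apart.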
@projectinitiative:matrix.orgprojectinitiativeStill trying to wrap my head around the whole architecture and testing framework. Things seem to work how I understand them with real hardware, even on an external test VM. When using the built-in testing framework and VM, the behavior seems different from my understanding. Might still be missing a piece 16:27:00
@lassulus:lassul.uslassulusI'm not sure I understand the issue but yeah, the test framework is a bit underdocumented and I mostly did it in one big step16:34:58
@projectinitiative:matrix.orgprojectinitiativeI'll keep exploring through it. I appreciate the patience with my questions16:52:03
15 Feb 2025
@benjb83:matrix.orgBenjB83 joined the room.10:17:34
@benjb83:matrix.orgBenjB83 changed their display name from Benjamín Buske to BenjB83.10:43:13
18 Feb 2025
@projectinitiative:matrix.orgprojectinitiative

Did some more investigating, and I'm running into an issue with sgdisk not creating the partition labels when using the GPT module:

sgdisk --align-end --new=1:0:-0 --partition-guid=1:R --change-name=1:disk-disk3-bcachefs --typecode=1:8300 /dev/vdd

The auto-populated /dev/disk/by-partlabel/disk-disk3-bcachefs path doesn't exist. I was reading some issues on GitHub about a potential length issue, so I reduced the names of the disks to disk1, disk2, and disk3. Still facing this issue. Any insights would be great

15:15:54
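On the length theory: GPT itself stores each partition name in a fixed 72-byte field, i.e. at most 36 UTF-16LE code units, so a quick generic check (not part of disko) can rule label length in or out:

```python
def fits_gpt_name_field(label: str) -> bool:
    """GPT partition names occupy a fixed 72-byte field in the partition
    entry, i.e. at most 36 UTF-16LE code units (fewer printable characters
    if any code point needs a surrogate pair)."""
    return len(label.encode("utf-16-le")) <= 72

print(fits_gpt_name_field("disk-disk3-bcachefs"))  # True: 19 code units
print(fits_gpt_name_field("x" * 37))               # False: over the 36-unit limit
```

Since "disk-disk3-bcachefs" fits comfortably, a missing by-partlabel symlink here points elsewhere (e.g. udev not re-reading the partition table) rather than at the on-disk name field.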
@lassulus:lassul.uslassulusI have a memory of some udev rules being required to create partition labels15:31:12
@lassulus:lassul.uslassulusMaybe that is the problem here15:31:20
@projectinitiative:matrix.orgprojectinitiativeMight be. I tried creating them interactively in the test VM and I couldn't get the labels to be created. Curious if there is a different way to glean the newly created partition. Maybe through the new UUID feature somehow.16:00:01


