| 11 Feb 2025 |
projectinitiative | I will note I also tried the zfs-with-vdevs example, adding it to the test machine config, and it likewise only has one disk listed under dependencies despite several being listed under zroot in the example file | 14:24:54 |
lassulus | hmm, maybe there is indeed a bug | 14:25:56 |
lassulus | which didn't get triggered yet, but I thought the complex example would take care of that | 14:26:17 |
projectinitiative | reproduction steps:
nixosConfigurations.testmachine = lib.nixosSystem {
  system = "x86_64-linux";
  modules = [
    ./tests/disko-install/configuration.nix
    # ./example/hybrid.nix
    ./module.nix
    # ./example/bcachefs-multi-disk.nix
    # ./example/mdadm.nix
    ./example/zfs-with-vdevs.nix
  ];
};
- add different examples to load and play around with toggling various configs
- load the flake into nix repl
- look at some of the _meta.deviceDependencies
| 14:27:59 |
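A sketch of the corresponding repl session (assuming the flake lives in the current directory; :lf loads the flake and :p prints the value strictly):
nix-repl> :lf .
nix-repl> :p nixosConfigurations.testmachine.config.disko.devices._meta.deviceDependencies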
lassulus | I will try to add that to my thaigersprint todo list tomorrow :D | 14:28:16 |
projectinitiative | Obligatory: I wish nix had better testing/debugging tools; this is making it take way longer to nail down my overall issue haha... | 14:30:10 |
shift | In reply to @lassulus:lassul.us I will try to add that to my thaigersprint todo list tomorrow :D You want issues for your sprint? Haha | 20:09:11 |
| 12 Feb 2025 |
lassulus | You have more? :D | 01:38:19 |
projectinitiative | In reply to @lassulus:lassul.us I will try to add that to my thaigersprint todo list tomorrow :D I looked up what thaigersprint was, that seems unique. Talk about a cool experience | 03:48:41 |
lassulus | ok, deepMergeMap actually doesn't merge lists, since recursiveUpdate doesn't take care of that | 05:31:21 |
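A minimal repl illustration of the behavior lassulus describes (assuming nixpkgs lib is in scope; lib.recursiveUpdate only recurses into attribute sets, so a list on the right-hand side replaces the one on the left wholesale):
nix-repl> :p lib.recursiveUpdate { a = [ 1 ]; b = { c = 1; }; } { a = [ 2 ]; b = { d = 2; }; }
which evaluates to { a = [ 2 ]; b = { c = 1; d = 2; }; } — the lists are not concatenated.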
lassulus | projectinitiative: https://github.com/nix-community/disko/pull/963 | 10:21:50 |
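The PR above is the actual fix; purely as an illustration of the idea, a recursive merge that concatenates lists instead of replacing them could be sketched with builtins only (hypothetical helper, not disko's code):
deepMerge = lhs: rhs:
  if builtins.isAttrs lhs && builtins.isAttrs rhs then
    # keys only in lhs are kept; shared keys are merged recursively
    lhs // builtins.mapAttrs
      (name: rv: if lhs ? ${name} then deepMerge lhs.${name} rv else rv)
      rhs
  else if builtins.isList lhs && builtins.isList rhs then
    lhs ++ rhs  # concatenate lists instead of letting rhs win
  else
    rhs;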
projectinitiative | looks like this fixed the deviceDependencies list not showing up! I now just have to figure out why my type ordering is not working correctly. | 14:49:36 |
lassulus | If you have a PR with the current state I could take a look tomorrowish | 15:17:11 |
Sávio | Thanks for the comments on the binfmt PR. I'm still working on it | 17:27:57 |
projectinitiative | Current state of my attempt:
https://github.com/nix-community/disko/pull/961 | 18:42:48 |
| 13 Feb 2025 |
projectinitiative | sorted = lib.sortDevicesByDependencies nixosConfigurations.testmachine.config.disko.devices._meta.deviceDependencies ({ bcachefs = nixosConfigurations.testmachine.config.disko.devices.bcachefs; mdadm = nixosConfigurations.testmachine.config.disko.devices.mdadm; zpool = nixosConfigurations.testmachine.config.disko.devices.zpool; disk = nixosConfigurations.testmachine.config.disko.devices.disk; })
nix-repl> :p sorted
[
  [ "bcachefs" "pool1" ]
  [ "disk" "bcachefsdisk1" ]
  [ "disk" "bcachefsdisk2" ]
  [ "disk" "bcachefsmain" ]
  [ "disk" "cache" ]
  [ "disk" "data1" ]
  [ "disk" "data2" ]
  [ "disk" "data3" ]
  [ "disk" "dedup1" ]
  [ "disk" "dedup2" ]
  [ "disk" "dedup3" ]
  [ "disk" "disk1" ]
  [ "disk" "disk2" ]
  [ "disk" "log1" ]
  [ "disk" "log2" ]
  [ "disk" "log3" ]
  [ "disk" "spare" ]
  [ "disk" "special1" ]
  [ "disk" "special2" ]
  [ "disk" "special3" ]
  [ "mdadm" "raid1" ]
  [ "zpool" "zroot" ]
]
| 22:33:28 |
projectinitiative | lassulus I think I discovered another bug. I fixed my issue by renaming my type to "dxcachefs" instead of "bcachefs". In the dependency sort list, if there are any types that sort alphabetically before the "disk" type, the sortDeviceDependencies function drops those at the top, so my pool object was ending up as the very first entry in the _create functions. I don't think this was caught before because mdadm and zpool always appear at the end of the device dependency list due to their position in the alphabet. I think the function will need a specific check to always put disk device types at the beginning of the list | 22:34:14 |
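A rough sketch of the guard being suggested — hypothetical, not disko's actual sortDeviceDependencies — that pins all [ "disk" ... ] entries to the front of an already-sorted device list:
disksFirst = sorted:
  let
    # split the [ type name ] pairs into disks and everything else
    parts = builtins.partition (dev: builtins.head dev == "disk") sorted;
  in
  parts.right ++ parts.wrong;  # disks first, everything else keeps its relative order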
projectinitiative | Made a PR with the function changes that fixed my other problem:
https://github.com/nix-community/disko/pull/968 | 22:45:49 |
| 14 Feb 2025 |
lassulus | hmm, weird that this bug appears, I thought the bcachefs devices would depend on the disks and would be created afterwards automatically by the sorting | 03:56:01 |
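For reference, if the dependency map has roughly the shape below (hypothetical entries, inferred from the sorted output above), the pool would only sort after its disks when its entry actually lists them:
_meta.deviceDependencies = {
  bcachefs.pool1 = [ [ "disk" "bcachefsdisk1" ] [ "disk" "bcachefsdisk2" ] ];
  mdadm.raid1 = [ [ "disk" "disk1" ] [ "disk" "disk2" ] ];
};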
lassulus | I guess I have to take a deeper look again once my hangover has passed | 03:56:11 |
projectinitiative | Another oddity: I am not sure the GPT formatter is respecting the device paths. When running the test VM against the mdadm config, I would have assumed it would fail, since the example file doesn't have a valid device path (/dev/my-disk). Checking the logs, it just looks like it picks the next disk in the sequence:
machine # [ 62.117159] vdb: vdb1 vdb2
machine # [ 62.244857] systemd[1]: Finished Virtual Console Setup.
machine # [ 63.165182] vdb: vdb1 vdb2
machine # + partprobe /dev/vdb
machine # + udevadm trigger --subsystem-match=block
machine # + udevadm settle
machine # + device=/dev/disk/by-partlabel/disk-disk1-mdadm
machine # + name=raid1
machine # + type=mdraid
machine # + echo /dev/disk/by-partlabel/disk-disk1-mdadm
machine # + cat /tmp/tmp.Z9w0tCq6Ai/raid_raid1
machine # + device=/dev/vdc
machine # + imageName=disk2
machine # + imageSize=2G
machine # + name=disk2
machine # + type=disk
machine # + device=/dev/vdc
machine # + efiGptPartitionFirst=1
machine # + type=gpt
machine # + blkid /dev/vdc
machine # /dev/vdc: PTUUID="c27c8535-3a4b-4d3d-99a6-bd5b541b52a1" PTTYPE="gpt"
{
  disko.devices = {
    disk = {
      disk1 = {
        type = "disk";
        device = "/dev/my-disk";
        content = {
          type = "gpt";
          partitions = {
            boot = {
              size = "1M";
              type = "EF02"; # for grub MBR
            };
            mdadm = {
              size = "100%";
              content = {
                type = "mdraid";
                name = "raid1";
              };
            };
          };
        };
      };
      disk2 = {
        type = "disk";
        device = "/dev/my-disk2";
        content = {
          type = "gpt";
          partitions = {
            boot = {
              size = "1M";
              type = "EF02"; # for grub MBR
            };
            mdadm = {
              size = "100%";
              content = {
                type = "mdraid";
                name = "raid1";
              };
            };
          };
        };
      };
    };
    mdadm = {
      raid1 = {
        type = "mdadm";
        level = 1;
        content = {
          type = "gpt";
          partitions = {
            primary = {
              size = "100%";
              content = {
                type = "filesystem";
                format = "ext4";
                mountpoint = "/";
              };
            };
          };
        };
      };
    };
  };
}
I tested this by changing the path to /dev/disk/by-path/virtio-pci-0000:00:0a.0 (which maps to vdc: lrwxrwxrwx 1 root root 9 Feb 14 05:33 virtio-pci-0000:00:0a.0 -> ../../vdc), yet it completely ignores that. Maybe I am not understanding this correctly O.o
| 05:49:13 |
projectinitiative | Figured out that the prepareDiskoConfig function in the lib/tests framework was overriding the device property for tests. Curious why that is a better option than just using the virtio path; it creates some confusion and technically isn't validating all aspects of the pathing | 16:03:56 |
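A guess at what that override looks like — a hypothetical sketch, not disko's actual prepareDiskoConfig, but consistent with the log above where disk2 lands on /dev/vdc despite the config saying /dev/my-disk2 — rewriting each disk's device attribute to the VM's virtual disks in (alphabetical) order:
rewriteDevices = cfg: testDevices:  # e.g. testDevices = [ "/dev/vdb" "/dev/vdc" ]
  cfg // {
    disk = lib.listToAttrs (lib.imap0
      (i: name: {
        inherit name;
        # replace whatever device the config declared with the i-th test disk
        value = cfg.disk.${name} // { device = builtins.elemAt testDevices i; };
      })
      (builtins.attrNames cfg.disk));
  };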
lassulus | Because we want to test real disko configs in VMs and not just configs crafted for VMs | 16:18:29 |
projectinitiative | right, but if the config is mutated to fit a VM, isn't that essentially the same thing? | 16:19:26 |