| 10 Jun 2025 |
matthewcroughan @fosdem | wipefs/libblkid from util-linux 2.39.4 doesn't deal with zfs signatures properly, but newer versions do, so leftover signatures become visible, which confuses udev and systemd when populating /dev/disk/by-partlabel | 11:55:12 |
matthewcroughan @fosdem | I doubt they'll fix it; it's pretty horrendous though | 11:55:28 |
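A quick way to see the by-partlabel symptom on a given machine is to compare the GPT partition names lsblk reports with the symlinks udev actually created. The following is a minimal diagnostic sketch, not something from the thread; it only reads lsblk's JSON output and the /dev/disk/by-partlabel directory.

```python
#!/usr/bin/env python3
"""Sketch: report GPT partlabels with no /dev/disk/by-partlabel symlink."""
import json
import os
import subprocess

BY_PARTLABEL = "/dev/disk/by-partlabel"

def walk(devices):
    """Yield every device in lsblk's (possibly nested) JSON tree."""
    for dev in devices:
        yield dev
        yield from walk(dev.get("children", []))

def gpt_partlabels():
    """Return {partlabel: device path} for every partition that has a GPT name."""
    out = subprocess.run(
        ["lsblk", "-J", "-o", "PATH,PARTLABEL"],
        check=True, capture_output=True, text=True,
    ).stdout
    return {
        dev["partlabel"]: dev["path"]
        for dev in walk(json.loads(out)["blockdevices"])
        if dev.get("partlabel")
    }

if __name__ == "__main__":
    # Note: udev escapes unusual characters in symlink names, so exotic
    # labels may need extra handling; plain ASCII labels compare directly.
    present = set(os.listdir(BY_PARTLABEL)) if os.path.isdir(BY_PARTLABEL) else set()
    for label, path in gpt_partlabels().items():
        if label not in present:
            print(f"partlabel {label!r} on {path} has no symlink in {BY_PARTLABEL}")
```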
matthewcroughan @fosdem | For me the experience was as follows:
- Use disko (util-linux <2.39.4) months ago to make a zfs root disk
- Use disko (util-linux <2.39.4) later to make a bcachefs root disk on the same disk
- Everything is fine
- Upgrade to NixOS 25.05
- Timeouts mounting at boot, because the /dev/disk/by-partlabel entry for the bcachefs root doesn't exist on NixOS 25.05, but does on 24.11
The reason turned out to be the above.
| 11:58:07 |
matthewcroughan @fosdem | At step 2, wipefs (util-linux 2.3x) did not clear the zfs signatures, but newer wipefs (util-linux 2.4x) will | 11:58:57 |
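For anyone hitting this, a quick way to check whether a partition still carries the stale ZFS labels is to list its signatures with wipefs, which erases nothing unless --all or --offset is given. A minimal sketch, assuming a util-linux new enough to detect the zfs_member leftovers (the 2.4x behaviour described above); run it as root with the device path on the command line.

```python
#!/usr/bin/env python3
"""Sketch: list the filesystem signatures wipefs/libblkid can see on a device."""
import subprocess
import sys

def signatures(device):
    """Return wipefs's report of the signatures on `device` (read-only: no --all/--offset)."""
    result = subprocess.run(
        ["wipefs", device], check=True, capture_output=True, text=True
    )
    return result.stdout

if __name__ == "__main__":
    if len(sys.argv) != 2:
        sys.exit("usage: list-signatures.py /dev/sdXN")
    report = signatures(sys.argv[1])
    print(report, end="")
    if "zfs_member" in report:
        print("note: a stale zfs_member signature is present alongside the "
              "current filesystem -- the situation described above")
```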
Morgan (@numinit) | I wonder if we ought to revert the ZFS detection patch. It's only an issue if you had ZFS on there and switched to another format | 14:00:40 |
Morgan (@numinit) | It's now too good at detecting ZFS | 14:02:04 |
Morgan (@numinit) | Oh, I see you can wipefs --offset | 14:05:08 |
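For reference, `wipefs --offset <bytes> <device>` erases only the signature at that offset, and `--types` can restrict an erase to one signature type. The sketch below is a hypothetical wrapper around that, not anything from the thread: it defaults to a dry run via wipefs's --no-act, and the --for-real switch is the script's own flag, not a wipefs option. wipefs also supports --backup so the bytes it overwrites are saved first.

```python
#!/usr/bin/env python3
"""Sketch: wipe only stale zfs_member signatures from a partition.

Hypothetical wrapper around util-linux wipefs; the device path comes from argv.
Default is a dry run; pass --for-real (this script's own flag) to actually erase.
"""
import subprocess
import sys

def wipe_zfs_signatures(device, dry_run=True):
    """Erase (or, in dry-run mode, only report) zfs_member signatures on `device`."""
    cmd = ["wipefs", "--all", "--types", "zfs_member"]
    if dry_run:
        # --no-act: wipefs does everything except the final write()
        cmd.append("--no-act")
    cmd.append(device)
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    if len(sys.argv) < 2:
        sys.exit("usage: wipe-zfs-sigs.py /dev/disk/by-id/... [--for-real]")
    wipe_zfs_signatures(sys.argv[1], dry_run="--for-real" not in sys.argv[2:])
```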
caraiiwala | Finally got my ZFS config above to work. RAID0 passthrough works. Didn't change anything but rebooted one more time and it worked... 😑 Works on both systems though | 18:38:14 |
caraiiwala | I still have an issue on both systems where they boot into emergency mode. They seem to fail to mount my raid pool's dataset at /mnt/raid, yet when I continue after pressing Enter, everything seems fine. fstab is correct, /mnt/raid exists, and systemctl status mnt-raid.mount shows it as active | 18:40:17 |
caraiiwala | journalctl -xb says "zfs_mount_at() failed: mountpoint or dataset is busy" as the reason | 18:44:59 |
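One common cause of zfs_mount_at() reporting "mountpoint or dataset is busy" at boot is the same dataset being mounted twice: once by zfs-mount.service (because its mountpoint property is a real path) and once by a systemd mount unit generated from fstab; datasets managed through fstab are normally set to mountpoint=legacy. That is an assumption about this setup, not something confirmed in the chat. The sketch below only reports datasets that appear in both places; nothing is hard-coded.

```python
#!/usr/bin/env python3
"""Sketch: find ZFS datasets that both zfs-mount and fstab/systemd would try to mount.

Assumption (not confirmed above): the 'dataset is busy' error comes from a race
between zfs-mount.service and a systemd mount unit generated from /etc/fstab.
"""
import subprocess

def zfs_mountpoints():
    """Return {dataset: mountpoint} from `zfs list`."""
    out = subprocess.run(
        ["zfs", "list", "-H", "-o", "name,mountpoint"],
        check=True, capture_output=True, text=True,
    ).stdout
    return dict(line.split("\t", 1) for line in out.splitlines() if line)

def fstab_zfs_sources(path="/etc/fstab"):
    """Return the fs_spec fields of fstab entries whose filesystem type is zfs."""
    sources = set()
    with open(path) as f:
        for line in f:
            fields = line.split("#", 1)[0].split()
            if len(fields) >= 3 and fields[2] == "zfs":
                sources.add(fields[0])
    return sources

if __name__ == "__main__":
    in_fstab = fstab_zfs_sources()
    for dataset, mountpoint in zfs_mountpoints().items():
        if dataset in in_fstab and mountpoint not in ("legacy", "none"):
            print(f"{dataset}: mountpoint={mountpoint} but also listed in fstab "
                  f"-- both zfs-mount and systemd may try to mount it")
```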
caraiiwala | The journal is also full of refused connections from my desktop and the network router, which is weird. I can connect over ssh | 18:47:34 |
caraiiwala | oh the connection errors are something else nvm | 18:53:46 |
| 11 Jun 2025 |
| elamon joined the room. | 15:15:02 |