| 10 Nov 2022 |
@me:linj.tech | this is the error I get if I remove one disk from a btrfs raid1 on luks | 18:12:04 |
@me:linj.tech | /dev/disk/by-label/luks-2 is the one removed | 18:12:49 |
@me:linj.tech | /dev/disk/by-label/luks-1 is still there | 18:14:19 |
@me:linj.tech | if one of these luks devices is opened, /dev/disk/by-uuid/f93cfbf1-e9b4-46ca-b7cb-6f3fb1554fbb should appear, as shown at the bottom of the screenshot | 18:15:27 |
@me:linj.tech | but dev-disk-by-uuid-f93cfbf1-e9b4-46ca-b7cb-6f3fb1554fbb.device fails | 18:16:06 |
@me:linj.tech | any ideas? | 18:16:10 |
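A rough sketch of the reported situation, as one might reproduce it by hand from a rescue shell (hypothetical commands; the paths and UUID are taken from the messages above, everything else is an assumption):

```
# luks-2 has been pulled; open the surviving member by its label.
cryptsetup open /dev/disk/by-label/luks-1 luks-1

# The btrfs filesystem UUID symlink does appear for the opened member...
ls -l /dev/disk/by-uuid/f93cfbf1-e9b4-46ca-b7cb-6f3fb1554fbb

# ...but the corresponding .device unit never reaches "active (plugged)",
# which is what the boot ends up waiting on:
systemctl status /dev/disk/by-uuid/f93cfbf1-e9b4-46ca-b7cb-6f3fb1554fbb
```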
@elvishjerricco:matrix.org | wait so your root is on a btrfs mirror and you removed one disk? | 18:27:29 |
@elvishjerricco:matrix.org | Does that work on the old initrd? | 18:27:42 |
@me:linj.tech | In reply to @elvishjerricco:matrix.org Does that work on the old initrd? no, the old initrd just dies if one disk is missing. Because of that, I'm trying this systemd one | 18:28:35 |
@elvishjerricco:matrix.org | Ah, yea I wouldn't really expect that to be supported. I was actually looking into this a bit yesterday and the btrfs udev rules shipped with systemd deliberately don't mark the disks as active until all of them are present | 18:29:25 |
@elvishjerricco:matrix.org | and | 18:29:37 |
@elvishjerricco:matrix.org | even if you're not using udev/systemd, the btrfs tools by default don't let you mount degraded | 18:29:56 |
@me:linj.tech | In reply to @elvishjerricco:matrix.org even if you're not using udev/systemd, the btrfs tools by default don't let you mount degraded degraded can be set I think | 18:30:38 |
@elvishjerricco:matrix.org | yea it has to be a mount option | 18:30:45 |
@elvishjerricco:matrix.org | but that doesn't affect the udev rule | 18:30:59 |
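For reference, a minimal sketch of the mount-option side of this, assuming the surviving member is already open as /dev/mapper/luks-1 (that name is an assumption based on the labels above):

```
# Default behaviour: btrfs refuses to mount a raid1 with a missing member;
# the kernel log typically reports the missing devid and an open_ctree failure.
mount /dev/mapper/luks-1 /mnt

# With the degraded option the mount is allowed despite the missing member:
mount -o degraded /dev/mapper/luks-1 /mnt
```

As noted above, this only changes what the kernel will accept at mount time; it has no effect on the udev readiness check, so the .device unit still never shows up as ready.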
@elvishjerricco:matrix.org | I think I see how this works:
https://github.com/systemd/systemd/blob/main/rules.d/64-btrfs.rules.in
Basically there's a udev builtin, "btrfs ready", that checks whether a device is ready to be used as a btrfs FS, i.e. whether all of its partner disks are present. If not, it sets SYSTEMD_READY=0 so that the .device unit doesn't activate. Once one of them reports ready, SYSTEMD_READY is allowed to remain 1, and the rule triggers the other btrfs block devices to re-check, which (presumably) makes them all set SYSTEMD_READY=1 too. If one of those devices backs the device unit for your mount unit, the mount unit can then activate
| 18:31:16 |
@elvishjerricco:matrix.org | ^ That's the message forwarded from what I found yesterday | 18:31:30 |
@elvishjerricco:matrix.org | So the rule basically says "no device is available until all are" | 18:32:28 |
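The core of that linked rule file is only a few lines; paraphrased here (the comments are from the file itself, the udevadm path is templated in the original, and exact wording may differ between systemd versions):

```
# let the kernel know about this btrfs filesystem, and check if it is complete
IMPORT{builtin}="btrfs", "ready $devnode"

# mark the device as not ready to be used by the system
ENV{ID_BTRFS_READY}=="0", ENV{SYSTEMD_READY}="0"

# reconsider pending devices in case when multidevice volume awaits
ENV{ID_BTRFS_READY}=="1", RUN+="udevadm trigger -s block -p ID_BTRFS_READY=0"
```

With one raid1 member missing, that ready check never passes, SYSTEMD_READY stays 0, and the by-uuid .device unit above never becomes active, so anything waiting on it times out.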
@elvishjerricco:matrix.org | dunno if we include that rule in systemd-stage-1 or not | 18:32:43 |
@elvishjerricco:matrix.org | I think we do? | 18:32:45 |
@elvishjerricco:matrix.org | Looks like we do | 18:36:11 |
@me:linj.tech | thanks for the info. | 18:36:28 |
@elvishjerricco:matrix.org | In reply to @me:linj.tech no, the old initrd just dies if one disk is missing. Because of that, I'm trying this systemd one I'd be curious what it looks like when the scripted initrd dies this way | 18:39:04 |
@me:linj.tech | kernel panic | 18:39:29 |
@elvishjerricco:matrix.org | linj: O_O | 18:39:39 |
@me:linj.tech | I can take a screenshot if you want | 18:39:46 |
@elvishjerricco:matrix.org | that would be wonderful | 18:39:52 |