| 9 Nov 2022 |
@elvishjerricco:matrix.org | We could add an option like zfs.pools.<name>.devices and make it so that if it's set for a given pool, that pool's import service depends on those device units instead of udev-settle | 07:33:45 |
@uep:matrix.org | nvme is pretty much always going to be there by the time any of this is running, basically... technically no, because the initrd was loaded by grub, but still | 07:34:09 |
@elvishjerricco:matrix.org | yea, I would not be willing to hang my hat on that assumption | 07:34:24 |
@uep:matrix.org | sure, but that's what masking the service does. | 07:35:35 |
@uep:matrix.org | being more specific is good | 07:35:43 |
@elvishjerricco:matrix.org | masking what service? | 07:35:52 |
@uep:matrix.org | settle | 07:36:03 |
@uep:matrix.org | * udev-settle.service | 07:36:14 |
@elvishjerricco:matrix.org | oh, yea masking that will just mean that anything that depends on it is liable to break | 07:36:27 |
@elvishjerricco:matrix.org | and I wouldn't want to do that | 07:36:33 |
@uep:matrix.org | In reply to @elvishjerricco:matrix.org We could add an option like zfs.pools.<name>.devices and make it so that if it's set for a given pool, that pool's import service depends on those device units instead of udev-settle how does it figure out which of those need to be in stage 1? | 07:36:57 |
@elvishjerricco:matrix.org | It literally just does a dumb check: it looks at your fileSystems with ZFS FSes and checks if any of them are neededForBoot | 07:37:23 |
@elvishjerricco:matrix.org | Then the appropriate pools are done in stage 1 instead of stage 2 | 07:37:34 |
@uep:matrix.org | realistically you'd only care to specify those for boot timing-sensitive ones, but still | 07:37:44 |
@uep:matrix.org | oh, sure. | 07:37:46 |
@uep:matrix.org | I think that would work great, and it's really only an optimisation anyway | 07:38:33 |
@elvishjerricco:matrix.org | but a zfs.pools.<name> option sounds like a good idea generally anyway. Like we could specify extra pools that need to be done in stage 1 for whatever reason; we could specify device units to depend on; we could specify whether it should be automounted; etc. | 07:38:49 |
@uep:matrix.org | tiny rpool device and nix store on something else, perhaps.. | 07:42:39 |
@uep:matrix.org | rare, somewhat contrived, but moderately plausible scenarios | 07:43:01 |
@elvishjerricco:matrix.org | well nixos would still detect that | 07:43:11 |
@elvishjerricco:matrix.org | it detects all pools with fileSystems that are neededForBoot, which of course includes the one containing /nix/store | 07:43:37 |
@uep:matrix.org | because of the neededForBoot, right. | 07:43:44 |
@uep:matrix.org | I guess /var/log is also there, which was my other scenario, but I'm sure there are more | 07:44:28 |
@uep:matrix.org | oh, an even better one. again with the multi-host scenario: don't actually boot unless you can import the whatever-active-data pool | 07:46:14 |
@uep:matrix.org | In reply to @elvishjerricco:matrix.org but a zfs.pools.<name> option sounds like a good idea generally anyway. Like we could specify extra pools that need to be done in stage 1 for whatever reason; we could specify device units to depend on; we could specify whether it should be automounted; etc. different scrub options / schedules per pool | 07:49:51 |
@elvishjerricco:matrix.org | yep, very good idea | 07:50:13 |
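A hypothetical sketch of what the `zfs.pools.<name>` option set discussed above might look like. None of these attribute names exist in nixpkgs at the time of this conversation; they only illustrate the ideas raised in the thread (per-pool device units, forcing stage-1 import, per-pool scrub schedules):

```nix
# Hypothetical interface, not an existing nixpkgs option set.
{
  boot.zfs.pools.rpool = {
    # Pool-specific device units to wait for instead of depending on
    # systemd-udev-settle.service:
    devices = [ "/dev/disk/by-id/nvme-..." ];
    # Force import in stage 1 even when no fileSystems entry on this pool
    # is neededForBoot (the "don't boot unless the data pool imports" case):
    importInInitrd = true;
    # Per-pool scrub schedule instead of one global services.zfs.autoScrub:
    scrubInterval = "monthly";
  };
}
```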
| 10 Nov 2022 |
Paul Haerle | IIUC, only openvpn-in-initrd is blocking https://github.com/NixOS/nixpkgs/pull/169116 ? Is that deemed important for backwards compatibility? | 09:46:35 |
Paul Haerle | Not sure how many people are connecting to openvpn networks from their initrd; I personally don't. But if that's all that is needed for a merge, I'd be willing to invest a day or so into that project :) | 09:49:03 |
@me:linj.tech | How can I get the log when stage 1 fails? | 17:45:30 |
@me:linj.tech | (image attachment: image.png) | 17:46:00 |
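One way to capture stage-1 logs, sketched under the assumption that switching to the systemd-based initrd is acceptable: with systemd in stage 1, early-boot messages land in the journal rather than only scrolling past on the console.

```nix
# Hedged sketch, not from the discussion above.
{
  boot.initrd.systemd.enable = true;
}
# After a failed (or successful) boot, stage-1 messages can be read with:
#   journalctl -b        # current boot
#   journalctl -b -1     # previous boot, if the journal is persistent
# With the default scripted initrd, the usual approach is instead to add
# the kernel parameters boot.shell_on_fail and boot.trace at the
# bootloader prompt and read the console output.
```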