| 9 Nov 2022 |
@uep:matrix.org | which is a reasonable name | 07:08:15 |
@uep:matrix.org | anyway, at least when it appears we can just use it via systemd | 07:20:01 |
@elvishjerricco:matrix.org | Yea. Stage 2 ought to be able to use the cache file in a systemd generator to just generate the device dependencies for an import service, so that probably should have been done years ago | 07:20:53 |
@elvishjerricco:matrix.org | but stage 1 can't have such a cache file | 07:21:03 |
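A rough sketch of the stage-2 generator idea above, in NixOS terms: a systemd generator reads the cached pool configuration with `zdb -C` and writes a drop-in that gives the per-pool import service dependencies on the pool's actual device units. The pool name `tank`, the cache-file path, and the `zdb` output parsing are assumptions for illustration, not tested code, and (as noted above) none of this helps stage 1, which has no cache file.

```nix
{ pkgs, ... }:
{
  # Register a generator; at boot systemd runs it with the generator
  # output directory as $1, and it may drop unit fragments into it.
  systemd.generators."zfs-import-tank-devices" =
    "${pkgs.writeShellScript "zfs-import-tank-devices" ''
      dir="$1"
      mkdir -p "$dir/zfs-import-tank.service.d"
      {
        echo "[Unit]"
        # Pull the vdev paths for the (hypothetical) pool "tank" out of
        # the cached config and turn each into a .device dependency.
        ${pkgs.zfs}/bin/zdb -C -U /etc/zfs/zpool.cache tank \
          | sed -n "s/.*path: '\(\/dev\/[^']*\)'.*/\1/p" \
          | while read -r dev; do
              unit=$(${pkgs.systemd}/bin/systemd-escape -p --suffix=device "$dev")
              echo "Requires=$unit"
              echo "After=$unit"
            done
      } > "$dir/zfs-import-tank.service.d/devices.conf"
    ''}";
}
```

A drop-in like this only adds dependencies; actually dropping the udev-settle ordering would still mean changing how the import unit itself is generated.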
@uep:matrix.org | https://github.com/openzfs/zfs/issues/10891#issuecomment-1226230262 | 07:28:07 |
@uep:matrix.org | this was my next thought! :) | 07:28:16 |
@uep:matrix.org | for the case of the desktop with the slow wait, it's a single nvme device that's clearly loaded. | 07:28:36 |
@uep:matrix.org | maybe a config option, off by default, to remove that dependency, usable for simple cases | 07:29:22 |
@elvishjerricco:matrix.org | uhhh we don't use the upstream zfs import services in nixos. We generate our own import services, one per pool. And while it depends on udev-settle, it shouldn't even exist if it's your root pool | 07:30:15 |
@elvishjerricco:matrix.org | so I dunno what that guy's talking about | 07:30:20 |
@elvishjerricco:matrix.org | but we can't remove it even for simple cases, because we do need to wait for the appropriate device | 07:30:56 |
@elvishjerricco:matrix.org | Now, in my system... I did hax and cheats | 07:31:10 |
@elvishjerricco:matrix.org | And mine actually properly just waits on my nvme | 07:31:18 |
@elvishjerricco:matrix.org | but I had to hard code the device | 07:31:24 |
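A minimal sketch of what that hard-coded-device approach can look like in a NixOS config, assuming a hypothetical non-root pool named `tank` backed by a single vdev at `/dev/nvme0n1p2` (both names invented here, not taken from the chat): the generated import service waits on the concrete device unit instead of udev-settle.

```nix
{ lib, ... }:
{
  systemd.services."zfs-import-tank" = {
    # mkForce replaces the generated ordering (including the wait on
    # systemd-udev-settle.service) with a dependency on the actual disk.
    after = lib.mkForce [ "dev-nvme0n1p2.device" ];
    requires = [ "dev-nvme0n1p2.device" ];
  };
}
```

The trade-off is the one described above: it waits on exactly the right device, but the device path is baked into the configuration by hand.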
@uep:matrix.org | yah | 07:31:33 |
@elvishjerricco:matrix.org | * uhhh we don't use the upstream zfs import services in nixos. We generate our own import services, one per pool. And while it depends on udev-settle, it shouldn't even exist (in stage 2) if it's your root pool | 07:32:11 |
@uep:matrix.org | removing the dep / masking the service is very much the same.. racy, but basically always going to work? | 07:32:15 |
@elvishjerricco:matrix.org | I dunno what you mean | 07:32:30 |
@elvishjerricco:matrix.org | Oh I do have a stopgap idea though | 07:33:00 |
@elvishjerricco:matrix.org | We could add an option like zfs.pools.<name>.devices and make it so that if it's set for a given pool, that pool's import service depends on those device units instead of udev-settle | 07:33:45 |
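A hedged sketch of how that proposed option might look from the user side; no such option exists at this point, the exact option path is a guess, and the by-id device path is invented for illustration.

```nix
{
  # Hypothetical option per the idea above: list the pool's devices
  # explicitly so the generated import service can depend on them.
  boot.zfs.pools.tank.devices = [
    "/dev/disk/by-id/nvme-EXAMPLE-part2"
  ];
  # The intent: when this list is set, zfs-import-tank.service would gain
  # Requires=/After= on the corresponding .device units instead of
  # ordering itself after systemd-udev-settle.service.
}
```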
@uep:matrix.org | nvme is pretty much always going to be there by the time any of this is running, basically... technically no, because the initrd was loaded by grub, but still | 07:34:09 |
@elvishjerricco:matrix.org | yea, I would not be willing to hang my hat on that assumption | 07:34:24 |
@uep:matrix.org | sure, but that's what masking the service does. | 07:35:35 |
@uep:matrix.org | being more specific is good | 07:35:43 |
@elvishjerricco:matrix.org | masking what service? | 07:35:52 |
@uep:matrix.org | settle | 07:36:03 |
@uep:matrix.org | * udev-settle.service | 07:36:14 |
@elvishjerricco:matrix.org | oh, yea masking that will just mean that anything that depends on it is liable to break | 07:36:27 |
@elvishjerricco:matrix.org | and I wouldn't want to do that | 07:36:33 |
@uep:matrix.org | In reply to @elvishjerricco:matrix.org ("We could add an option like zfs.pools.<name>.devices and make it so that if it's set for a given pool, that pool's import service depends on those device units instead of udev-settle"): how does it figure out which of those need to be in stage 1? | 07:36:57 |