!PSmBFWNKoXmlQBzUQf:helsinki-systems.de

Stage 1 systemd

systemd in NixOS's stage 1, replacing the current bash tooling
https://github.com/NixOS/nixpkgs/projects/51



28 May 2023
[21:51:37] ElvishJerricco: After=local-fs.target is implied for services (transitively, anyway)
[21:51:48] ElvishJerricco: so you have to do DefaultDependencies=no to avoid it
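(For context, a minimal sketch of the pattern ElvishJerricco describes, with a hypothetical unit name; the DefaultDependencies line is the point:)

  { ... }:
  {
    boot.initrd.systemd.services.example = {
      # Without this, systemd's default dependencies transitively order
      # the service After=local-fs.target.
      unitConfig.DefaultDependencies = "no";
      wantedBy = [ "initrd.target" ];
    };
  }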
[21:58:24] @hexa:lossy.network:
  # Roll the root dataset back to a blank snapshot on every boot, after
  # the pool is imported but before the root filesystem is mounted.
  boot.initrd.systemd.services.rollback = {
    description = "Rollback ZFS datasets to a pristine state";
    wantedBy = [
      "initrd.target"
    ];
    after = [
      "zfs-import-zroot.service"
    ];
    before = [
      "sysroot.mount"
    ];
    path = with pkgs; [
      zfs
    ];
    # Opt out of default dependencies so the implied ordering after
    # local-fs.target does not apply.
    unitConfig.DefaultDependencies = "no";
    serviceConfig.Type = "oneshot";
    script = ''
      set -ex
      zfs rollback -r zroot/local/root@blank && echo "rollback complete"
    '';
  };
[21:58:35] @hexa:lossy.network: will gladly repost in eternity 🙂 (edited 21:59:15)
[22:02:30] Winter (she/her), in reply to @lily:lily.flowers ("Oh actually, was it just timing out?"):
Maybe? But like... crashing to an emergency shell with no other messages is... not good UX.
[22:02:53] @lily:lily.flowers, in reply to @hexa:lossy.network ("will gladly repost in eternity 🙂"):
I would say you probably also want `after = [ "local-fs-pre.target" ];` for hibernation resume reasons, but it's ZFS so I'm pretty sure resume doesn't work anyway 😛 (edited 22:03:06)
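(Applied to hexa's snippet above, that suggestion would look roughly like this; a sketch, not a tested config:)

  boot.initrd.systemd.services.rollback.after = [
    "zfs-import-zroot.service"
    # Ordering after local-fs-pre.target is meant to let a hibernation
    # resume win the race against the rollback; with ZFS, resume
    # reportedly doesn't work anyway.
    "local-fs-pre.target"
  ];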
[22:03:14] ElvishJerricco, in reply to Winter (she/her) ("Maybe? But like... crashing to an emergency shell with no other messages is... not good UX."):
I mean, what else is there to do? If a critical thing fails, an emergency shell is really the only option
[22:03:24] @hexa:lossy.network: team random-encrypted swap 😛
[22:04:17] @lily:lily.flowers, in reply to @hexa:lossy.network ("team random-encrypted swap 😛"):
I mean the crimes required for hibernate/resume are kinda horrifying tbh. So this is probably the way
[22:04:27] @hexa:lossy.network: agreed
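(The option hexa is alluding to is NixOS's per-swap-device random encryption; a sketch, with a hypothetical device path:)

  { ... }:
  {
    swapDevices = [
      {
        device = "/dev/disk/by-partlabel/swap"; # hypothetical path
        # A fresh random key is generated on every boot, so swap
        # contents never survive a reboot and hibernate/resume is
        # impossible by design.
        randomEncryption.enable = true;
      }
    ];
  }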
[22:04:36] ElvishJerricco, in reply to his own earlier message ("I mean, what else is there to do? If a critical thing fails, an emergency shell is really the only option"):
maybe we could make emergency.target output systemctl status --failed before starting emergency.service?
[22:04:57] @lily:lily.flowers, in reply to ElvishJerricco ("maybe we could make emergency.target output systemctl status --failed before starting emergency.service?"):
Now that sounds like a good idea, actually
[22:05:39] ElvishJerricco: yea we could put it in the ExecStartPre of emergency.service
[22:05:45] ElvishJerricco: since it takes the TTY
[22:05:52] ElvishJerricco: that way we don't have to do silly hacks about it
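(A sketch of that idea as a NixOS initrd override; hypothetical, not what was actually implemented:)

  { ... }:
  {
    boot.initrd.systemd.services.emergency.serviceConfig.ExecStartPre =
      # Print the failed units on the console before the emergency shell
      # takes over the TTY, so the user can see why they landed there.
      # Recent systemd resolves a bare binary name from its search path.
      "systemctl status --failed --no-pager";
  }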
[22:05:55] @hexa:lossy.network:
  zfs-scrub-start[3654385]: cannot open 'zpool': no such pool
[22:05:58] @hexa:lossy.network: lalalala
[22:06:53] @lily:lily.flowers, in reply to ElvishJerricco ("yea we could put it in the ExecStartPre of emergency.service"):
Also I think it may need fixing with plymouth, btw. It only works now because plymouth crashes rather than being told to quit. I had a branch where I did fix that, but never PR'd it because I never PR'd the plymouth update/overhaul branch, which I wanted first, because time and effort and triage
[22:07:30] @lily:lily.flowers: (Also jtojnar never got back to me on it... which is fine, but it did lead to it slipping my mind)
[22:07:38] ElvishJerricco, in reply to @hexa:lossy.network ("zfs-scrub-start[3654385]: cannot open 'zpool': no such pool"):
wait, is it trying to scrub in initrd? We should fix that for sure
[22:07:44] @hexa:lossy.network: nah, unrelated
[22:07:54] @hexa:lossy.network: just executed systemctl status --failed and noticed the failure 😄
[22:07:59] ElvishJerricco: Lily Foster: And yea, there's probably a bunch of plymouth stuff to do...
[22:09:25] @lily:lily.flowers, in reply to ElvishJerricco ("Lily Foster: And yea, there's probably a bunch of plymouth stuff to do..."):
I've done a lot, but I guess I never PR'd it. I think I was also still trying to muck around with luks in a nixos-rebuild build-vm for testing too
[22:10:28] @lily:lily.flowers: I'll make sure it's up-to-date and still working later in the week and open it up + ping you on it if you want (also jan, again)
[22:10:41] ElvishJerricco: sounds good
29 May 2023
[01:24:24] Winter (she/her), in reply to @lily:lily.flowers ("I use initrd-root-device.target and initrd-root-fs.target and sysroot.mount for ordering"):
why that ordering (well, when it works) or hexa's, over mine? is there something wrong with mine? (i have no idea what i'm doing)
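(The ordering Lily is quoted with, spelled out as a sketch; how the targets map onto after/before here is an assumption:)

  boot.initrd.systemd.services.rollback = {
    # The pool/device should exist once initrd-root-device.target is up...
    after = [ "initrd-root-device.target" ];
    # ...and the rollback has to finish before the root fs is mounted.
    before = [ "initrd-root-fs.target" "sysroot.mount" ];
  };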
[01:25:00] Winter (she/her): also
