12 Oct 2024 |
hexa | it even unlocks all five disks | 21:53:41 |
ElvishJerricco | If you check the journal, do you see any messages from systemd about unit ordering cycles? | 21:53:53 |
ElvishJerricco | (also check dmesg, because I think in worst case scenarios it won't be in the journal or something like that) | 21:54:17 |
hexa | [root@meduna:~]# journalctl -b 0 --grep=ordering
-- No entries --
| 21:54:34 |
hexa | dmesg also empty | 21:54:36 |
ElvishJerricco | try searching for cycle instead | 21:54:47 |
hexa | Oct 12 18:45:39 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Oct 12 18:45:39 localhost kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484873504 ns
Oct 12 18:45:39 localhost kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x6a7926cdfc3, max_idle_ns: 881590482831 ns
Oct 12 18:45:39 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 12 18:45:39 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Oct 12 18:45:41 localhost kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x6a7777116fa, max_idle_ns: 881590883556 ns
Oct 12 18:46:41 meduna kanidmd[2655]: faee6ba6-2f66-4bdc-8695-6040d380f00c INFO handle_purgerecycledevent [ 149µs | 100.00% ]
Oct 12 18:46:41 meduna kanidmd[2655]: 083d0a60-cd55-480e-a782-79527b9c1153 INFO handle_purgerecycledevent [ 152µs | 100.00% ]
Oct 12 18:56:41 meduna kanidmd[2655]: 8db836ac-6f1e-4b60-a98d-bd52c1f69f6c INFO handle_purgerecycledevent [ 124µs | 100.00% ]
Oct 12 19:06:41 meduna kanidmd[2655]: dfe9b086-00c8-49d6-91fe-555d7d214223 INFO handle_purgerecycledevent [ 161µs | 100.00% ]
Oct 12 19:16:41 meduna kanidmd[2655]: df7b6f4b-0d83-45c1-a4f2-a970a5229854 INFO handle_purgerecycledevent [ 173µs | 100.00% ]
Oct 12 19:26:41 meduna kanidmd[2655]: 0ede5aad-f1b2-40b7-a3c2-2a86ee0e89e4 INFO handle_purgerecycledevent [ 162µs | 100.00% ]
Oct 12 19:36:41 meduna kanidmd[2655]: 02b021db-7f6d-4857-974b-78562a1a528e INFO handle_purgerecycledevent [ 108µs | 100.00% ]
Oct 12 19:46:41 meduna kanidmd[2655]: 13d4c89b-983d-4cc0-9978-22b5d695e61e INFO handle_purgerecycledevent [ 192µs | 100.00% ]
Oct 12 19:56:41 meduna kanidmd[2655]: baab4fd5-f79f-4383-841d-8c501f7c981c INFO handle_purgerecycledevent [ 104µs | 100.00% ]
Oct 12 20:06:41 meduna kanidmd[2655]: aa9af569-090c-4632-8bae-f6e479884935 INFO handle_purgerecycledevent [ 134µs | 100.00% ]
Oct 12 20:16:41 meduna kanidmd[2655]: c50f4356-a52d-45a2-ac5f-6df71fd91dc4 INFO handle_purgerecycledevent [ 132µs | 100.00% ]
Oct 12 20:26:41 meduna kanidmd[2655]: bb4ab096-2616-4af0-b87c-d2c25b3857d6 INFO handle_purgerecycledevent [ 104µs | 100.00% ]
Oct 12 20:36:41 meduna kanidmd[2655]: 4ace62a0-b2cd-404a-9d18-a64e1bda1170 INFO handle_purgerecycledevent [ 104µs | 100.00% ]
Oct 12 20:46:41 meduna kanidmd[2655]: 4b8b5bec-457d-40b6-8195-69a1cefc49f9 INFO handle_purgerecycledevent [ 141µs | 100.00% ]
Oct 12 20:56:41 meduna kanidmd[2655]: 4afb7012-9fe2-4dc1-baee-6e0075ef1217 INFO handle_purgerecycledevent [ 110µs | 100.00% ]
Oct 12 21:06:41 meduna kanidmd[2655]: c473359c-9bf7-4490-8607-89b5ee7bfda7 INFO handle_purgerecycledevent [ 135µs | 100.00% ]
Oct 12 21:16:41 meduna kanidmd[2655]: 5a5e145e-14a3-44d1-9b63-6d74af4f5794 INFO handle_purgerecycledevent [ 114µs | 100.00% ]
Oct 12 21:26:41 meduna kanidmd[2655]: 4872735f-cfea-4c95-994d-369da3f0a9fa INFO handle_purgerecycledevent [ 148µs | 100.00% ]
Oct 12 21:36:41 meduna kanidmd[2655]: 9a3c3fc6-415d-47ea-b4a6-e8343ac70078 INFO handle_purgerecycledevent [ 110µs | 100.00% ]
Oct 12 21:46:41 meduna kanidmd[2655]: f87773d8-68a8-47d3-b555-19c52852b239 INFO handle_purgerecycledevent [ 86.6µs | 100.00% ]
| 21:54:57 |
hexa | 🙂 | 21:54:59 |
ElvishJerricco | yea nothing from systemd | 21:55:10 |
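The search the two of them just walked through can be sketched as a small script. This is a sketch, not part of the log: it assumes a systemd-based boot, and relies on the fact that systemd logs lines like "Found ordering cycle on <unit>/start" even without debug logging enabled.

```shell
#!/bin/sh
# Sketch: where systemd's ordering-cycle reports can land.

search_cycles() {
    # Journal for the current boot (empty output = no cycle recorded):
    journalctl -b 0 --grep='ordering cycle' --no-pager 2>/dev/null
    # Kernel ring buffer, in case the journal wasn't writable yet
    # (the worst-case scenario mentioned above):
    dmesg 2>/dev/null | grep -i 'ordering cycle'
    # Empty results are not an error for this sketch:
    return 0
}

search_cycles
```

As in the log above, no output from either source means systemd never reported a cycle, so the hang has another cause.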
hexa | let me post the whole initrd log | 21:55:25 |
ElvishJerricco | mjm: sorry to bother you on something unrelated to you, but you're my current reference for someone that can vouch for systemd logging cycles even without debug logging, right? | 21:55:45 |
hexa | https://paste.lossy.network/raw/NOPRNVVTOZFRKJIGUCPYOJ4KYY | 21:56:24 |
mjm | yeah, i had no debug logging on and was finding messages about ordering cycles in my journal. less certain about the word "cycle" but "ordering" was definitely there | 21:56:56 |
ElvishJerricco | In reply to @hexa:lossy.network https://paste.lossy.network/raw/NOPRNVVTOZFRKJIGUCPYOJ4KYY yea everything there seems just fine except for these two lines:
Oct 12 18:46:14 localhost systemd-tty-ask-password-agent[269]: Invalid password file /run/systemd/ask-password/ask.y3lcd7
Oct 12 18:46:14 localhost systemd-tty-ask-password-agent[269]: Failed to process password: Bad message
| 22:00:16 |
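An "Invalid password file" error means the agent found a request file under /run/systemd/ask-password it could not parse. As a sketch (the directory path and file layout follow systemd's password-agent convention: INI files named `ask.XXXXXX` with an `[Ask]` section), the pending requests can be dumped like this:

```shell
#!/bin/sh
# Sketch: dump pending password-agent request files so a malformed
# one can be spotted. Directory defaults to systemd's standard path.

dump_ask_requests() {
    dir=${1:-/run/systemd/ask-password}
    for f in "$dir"/ask.*; do
        [ -e "$f" ] || continue          # glob matched nothing
        echo "== $f =="
        cat "$f"                         # INI: [Ask] Socket=, Message=, ...
    done
    return 0
}

dump_ask_requests
```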
ElvishJerricco | btw is this 24.05? I don't think the hostname should be localhost on unstable nowadays | 22:01:10 |
ElvishJerricco | oh, for sure it is because you said your systemd version was 255 | 22:01:40 |
ElvishJerricco | In reply to @hexa:lossy.network
-bash-5.2# systemctl default
🔐 Please enter passphrase for disk Linux filesystem (juno):
A dependency job for initrd.target failed. See 'journalctl -xe' for details.
-bash-5.2# systemd-tty-ask-password-agent
-bash-5.2# reboot
Failed to connect to bus: Connection refused
-bash-5.2#
Read from remote host unlock.juno.lossy.network: Connection reset by peer
Connection to unlock.juno.lossy.network closed.
When you see this "A dependency job for initrd.target failed" message, can you run systemctl --failed before anything else? | 22:05:17 |
hexa | while in initrd? | 22:06:04 |
ElvishJerricco | yea | 22:06:09 |
ElvishJerricco | If it's saying a job failed, I want to know what job | 22:06:18 |
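The triage being asked for here can be sketched as follows. It assumes a systemd-based stage-1 initrd where `systemctl` and `journalctl` are available in the emergency shell:

```shell
#!/bin/sh
# Sketch: after "A dependency job for initrd.target failed",
# capture the failed units before doing anything else.

show_failed() {
    # Which units actually failed:
    systemctl --failed --no-pager 2>/dev/null
    # For each failed unit, its log tail usually explains why, e.g.:
    #   journalctl -b 0 -u <unit> --no-pager | tail -n 20
    return 0
}

show_failed
```

Running this before `systemd-tty-ask-password-agent` or `reboot` matters because, as the log shows, the failure did not reproduce on the next boot.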
hexa | ok, let me reboot real quick | 22:06:51 |
hexa | that was a failed password fwiw | 22:07:11 |
ElvishJerricco | oh, so it's unrelated? | 22:07:23 |
hexa | mh wait no, it wasn't 🤔 | 22:07:52 |
hexa | on a failed password it immediately asks again | 22:08:05 |
hexa | that didn't happen earlier | 22:08:10 |
hexa | but this time it just went through cleanly | 22:08:19 |
hexa | I'll try to remember to grab the failed units the next time I see it | 22:08:39 |
ElvishJerricco | hm interesting | 22:09:40 |
ElvishJerricco | yea I would have expected it to ask again immediately | 22:09:48 |