| 21 Feb 2025 |
adamcstephens | i just can't isolate this and it's getting tiring. last couple days i've woken up to all instances stopped 😢 | 17:03:23 |
adamcstephens | wondering if this chatter is related https://github.com/openzfs/zfs/pull/16770#issuecomment-2670403317 | 17:15:23 |
adamcstephens | though that PR was only added in 2.2.7, and i first experienced the problem in 2.2.6 or before. unfortunately, this pool was built using 2.2+, so i can't downgrade to 2.1 | 17:18:16 |
adamcstephens | i'll have to roll back point releases, which is less fun | 17:18:28 |
adamcstephens | ok, trying 2.2.5 | 17:34:55 |
@rungmc:matrix.org | 2nd gen Epyc would be the machine I'd probably actually spot it on. Couldn't manage to force the issue yesterday after deciding it was a good time to bump Kanidm. | 17:45:55 |
@rungmc:matrix.org | Also have a Ryzen 9 and a whole mess of Intel systems in various states of functioning if you want to just take a shotgun approach. | 17:50:29 |
adamcstephens | so the kernel stack trace i have is similar to the issue i just pointed to | 17:50:36 |
adamcstephens | i'm trying to go back through my old logs and see if there's a different one that may have been happening prior to the introduction of that in 2.2.7 | 17:50:59 |
adamcstephens | unfortunately i only turned loki up 2 months ago | 17:56:22 |
adamcstephens | So this is impacting my two Ryzen systems. 3700x and 5900x. an intel 9th gen is not impacted | 18:02:18 |
adamcstephens | the two ryzens are mirrored zpools, the intel is a single disk | 18:02:38 |
adamcstephens | all running on top of luks | 18:03:03 |
adamcstephens | i can't figure out a reproducer outside of the two affected systems | 18:04:14
adamcstephens | i even built a VM last night, with a mirrored zpool, but nope it works fine. | 18:05:17 |
@rungmc:matrix.org | My project from about 3 months ago was to hand more things over to ZFS proper... ditch legacy mounts and give native encryption a serious whirl (obviously at the annoying cost of send/receive). I somewhat doubt encryption is going to be a contributing factor, though. Incus is also mostly running off faster storage on my big systems which may matter. | 18:09:00 |
adamcstephens | yeah, i only use legacy mounts for the things they're required for | 18:14:05
adamcstephens | ugh, downgrading zfs is not as clear as i thought it'd be | 18:21:35 |
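(For context: on NixOS, pinning ZFS to an older release is typically done through the `boot.zfs.package` option. The fragment below is a hypothetical sketch of that approach, not the exact configuration used here; the `overrideAttrs` pin to 2.2.5 and its source hash are assumptions that would need filling in.)

```nix
# Hypothetical sketch: pin the ZFS package to an older 2.2.x release on NixOS.
# boot.zfs.package is the standard NixOS option; the overrideAttrs pin is an
# assumption, and the source hash must be supplied for a real build.
{ pkgs, lib, ... }:
{
  boot.zfs.package = pkgs.zfs_2_2.overrideAttrs (old: rec {
    version = "2.2.5";
    src = pkgs.fetchFromGitHub {
      owner = "openzfs";
      repo = "zfs";
      rev = "zfs-${version}";
      hash = lib.fakeHash; # replace with the real hash reported on first build
    };
  });
}
```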
adamcstephens | yesterday one of my hosts immediately fell over running incus-benchmark. on 2.2.5 that same host is working as expected | 19:33:37 |
adamcstephens | for some reason switch-to-configuration and rebooting containers are causing problems, so i'll need to wait for the next 24.11 channel bump to get enough data | 19:34:16
| 2 Mar 2025 |
adamcstephens | I wish upstream would release patch versions instead of this... https://github.com/zabbly/incus/commit/a08a58c3d876655c464f0777c2525c8b952ca926 | 14:42:04 |
adamcstephens | i don't know of any ceph users of our package, but the patch fixing it is applied here anyway. https://github.com/NixOS/nixpkgs/pull/386395 | 14:59:41
| 3 Mar 2025 |
adamcstephens | i feel confident saying that downgrading zfs to 2.2.5 has resolved my container restart hangs | 17:23:24 |
| 4 Mar 2025 |
adamcstephens | merged 6.10.1 and ui 0.15.0 https://github.com/NixOS/nixpkgs/pull/386395 | 00:23:46 |
adamcstephens | will backport incus after it hits unstable. the ui has too many changes to do so. | 00:24:27 |
hexa | oh cute | 16:30:58 |
hexa | now they support vrfs for routing | 16:31:03 |