OfBorg | 172 Members | 64 Servers
Number of builds and evals in queue: <TBD>
| Message | Time |
|---|---|
| 26 Oct 2021 | |
| So when is it planned to fix ofborg? | 07:16:57 |
| Sorry all -- ofborg has been fixed now :) | 21:35:00 |
| What was the problem? | 21:47:00 |
| I kinda explained it over in the infra channel: https://matrix.to/#/!RROtHmAaQIkiJzJZZE:nixos.org/$aiFB6GoYIjxds1GSr8kcDxaqYjYciDDn1Dsr_Nk-YBU?via=nixos.org&via=matrix.org&via=fairydust.space | 21:48:17 |
| Basically, just free space issues (in addition to some of the other evaluators falling over) | 21:48:51 |
| I have plans to try and address the underlying issue (which is inodes-related and not exactly just "free disk space" related) | 21:49:28 |
| but we'll see how that works out :P | 21:49:36 |
| Oh wow, we have only one Darwin builder? o.O | 21:49:55 |
| Yep ^^; | 21:50:06 |
| Darwin is expensive | 21:50:14 |
| Lol | 22:37:28 |
| I doubt that would get any traction, but I can't deny that it would be nice... | 22:37:48 |
| In reply to @piegames:matrix.org: "It is a constant pain point holding back core updates and is severely lacking maintainers, machinery and knowledge." | 22:43:36 |
| 27 Oct 2021 | |
| In reply to @sandro:supersandro.de: Feel free https://md.darmstadt.ccc.de/ErrTQBYpRICwMCzvvo16Yw | 14:12:09 |
| In reply to @piegames:matrix.org: I added my favorite story when I broke Darwin stdenv by enabling brotli support in curl | 14:45:53 |
| Can anyone help with this weird ofborg issue? It seems to be losing connections to some of the aarch64 builders. I've written down what I've seen at https://github.com/NixOS/ofborg/issues/581 | 19:11:38 |
| Does the OfBorg code use a panic handler somehow? I strongly recommend this for multi-threaded applications where it is desired that one panicking thread ends the whole application | 19:13:38 |
| I don't think so | 19:22:14 |
| but yeah basically it'd be great if a panic in any thread just bonked the whole thing over | 19:22:31 |
| it looks like it isn't actually panicking though | 19:25:43 |
| 3 Nov 2021 | |
| I'm currently redeploying all the evaluators to run on ZFS, which should hopefully prevent the recent space issues (inode related) from reappearing. If you see any problems, please ping me directly either here or in any related issues / PRs. | 17:41:06 |
| Just following up: it is done! All the evaluators are running off of ZFS now! Originally, the machines were run off of EXT4. For years, it seems like this has worked, but EXT4 has a limited amount of inodes (I believe it's somewhere around 4 …). Enter ZFS: ZFS gives you unlimited inodes (in that it doesn't have a fixed limit). Then, in order to actually get the machines running ZFS, I had to do a few … For the moment, things seem to be working just fine! Fingers crossed, no more space issues! | 22:10:31 |
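The panic-handler suggestion from 27 Oct (19:13) is the standard Rust pattern of chaining a custom panic hook onto the default one, so that a panic in any worker thread takes the whole process down instead of silently killing just that thread. A minimal sketch, not taken from the ofborg codebase (the hook name and the simulated worker are illustrative):

```rust
use std::panic;
use std::process;
use std::thread;

fn install_exit_on_panic_hook() {
    // Keep the default hook so the panic message and backtrace still print,
    // then take the whole process down instead of just the panicking thread.
    let default_hook = panic::take_hook();
    panic::set_hook(Box::new(move |info| {
        default_hook(info);
        process::exit(101);
    }));
}

fn main() {
    install_exit_on_panic_hook();

    // Hypothetical worker: without the hook, this panic would only kill the
    // worker thread and the rest of the service would keep limping along.
    let worker = thread::spawn(|| {
        panic!("simulated loss of a builder connection");
    });

    let _ = worker.join();
    println!("never reached: the hook exits the process first");
}
```

Setting `panic = "abort"` in the Cargo profile gets a similar effect without a hook, at the cost of skipping unwinding (and therefore destructors).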
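On the inode point (26 Oct 21:49 and the 3 Nov follow-up): ext4 sizes its inode table when the filesystem is created, so a machine can run out of inodes while `df` still reports plenty of free bytes, whereas ZFS allocates inodes dynamically. A minimal sketch of telling the two kinds of exhaustion apart via statvfs(3), assuming the `libc` crate; the path and the 5% threshold are arbitrary examples, not anything ofborg actually does:

```rust
use std::ffi::CString;
use std::mem::MaybeUninit;

fn main() {
    // Hypothetical mount point to inspect; pick whatever the evaluator writes to.
    let path = CString::new("/nix/store").unwrap();
    let mut st = MaybeUninit::<libc::statvfs>::uninit();

    // statvfs(3) reports both block counts (bytes) and inode counts (files).
    let rc = unsafe { libc::statvfs(path.as_ptr(), st.as_mut_ptr()) };
    assert_eq!(rc, 0, "statvfs failed");
    let st = unsafe { st.assume_init() };

    let space_free_pct = 100.0 * st.f_bavail as f64 / st.f_blocks as f64;
    let inode_free_pct = 100.0 * st.f_favail as f64 / st.f_files as f64;

    println!("free space: {space_free_pct:.1}%  free inodes: {inode_free_pct:.1}%");

    // On ext4 the inode count is fixed at mkfs time, so the inode percentage
    // can hit zero while bytes remain; on ZFS it effectively tracks free space.
    if inode_free_pct < 5.0 {
        eprintln!("warning: close to inode exhaustion");
    }
}
```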