!sBfrWMVsLoSyFTCkNv:nixos.org

OfBorg

175 Members
Number of builds and evals in queue: <TBD>
66 Servers



Sender | Message | Time
26 Oct 2021
@piegames:matrix.orgpiegamesOh wow, we have only one Darwin builder? o.O21:49:55
@cole-h:matrix.orgcole-hYep ^^;21:50:06
@cole-h:matrix.orgcole-hDarwin is expensive21:50:14
* @piegames:matrix.orgpiegames should really write a "downgrade Darwin support tier" RFC one day …21:53:03
@cole-h:matrix.orgcole-hLol22:37:28
@cole-h:matrix.orgcole-hI doubt that would get any traction, but I can't deny that it would be nice...22:37:48
@sandro:supersandro.deSandro 🐧
In reply to @piegames:matrix.org
should really write a "downgrade Darwin support tier" RFC one day …
"It is a constant pain point holding back core updates and is severely lacking maintainers, machinery and knowledge."
22:43:36
27 Oct 2021
@piegames:matrix.orgpiegames
In reply to @sandro:supersandro.de
"It is a constant pain point holding back core updates and is severely lacking maintainers, machinery and knowledge."
Feel free https://md.darmstadt.ccc.de/ErrTQBYpRICwMCzvvo16Yw
14:12:09
@sandro:supersandro.deSandro 🐧
In reply to @piegames:matrix.org
Feel free https://md.darmstadt.ccc.de/ErrTQBYpRICwMCzvvo16Yw
I added my favorite story: the time I broke the Darwin stdenv by enabling brotli support in curl
14:45:53
@cole-h:matrix.orgcole-hCan anyone help with this weird ofborg issue? It seems to be losing connections to some of the aarch64 builders. I've written down what I've seen at https://github.com/NixOS/ofborg/issues/58119:11:38
@piegames:matrix.orgpiegamesDoes the code register a panic handler? I strongly recommend this for multi-threaded applications where it is desired that one panicking thread ends the whole application19:13:18
@piegames:matrix.orgpiegames * Does the OfBorg code use a panic handler somehow? I strongly recommend this for multi-threaded applications where it is desired that one panicking thread ends the whole application19:13:38
@grahamc:nixos.org@grahamc:nixos.org I don't think so 19:22:14
@grahamc:nixos.org@grahamc:nixos.orgbut yeah basically it'd be great if a panic in any thread just bonked the whole thing over19:22:31
@grahamc:nixos.org@grahamc:nixos.orgit looks like it isn't actually panicking though19:25:43
28 Oct 2021
@mjolnir:nixos.orgNixOS Moderation Bot banned @blaggacao:matrix.orgDavid Arnold (blaggacao) (<no reason supplied>).20:04:41
2 Nov 2021
@oliver:matrix.nrp-nautilus.iooliver joined the room.19:23:08
@oliver:matrix.nrp-nautilus.iooliver left the room.19:24:23
3 Nov 2021
@cole-h:matrix.orgcole-hI'm currently redeploying all the evaluators to run on ZFS, which should hopefully prevent the recent space issues (inode related) from reappearing. If you see any problems, please ping me directly either here or in any related issues / PRs.17:41:06
@piegames:matrix.orgpiegames set a profile picture.22:01:14
@cole-h:matrix.orgcole-h

Just following up: it is done! All the evaluators are running off of ZFS now!
Here's a little more backstory on the issue and why we decided to move the
machines to ZFS:

Originally, the machines were run off of EXT4. For years, it seems like this has
been fine, but recently, we have been running into issues where the machines
would complain about a lack of free disk space. When we went to go check,
however, it wasn't disk space that was the problem, but available inodes!

EXT4 has a limited number of inodes (I believe it's somewhere around 4
billion?), and while derivations (i.e. the .drv files themselves) are small,
they each take up an inode (at least). Although the garbage collector does know
how to "free X amount of data", it doesn't know how to make sure it frees "X
amount of inodes". This led to the disk having plenty of space, but absolutely
no inodes available.

Enter ZFS: ZFS gives you unlimited inodes (in that it doesn't have a fixed
number of them available) so long as you have the disk space to support it.

Then, in order to actually get the machines running ZFS, I had to do a few
things:

  • Use Equinix Metal's new NixOS support to deploy the instances
  • Set up customdata in order to set up the ZFS pool using the disks exposed to
    the instance
  • Deploy the instances one-by-one in order to verify they worked properly
    and wouldn't fall over if left alone

For the moment, things seem to be working just fine! Fingers crossed, no more
running-out-of-space alerts until we're actually running out of space... That
said, I will still keep my eye on it. Once again, if you notice anything out of
the ordinary, don't be a stranger: ping me (here or on the related issue / PR)!

22:10:27
@cole-h:matrix.orgcole-h...that's a long message22:11:00
@r-burns:matrix.orgRyan Burns joined the room.22:18:25
@jb:vk3.wtfjbedo joined the room.22:50:40
@ryantm:matrix.orgryantm joined the room.23:08:55
@noah:matrix.chatsubo.cafeChurch joined the room.23:14:00
@janne.hess:helsinki-systems.deJanne Heß joined the room.23:32:38
@janne.hess:helsinki-systems.deJanne Heß left the room.23:32:57



Room Version: 6