!PbcQeaWcgMyjVfeGQN:nixos.org

Nix Mozilla 🦊🐦🐒

156 Members
A room about a number of weird animals (also known as Mozilla products): Firefox, Thunderbird, Spidermonkey, NSS, cacert. Also a little bit of fun times, small amounts of extreme, when building weird animals. But for bugs please file GitHub issues. | Release Schedule: https://whattrainisitnow.com | Crash-Stats: https://crash-stats.mozilla.org/search/?distribution_id=%3Dnixos&product=Firefox&product=Thunderbird

48 Servers



4 Sep 2023
hexa (@hexa:lossy.network): could be the host, or the test runner itself [13:08:24]
nbp (@nbp:mozilla.org): maybe lsof would help tell them apart. [13:10:25]
nbp (@nbp:mozilla.org): change the test case to include the output of the lsof command. [13:11:06]
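(A minimal sketch of what nbp suggests, not code from the thread: shell out to lsof on the build host from the test driver, so a "Too many open files" failure logs per-process open-file counts. It assumes lsof is available in the sandbox's PATH; the function name is made up.)

import subprocess

def log_open_files() -> None:
    # Run lsof on the build host (-n/-P skip DNS and port lookups) and
    # print the ten processes with the most open files as "name=count".
    out = subprocess.run(
        ["lsof", "-n", "-P"], capture_output=True, text=True, check=False
    ).stdout
    counts: dict[str, int] = {}
    for line in out.splitlines()[1:]:
        parts = line.split(maxsplit=1)
        if parts:
            counts[parts[0]] = counts.get(parts[0], 0) + 1
    for name, n in sorted(counts.items(), key=lambda kv: -kv[1])[:10]:
        print(f"{name}={n}")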
hexa (@hexa:lossy.network): a quick sampling with psutil reveals that qemu_kvm holds too many fds [13:34:50]
hexa (@hexa:lossy.network) [13:34:55]:
vm-test-run-firefox-unwrapped> (finished: waiting for the X11 server, in 17.94 seconds)
vm-test-run-firefox-unwrapped> machine: bash=4
vm-test-run-firefox-unwrapped> machine: .nixos-test-dri=13
vm-test-run-firefox-unwrapped> machine: vde_switch=6
vm-test-run-firefox-unwrapped> machine: qemu-kvm=551
hexa (@hexa:lossy.network) [13:35:07]:
vm-test-run-firefox-unwrapped> machine: bash=4
vm-test-run-firefox-unwrapped> machine: .nixos-test-dri=13
vm-test-run-firefox-unwrapped> machine: vde_switch=6
vm-test-run-firefox-unwrapped> machine: qemu-kvm=2006
vm-test-run-firefox-unwrapped> subtest: Check whether Firefox can play sound
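(The exact sampling script isn't shown in the thread; the following is an approximation of that kind of psutil sampling, aggregating open-fd counts per process name into the name=count lines seen above.)

from collections import Counter
import psutil

def fd_counts_by_name() -> Counter:
    # Sum open-fd counts per process name across the whole host.
    counts: Counter = Counter()
    for proc in psutil.process_iter(["name", "num_fds"]):
        counts[proc.info["name"]] += proc.info["num_fds"] or 0
    return counts

if __name__ == "__main__":
    for name, n in fd_counts_by_name().most_common(10):
        print(f"{name}={n}")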
hexa (@hexa:lossy.network): to me that makes it hydra's fault for constraining build jobs like that [13:36:03]
K900 (@k900:0upti.me): But why would it do that on Hydra and not on other systems [13:37:01]
hexa (@hexa:lossy.network): yeah, the open question [13:37:26]
hexa (@hexa:lossy.network): ajs124: maybe something hydra does? [13:38:01]
ajs124 (@ajs124:ajs124.de): don't think that's a hydra thing. more like some strange config on the hydra build nodes. [13:38:56]
hexa (@hexa:lossy.network): yeah, trying to find that config as we speak [13:39:19]
hexa (@hexa:lossy.network): I think we're using https://github.com/DeterminateSystems/nix-netboot-serve to serve netboot images [13:40:15]
hexa (@hexa:lossy.network): runs on eris apparently [13:40:46]
hexa (@hexa:lossy.network): wondering if our runner configs are private? [14:03:42]
hexa (@hexa:lossy.network): or state on eris even [14:03:45]
hexa (@hexa:lossy.network): the nix-netboot-serve config is too minimal [14:05:49]
hexa (@hexa:lossy.network): https://github.com/NixOS/equinix-metal-builders/blob/main/modules/nix.nix#L34 [14:22:06]
hexa (@hexa:lossy.network): there is a hard fd limit on the nix-daemon [14:22:18]
vcunat (@vcunat:matrix.org): A million (per process) sounds quite a lot. [14:42:54]
vcunat (@vcunat:matrix.org): Unless some bad leak happens. Maybe it's more likely that it's stuck on a low soft limit, or that it doesn't propagate as we'd expect. [14:43:44]
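(A hedged way to check the "low soft limit / doesn't propagate" theory: read the "Max open files" line from /proc/<pid>/limits for the daemon and the processes spawned under the build. The process names to match are assumptions.)

from pathlib import Path

SUSPECTS = ("nix-daemon", "qemu", "nixos-test")  # assumed process names

for proc in Path("/proc").iterdir():
    if not proc.name.isdigit():
        continue
    try:
        comm = (proc / "comm").read_text().strip()
        if not any(s in comm for s in SUSPECTS):
            continue
        # limits shows both the soft and the hard value for each resource
        for line in (proc / "limits").read_text().splitlines():
            if line.startswith("Max open files"):
                print(f"{comm}[{proc.name}]: {line}")
    except OSError:
        continue  # process went away while we were looking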
nbp (@nbp:mozilla.org): I wish we could have a wireguard-boot, where one image would connect using wireguard to download its latest image. This way we could make it work without having to redo the DHCP of the network. [15:11:38]
hexa (@hexa:lossy.network) [15:12:14]:
> Nowadays, the hard limit defaults to 524288, a very high value compared to historical defaults. Typically applications should increase their soft limit to the hard limit on their own, if they are OK with working with file descriptors above 1023, i.e. do not use select(2).
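(The pattern that note describes, as a small Python sketch: a process that never calls select(2) can raise its own soft RLIMIT_NOFILE up to the hard limit.)

import resource

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"before: soft={soft} hard={hard}")
# Safe for anything that never calls select(2) on fds >= 1024:
resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))
print(f"after:  soft={resource.getrlimit(resource.RLIMIT_NOFILE)[0]} hard={hard}")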
hexa (@hexa:lossy.network): I think knowing what number of open fds we're having on the builders would be an easy first step [15:21:40]
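(One quick number along those lines: /proc/sys/fs/file-nr reports system-wide allocated fds, allocated-but-unused fds, and the maximum.)

from pathlib import Path

# fields: allocated fds, allocated-but-unused fds, system-wide maximum
allocated, unused, maximum = Path("/proc/sys/fs/file-nr").read_text().split()
print(f"fds in use system-wide: {int(allocated) - int(unused)} (max {maximum})")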
K900 (@k900:0upti.me) [15:22:43]:
> In reply to vcunat: A million (per process) sounds quite a lot.
It's not per process though
K900 (@k900:0upti.me): It's per cgroup [15:22:46]
K900 (@k900:0upti.me): And everything is in the cgroup [15:22:53]
vcunat (@vcunat:matrix.org): Can you point me to docs about that? [15:27:52]
K900 (@k900:0upti.me): Uh, it's in systemd docs somewhere [15:28:11]
vcunat (@vcunat:matrix.org): I really thought that setrlimit is per-process and I can't quickly find a reference. [15:28:15]
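(For reference on the per-process point: getrlimit(2)/setrlimit(2) limits are attributes of each process, inherited at fork() and changeable independently, as the sketch below illustrates. Whether systemd additionally constrains things at the cgroup level is a separate question this doesn't settle.)

import os
import resource

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
pid = os.fork()
if pid == 0:
    # Child: inherits (soft, hard), then lowers only its own soft limit.
    resource.setrlimit(resource.RLIMIT_NOFILE, (256, hard))
    print("child :", resource.getrlimit(resource.RLIMIT_NOFILE))
    os._exit(0)
os.waitpid(pid, 0)
print("parent:", resource.getrlimit(resource.RLIMIT_NOFILE))  # unchanged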


