!RROtHmAaQIkiJzJZZE:nixos.org

NixOS Infrastructure

388 Members
Next Infra call: 2024-07-11, 18:00 CEST (UTC+2) | Infra operational issues backlog: https://github.com/orgs/NixOS/projects/52 | See #infra-alerts:nixos.org for real time alerts from Prometheus.
119 Servers



Sender | Message | Time
3 Jan 2025
@hexa:lossy.network (hexa) | and whatever is in hydra's queeu | 19:24:17
@hexa:lossy.network (hexa) | * and whatever is in hydra's queue | 19:24:22
4 Jan 2025
@emma:rory.gay | you're still actively maintaining 32-bit packages? | 05:15:17
@vcunat:matrix.org (Vladimír Čunát) | When looking at the graph of Hydra's steps completed per minute, I believe I see a gradual decrease after each restart of the queue-runner service. Interesting. https://grafana.nixos.org/d/MJw9PcAiz/hydra-jobs?orgId=1&from=2024-12-18T15:49:39.948Z&to=2025-01-04T08:18:21.662Z&refresh=1m&var-machine=$__all&viewPanel=panel-20 | 08:21:27
@hexa:lossy.network (hexa) | yeah, that is some spike in runnables for once | 15:49:46
@hexa:lossy.network (hexa) | 22 slots waiting to receive | 17:50:22
@hexa:lossy.network (hexa) | 144 receiving | 17:50:31
@hexa:lossy.network (hexa) | so the build capacity can saturate the compress slots still | 17:50:45
@hexa:lossy.network (hexa) | so we might still want to look into zstd 🙂 | 17:51:42
@vcunat:matrix.org (Vladimír Čunát) | Looks like pretty rare occasions so far. And PSI cpusome wasn't even high. | 18:05:14
@vcunat:matrix.org (Vladimír Čunát) | * Looks like pretty rare occasions so far. And PSI cpusome wasn't even high. (something like 10%) | 18:05:44
@vcunat:matrix.org (Vladimír Čunát) | So increasing threads might still help, too, though I'm not sure why. | 18:06:23
@hexa:lossy.network (hexa) | the compressor are often starved waiting on data from the builders | 18:20:31
@hexa:lossy.network (hexa) | * the compressors are often starved waiting on data from the builders | 18:21:29
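The zstd suggestion above is about the cost of NAR compression on the cache side: if the compress slots are the bottleneck, a faster codec frees them up sooner. A minimal, self-contained sketch of that trade-off, not the actual Hydra pipeline; the input path and compression levels are illustrative assumptions, and the third-party zstandard package is required:

    # Compare wall-clock cost and ratio of xz vs zstd on one sample file.
    # Path and levels are placeholders, not values used on hydra.nixos.org.
    import lzma
    import time

    import zstandard  # pip install zstandard

    PATH = "/tmp/sample.nar"  # hypothetical NAR dump to experiment with

    with open(PATH, "rb") as f:
        data = f.read()

    def timed(label, compress):
        start = time.monotonic()
        out = compress(data)
        elapsed = time.monotonic() - start
        print(f"{label}: {elapsed:6.2f} s, ratio {len(out) / len(data):.3f}")

    timed("xz -6   ", lambda d: lzma.compress(d, preset=6))
    timed("zstd -19", lambda d: zstandard.ZstdCompressor(level=19).compress(d))
    timed("zstd -3 ", lambda d: zstandard.ZstdCompressor(level=3).compress(d))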
5 Jan 2025
@hexa:lossy.network (hexa) | contemplating banning certain user agents and making people who scrape hydra set an explicit one | 03:07:10
@hexa:lossy.network (hexa) | looking at curl, wget and most of all scrapy | 03:07:20
@hexa:lossy.network (hexa) | when hydra-server gets busy we don't get any metrics any more from it | 03:07:47
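Before banning anything, a per-User-Agent tally of the frontend access log would show how much of the load curl, wget and Scrapy actually account for. A minimal sketch, assuming nginx's default "combined" log format; the log path is a placeholder:

    # Tally requests per User-Agent in an nginx "combined" access log.
    import re
    from collections import Counter

    LOG = "/var/log/nginx/access.log"  # hypothetical path

    # combined format: ... "request" status bytes "referer" "user-agent"
    LINE = re.compile(r'"[^"]*" \d{3} \S+ "[^"]*" "(?P<ua>[^"]*)"')

    counts = Counter()
    with open(LOG, encoding="utf-8", errors="replace") as f:
        for line in f:
            m = LINE.search(line)
            if m:
                counts[m.group("ua")] += 1

    for ua, n in counts.most_common(20):
        print(f"{n:8d}  {ua}")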
@adam:robins.wtf | Is it possible to put Hydra behind the Fastly cache | 11:58:08
@adam:robins.wtf | Would that help here? | 11:58:26
@emilazy:matrix.org (emily) | many pages seem too dynamic for that? | 14:03:07
@emilazy:matrix.org (emily) | (the expensive ones, I'd assume) | 14:03:12
@adam:robins.wtf | yeah i guess it depends on what they're scraping | 14:55:50
@k900:0upti.me (K900) | They're not scraping anything | 14:57:51
@adam:robins.wtf | then what is happening? because hexa said "people who scrape hydra" | 15:01:08
@hexa:lossy.network (hexa) | there are gaps in our graphs on prometheus, and when that happens I also can't reach h.n.o. | 15:03:13
@hexa:lossy.network (hexa) | I browse the access.log, and yes, there are some high frequency scrapers in there | 15:03:33
@hexa:lossy.network (hexa) | we could probably evaluate access logs better | 15:03:48
@hexa:lossy.network (hexa) | 15:06:18
Hits      h% Vis.     v% Tx. Amount Data
18111 20.20%    4  0.05% 763.06 MiB 2a01:4f9:3070:15e0::1  (pluto.nixos.org)
16250 18.13%    1  0.01%   1.69 GiB 99.245. (random rogers customer)
 4059  4.53%    1  0.01%   1.91 MiB 34.44 (google cloud)
 2683  2.99%    2  0.02%   2.00 MiB 81.200
@hexa:lossy.network (hexa) | this is the last 75.5h | 15:07:32
@hexa:lossy.network (hexa) | estimated from the prometheus scraper, which runs every 15s | 15:07:51
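That estimate lines up with the pluto.nixos.org row in the table: one scrape every 15 s over 75.5 h is roughly 18,120 requests, close to the 18,111 hits shown. The back-of-the-envelope check, as a sketch:

    # Expected Prometheus scrapes in the log window vs. the observed hit count.
    hours = 75.5        # log window, per the message above
    interval_s = 15     # Prometheus scrape interval
    expected = hours * 3600 / interval_s
    print(expected)     # 18120.0, close to the 18111 hits from pluto.nixos.org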


