Nix NodeJS | 210 Members | 61 Servers
| Sender | Message | Time |
|---|---|---|
| 6 Oct 2023 | | |
| | I never got this bug and I use the npm-prefetch-deps thingie quite often… | 20:55:01 |
| | On a shared machine on top of that, where there's like 6TB of nix store… | 20:55:12 |
| | (but it's a 1Gbps connection) | 20:55:22 |
| | In reply to @lily:lily.flowers: Sure, though it might be delayed until tomorrow. | 20:56:39 |
| 7 Oct 2023 | | |
| | I tried it on a NixOS VM and as soon as I raised the CPU core count the issue started to appear | 12:32:43 |
| | In reply to @marie:marie.cologne: I'll try to make a branch to confirm whether or not the curl bump is in fact causing this | 13:53:53 |
| | In reply to @lily:lily.flowers: Unfortunately it threw a [18] Transferred a partial file within 10-20 seconds. One time it got a bit further but stopped with a [55] Failed sending data to the peer. Host: NixOS 23.05, on a (congested) 50Mb/s connection, project with a few hundred deps | 14:10:43 |
| | I haven't tried it with a lower core count yet | 14:11:11 |
| | In reply to @c0ba1t:matrix.org: Still says "unknown error"? | 14:12:13 |
| | In reply to @c0ba1t:matrix.org: No need, I'm trying to get failure logs | 14:12:35 |
| | In reply to @lily:lily.flowers: yep, an excerpt: | 14:12:49 |
| | Could someone do it with `RUST_LOG=debug` in the environment too? | 14:12:59 |
| | In reply to @lily:lily.flowers: Okay, if you need any logs I would be happy to help | 14:13:19 |
| | Download prefetch-log.txt | 14:14:47 |
| | That's the output from `RUST_LOG=debug nix run github:lilyinstarlight/nixpkgs/upd/prefetch-npm-deps-deps#prefetch-npm-deps package-lock.json &> prefetch-log.txt` | 14:15:32 |
| | I'll check it when I get back to my computer in a bit | 14:15:54 |
| | Okay, thank you for the quick handling of the issue. Intermittent network stuff is always tricky | 14:16:50 |
| | i have the same openssl broken pipe errors in my log | 14:17:53 |
| | I sorta avoided digging in at all for a few weeks, so idk if I'd call that "quick handling" 😅 | 14:18:00 |
| 8 Oct 2023 | | |
| | In reply to @c0ba1t:matrix.org: Could you try again with `RUST_LOG=debug nix run github:lilyinstarlight/nixpkgs/tmp/fix/prefetch-npm-deps-hell#prefetch-npm-deps package-lock.json &> prefetch-log.txt` for me? I'm testing to see whether or not the curl upgrade is the exact cause | 22:28:27 |
| | Sure, | 22:41:21 |
| | Download prefetch-log.txt | 22:41:22 |
| | That's the first run, same failure. I can run it multiple times if you want more logs. | 22:42:06 |
| | tl;dr the error wasn't resolved. Here's the tail of the file: | 22:44:14 |
| | In reply to @c0ba1t:matrix.org: Nah, one error is fine, thank you! | 22:45:53 |
| | Okay, I'll be off for today (it's 1 am in my tz). The error for the above branch is reproducible after every "Connection cache is full, closing the oldest one" message. Could there be a race condition between the connection cache being exhausted and accidentally closing an in-use connection? | 22:48:58 |
| | Otherwise my first guess would be that the fetcher overallocates resources. Maybe it tries to allocate all connections and closes a long-running, slow connection without checking if it's in use. This wouldn't happen on a fast connection (where a connection is always free) but might be possible with a slow one | 22:51:28 |
| | In reply to @c0ba1t:matrix.org: Well, closing the connection shouldn't even be causing this, because the retry logic never runs: it doesn't recognize what is clearly a network error (error code 55) as one, and instead treats it as a permanent rather than a transient error | 22:51:36 |
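The misclassification described in the last message could be sketched as follows. This is a hypothetical `classify` helper, not the actual prefetch-npm-deps code; the point is that curl error codes like 18 (partial file) and 55 (failed sending data) indicate transient network failures worth retrying, whereas the tool was reportedly treating them as permanent:

```rust
// Hypothetical sketch of retry classification for curl error codes.
// The numeric codes below are real libcurl error codes; which ones a
// given fetcher should retry is a design choice, not fixed by curl.

#[derive(Debug, PartialEq)]
enum Failure {
    Transient, // worth retrying, e.g. with backoff
    Permanent, // fail immediately, no retry
}

fn classify(curl_code: u32) -> Failure {
    match curl_code {
        // 7  = CURLE_COULDNT_CONNECT
        // 18 = CURLE_PARTIAL_FILE
        // 28 = CURLE_OPERATION_TIMEDOUT
        // 55 = CURLE_SEND_ERROR
        // 56 = CURLE_RECV_ERROR
        7 | 18 | 28 | 55 | 56 => Failure::Transient,
        // Everything else (bad URL, unsupported protocol, ...) is
        // treated as permanent in this sketch.
        _ => Failure::Permanent,
    }
}

fn main() {
    // The two errors seen in the logs above should be retried:
    assert_eq!(classify(55), Failure::Transient);
    assert_eq!(classify(18), Failure::Transient);
    // A malformed URL (CURLE_URL_MALFORMAT = 3) should not be:
    assert_eq!(classify(3), Failure::Permanent);
}
```

Under this reading, the connection-cache eviction merely surfaces the bug: the eviction produces error 55 on a slow connection, and the missing transient classification turns that recoverable failure into a hard one.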