Nix NodeJS | 209 Members | 60 Servers
| Sender | Message | Time |
|---|---|---|
| 7 Oct 2023 | | |
| | In reply to @lily:lily.flowers: yep, an excerpt: | 14:12:49 |
| | Could someone do it with `RUST_LOG=debug` in the environment too? | 14:12:59 |
| | In reply to @lily:lily.flowers: Okay, if you need any logs I would be happy to help | 14:13:19 |
| | Download prefetch-log.txt | 14:14:47 |
| | That's the output from `RUST_LOG=debug nix run github:lilyinstarlight/nixpkgs/upd/prefetch-npm-deps-deps#prefetch-npm-deps package-lock.json &> prefetch-log.txt` | 14:15:32 |
| | I'll check it when I get back to my computer in a bit | 14:15:54 |
| | Okay, thank you for the quick handling of the issue. Intermittent network stuff is always tricky | 14:16:50 |
| | I have the same openssl broken pipe errors in my log | 14:17:53 |
| | I sorta avoided digging in at all for a few weeks, so idk if I'd call that "quick handling" 😅 | 14:18:00 |
| 8 Oct 2023 | | |
| | In reply to @c0ba1t:matrix.org: Could you try again with `RUST_LOG=debug nix run github:lilyinstarlight/nixpkgs/tmp/fix/prefetch-npm-deps-hell#prefetch-npm-deps package-lock.json &> prefetch-log.txt` for me? I'm testing to see whether or not the curl upgrade is the exact cause | 22:28:27 |
| | Sure, | 22:41:21 |
| | Download prefetch-log.txt | 22:41:22 |
| | That's the first run, same failure. I can run it multiple times, if you want more logs. | 22:42:06 |
| | tl;dr the error wasn't resolved. Here's the tail of the file: | 22:44:14 |
| | In reply to @c0ba1t:matrix.org: Nah, one error is fine, thank you! | 22:45:53 |
| | Okay, I'll be off for today (it's 1 am in my tz). The error for the above branch is reproducible after every `Connection cache is full, closing the oldest one` message. Could there be a race condition between the connection cache being exhausted and accidentally closing an in-use connection? | 22:49:08 |
| | Otherwise my first guess would be that the fetcher overallocates resources. Maybe it tries to allocate all connections and closes a long-running, slow connection without checking if it's in use. This wouldn't happen on a fast connection (where a connection is always free) but might be possible with a slow one | 22:51:28 |
| | In reply to @c0ba1t:matrix.org: Well, so closing the connection shouldn't even be (directly) the cause of this, because it's not doing the retry logic: it doesn't think what is clearly a network error (error code 55) is one, and instead treats it as a permanent rather than transient error | 22:52:04 |
| | As for why that could ever be the case, it seems to be somewhere in between isahc and curl-rust and libcurl. And I'm not sure where, but hopefully it's enough for me to be able to locally poke at the problem now | 22:52:56 |
| | Okay, good luck then. Maybe changing to a native client, like ureq, might also be a solution. Though I don't know how much work it would be. | 22:54:22 |
| | HA | 22:57:44 |
| | ureq gave us more problems | 22:57:48 |
| | We had that before | 22:57:50 |
| | And switched to isahc which fixed those and allowed us to support custom tls certs and stuff | 22:58:07 |
| | But, well, I guess that decision is hurting me now... | 22:58:29 |
| | Changing libs is not hard. But finding why this is being dumb also shouldn't be hard so... | 22:59:10 |
| | Thank you for testing though ❤️ | 23:06:29 |