| 7 Oct 2023 |
Cobalt | In reply to @lily:lily.flowers No need, I'm trying to get failure logs Okay, if you need any logs I would be happy to help | 14:13:19 |
Cobalt | Download prefetch-log.txt | 14:14:47 |
Cobalt | That's the output from RUST_LOG=debug nix run github:lilyinstarlight/nixpkgs/upd/prefetch-npm-deps-deps#prefetch-npm-deps package-lock.json &> prefetch-log.txt | 14:15:32 |
Lily Foster | I'll check it when I get back to my computer in a bit | 14:15:54 |
Cobalt | Okay, thank you for the quick handling of the issue. Intermittent network stuff is always tricky | 14:16:50 |
Marie | i have the same openssl broken pipe errors in my log | 14:17:53 |
Lily Foster | I sorta avoided digging in at all for a few weeks, so idk if I'd call that "quick handling" 😅 | 14:18:00 |
| 8 Oct 2023 |
Lily Foster | In reply to @c0ba1t:matrix.org That's the output from RUST_LOG=debug nix run github:lilyinstarlight/nixpkgs/upd/prefetch-npm-deps-deps#prefetch-npm-deps package-lock.json &> prefetch-log.txt Could you try again with RUST_LOG=debug nix run github:lilyinstarlight/nixpkgs/tmp/fix/prefetch-npm-deps-hell#prefetch-npm-deps package-lock.json &> prefetch-log.txt for me? I'm testing to see whether or not curl upgrade is the exact cause | 22:28:27 |
Cobalt | Sure, | 22:41:21 |
Cobalt | Download prefetch-log.txt | 22:41:22 |
Cobalt | That's the first run, same failure. I can run it multiple times, if you want more logs. | 22:42:06 |
Cobalt | tl;dr the error wasn't resolved. Here's the tail of the file:
[2023-10-08T22:40:40Z DEBUG isahc::handler] h2 [accept-encoding: deflate, gzip, br, zstd]
[2023-10-08T22:40:40Z DEBUG isahc::handler] h2 [user-agent: curl/8.2.1 isahc/1.7.2]
[2023-10-08T22:40:40Z DEBUG isahc::handler] Using Stream ID: 15
[2023-10-08T22:40:40Z DEBUG isahc::handler] Connection cache is full, closing the oldest one
[2023-10-08T22:40:40Z DEBUG isahc::handler] Closing connection
[2023-10-08T22:40:40Z DEBUG isahc::handler] Connection #1 to host registry.npmjs.org left intact
Error: unknown error
Caused by:
[55] Failed sending data to the peer
| 22:44:14 |
Lily Foster | In reply to @c0ba1t:matrix.org That's the first run, same failure. I can run it multiple times, if you want more logs. Nah, one error is fine, thank you! | 22:45:53 |
Cobalt | Okay, I'll be off for today (it's 1 am in my tz). The error for the above branch is reproducible after every Connection cache is full, closing the oldest one message. Could there be a race condition between the connection cache being exhausted and accidentally closing an in-use connection? | 22:48:58 |
Cobalt | Otherwise my first guess would be that the fetcher over-allocates resources. Maybe it tries to allocate all connections and closes a long-running, slow connection without checking whether it's in use. This wouldn't happen on a fast connection (where a connection is always free) but might be possible on a slow one | 22:51:28 |
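Cobalt's hypothesis above can be sketched as a fixed-size connection cache whose eviction policy closes the oldest entry without checking whether it is still carrying a transfer. This is a hypothetical illustration of the suspected bug, not the actual isahc or prefetch-npm-deps code; the `ConnCache` and `Conn` types are invented for the sketch:

```rust
use std::collections::VecDeque;

#[derive(Debug, PartialEq)]
struct Conn {
    id: u32,
    in_use: bool,
}

/// A toy connection cache with the hypothesized flaw: when full, it
/// evicts the oldest connection even if that connection is in use.
struct ConnCache {
    max: usize,
    conns: VecDeque<Conn>,
}

impl ConnCache {
    fn new(max: usize) -> Self {
        Self { max, conns: VecDeque::new() }
    }

    /// Insert a new connection, returning any evicted one.
    /// Note: `in_use` is never consulted before eviction.
    fn insert(&mut self, conn: Conn) -> Option<Conn> {
        let evicted = if self.conns.len() >= self.max {
            self.conns.pop_front() // oldest first, in-use or not
        } else {
            None
        };
        self.conns.push_back(conn);
        evicted
    }
}

fn main() {
    let mut cache = ConnCache::new(2);
    // A slow, long-running transfer occupies the oldest slot.
    cache.insert(Conn { id: 1, in_use: true });
    cache.insert(Conn { id: 2, in_use: false });
    // Cache full: inserting a third evicts connection 1 while it is
    // still in use. Writing to the peer over a connection torn down
    // like this is the kind of thing curl reports as error 55
    // ("Failed sending data to the peer").
    let evicted = cache.insert(Conn { id: 3, in_use: false });
    assert_eq!(evicted, Some(Conn { id: 1, in_use: true }));
    println!("evicted while in use: {:?}", evicted);
}
```

On a fast link every transfer finishes before its slot is recycled, so the in-use eviction never fires, which would match the report that this only reproduces on slow connections.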
Lily Foster | In reply to @c0ba1t:matrix.org Okay, I'll be off for today (it's 1 am in my tz). The error for the above branch is reproducible after every Connection cache is full, closing the oldest one message. Could there be a race condition between the connection cache being exhausted and accidentally closing an in-use connection? Well, closing the connection shouldn't even be (directly) the cause of this, because the retry logic never runs: it doesn't recognize what is clearly a network error (error code 55) as one, and instead treats it as a permanent rather than transient error | 22:51:36 |
Lily Foster | As for why that could ever be the case, it seems to be somewhere between isahc, curl-rust, and libcurl. I'm not sure where, but hopefully it's enough for me to be able to poke at the problem locally now | 22:52:56 |
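The fix Lily describes amounts to classifying curl error 55 as transient so the retry path handles it. Here is a minimal sketch of that idea; the function names and the retry loop are illustrative assumptions, not the actual prefetch-npm-deps implementation (only the curl error-code meanings, e.g. 55 = CURLE_SEND_ERROR, are real):

```rust
/// libcurl error codes that indicate a transient network failure.
/// 7  = CURLE_COULDNT_CONNECT, 28 = CURLE_OPERATION_TIMEDOUT,
/// 55 = CURLE_SEND_ERROR ("Failed sending data to the peer"),
/// 56 = CURLE_RECV_ERROR.
fn is_transient(curl_code: u32) -> bool {
    matches!(curl_code, 7 | 28 | 55 | 56)
}

/// Retry a fetch on transient errors up to `max_retries` times;
/// permanent errors are returned immediately.
fn fetch_with_retry(
    mut attempt_fetch: impl FnMut() -> Result<Vec<u8>, u32>,
    max_retries: u32,
) -> Result<Vec<u8>, u32> {
    let mut tries = 0;
    loop {
        match attempt_fetch() {
            Ok(body) => return Ok(body),
            Err(code) if is_transient(code) && tries < max_retries => {
                // Transient failure: back off and try again.
                tries += 1;
            }
            Err(code) => return Err(code),
        }
    }
}

fn main() {
    // Simulate a fetch that fails once with error 55, then succeeds.
    let mut calls = 0;
    let result = fetch_with_retry(
        || {
            calls += 1;
            if calls == 1 { Err(55) } else { Ok(b"ok".to_vec()) }
        },
        3,
    );
    assert_eq!(result, Ok(b"ok".to_vec()));
    assert_eq!(calls, 2);
    println!("recovered from transient error 55 after {calls} attempts");
}
```

The bug described in the chat is that the classification step misfiles error 55 on the permanent side, so the loop never gets a second attempt.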
Cobalt | Okay, good luck then. Maybe changing to a native client, like ureq, might also be a solution. Though I don't know how much work that would be. | 22:54:22 |
Lily Foster | HA | 22:57:44 |
Lily Foster | ureq gave us more problems | 22:57:48 |
Lily Foster | We had that before | 22:57:50 |
Lily Foster | And switched to isahc, which fixed those and allowed us to support custom TLS certs and stuff | 22:58:07 |
Lily Foster | But, well, I guess that decision is hurting me now... | 22:58:29 |
Lily Foster | Changing libs is not hard. But finding out why this is being dumb also shouldn't be hard, so... | 22:59:10 |
Lily Foster | Thank you for testing though ❤️ | 23:06:29 |
Lily Foster | I'll probably have a new one for people to test by tomorrow | 23:06:36 |
Cobalt | In reply to @lily:lily.flowers Thank you for testing though ❤️ You're welcome; rather, thank you for maintaining this module. It looks like a pain to handle Node stuff. | 23:16:12 |