| 8 Oct 2023 |
Cobalt | Sure, | 22:41:21 |
Cobalt | Download prefetch-log.txt | 22:41:22 |
Cobalt | That's the first run, same failure. I can run it multiple times, if you want more logs. | 22:42:06 |
Cobalt | tl;dr the error wasn't resolved. Here's the tail of the file:
[2023-10-08T22:40:40Z DEBUG isahc::handler] h2 [accept-encoding: deflate, gzip, br, zstd]
[2023-10-08T22:40:40Z DEBUG isahc::handler] h2 [user-agent: curl/8.2.1 isahc/1.7.2]
[2023-10-08T22:40:40Z DEBUG isahc::handler] Using Stream ID: 15
[2023-10-08T22:40:40Z DEBUG isahc::handler] Connection cache is full, closing the oldest one
[2023-10-08T22:40:40Z DEBUG isahc::handler] Closing connection
[2023-10-08T22:40:40Z DEBUG isahc::handler] Connection #1 to host registry.npmjs.org left intact
Error: unknown error
Caused by:
[55] Failed sending data to the peer
| 22:44:14 |
Lily Foster | In reply to @c0ba1t:matrix.org That's the first run, same failure. I can run it multiple times, if you want more logs. Nah, one error is fine, thank you! | 22:45:53 |
Cobalt | Okay, I'll be off for today (it's 1 am in my tz). The error for the above branch is reproducible after every Connection cache is full, closing the oldest one message. Could there be a race condition between the connection cache being exhausted and accidentally closing an in-use connection? | 22:48:58 |
Cobalt | Otherwise my first guess would be that the fetcher overallocates resources. Maybe it tries to allocate all connections and closes a long-running, slow connection without checking if it's in use. This wouldn't happen on a fast connection (where a connection is always free) but might be possible with a slow one | 22:51:28 |
Lily Foster | In reply to @c0ba1t:matrix.org Okay, I'll be off for today (it's 1 am in my tz). The error for the above branch is reproducible after every Connection cache is full, closing the oldest one message. Could there be a race condition between the connection cache being exhausted and accidentally closing an in-use connection? Well, closing the connection shouldn't even be (directly) the cause of this: the retry logic never kicks in, because what is clearly a network error (error code 55) isn't recognized as one and is instead treated as a permanent rather than a transient error | 22:51:36 |
Lily Foster | As for why that could ever be the case, the answer seems to lie somewhere between isahc, curl-rust, and libcurl. I'm not sure where, but hopefully it's enough for me to be able to poke at the problem locally now | 22:52:56 |
Cobalt | Okay, good luck then. Maybe changing to a native client, like ureq, could also be a solution, though I don't know how much work it would be. | 22:54:22 |
Lily Foster | HA | 22:57:44 |
Lily Foster | ureq gave us more problems | 22:57:48 |
Lily Foster | We had that before | 22:57:50 |
Lily Foster | And switched to isahc which fixed those and allowed us to support custom tls certs and stuff | 22:58:07 |
Lily Foster | But, well, I guess that decision is hurting me now... | 22:58:29 |
Lily Foster | Changing libs is not hard. But finding why this is being dumb also shouldn't be hard, so... | 22:59:10 |
Lily Foster | Thank you for testing though ❤️ | 23:06:29 |
Lily Foster | I'll probably have a new one for people to test by tomorrow | 23:06:36 |
Cobalt | In reply to @lily:lily.flowers Thank you for testing though ❤️ You're welcome; rather, thank you for maintaining this module. It looks like a pain to handle node stuff. | 23:16:12 |
| 9 Oct 2023 |
raitobezarius | I guess you can do fault injection for reproducing the error | 00:38:01 |
Lily Foster | In reply to @raitobezarius:matrix.org I guess you can do fault injection for reproducing the error That was my thought with reproducing locally, yeah. I just haven't bothered setting that up yet... also why are you still up it's like 3am there 🙈 | 00:42:43 |
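[Editor's note] One way to set up that fault injection locally is traffic shaping with netem (assuming Linux and root access; the interface name and numbers below are placeholders, not a setup anyone in the conversation confirmed using):

```shell
# Throttle and delay traffic so one transfer stays slow enough to be
# the "oldest" connection when the cache fills.
sudo tc qdisc add dev eth0 root netem rate 256kbit delay 200ms loss 1%

# ...run the failing prefetch against the same branch here...

# Remove the shaping afterwards.
sudo tc qdisc del dev eth0 root
```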
raitobezarius | In reply to @lily:lily.flowers That was my thought with reproducing locally, yeah. I just haven't bothered setting that up yet... also why are you still up it's like 3am there 🙈 i was playing World of Warcraft of course | 00:46:12 |
Lily Foster | In reply to @raitobezarius:matrix.org i was playing World of Warcraft of course Ah, well, please get sleep at some point xD | 00:46:55 |
Lily Foster | In reply to @c0ba1t:matrix.org You're welcome; rather, thank you for maintaining this module. It looks like a pain to handle node stuff. Can you try again, same branch? | 14:41:36 |
Lily Foster | And send logs. I added a debug println | 14:41:53 |
Lily Foster | Because the more I look into this, the more cursed it gets | 14:42:10 |
Lily Foster | And the less sense it makes. And I can't get the same thing to happen on my system | 14:42:18 |