| 11 Aug 2025 |
Ben Sparks | I wouldn't rely on this timespan btw until I find further proof of when the issues occurred.
I also never found out why this happened; it just "went away" at some point | 15:19:20 |
Toma | emily: could you try with the changes in this PR: https://github.com/NixOS/nixpkgs/pull/400865
(while testing, if you only build cargoDeps.vendorStaging you won't have many rebuilds)
| 15:33:57 |
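(For context: building only the staging derivation Toma mentions would look roughly like the command below. The cargoDeps.vendorStaging attribute path comes from the message itself; the package name is just a placeholder for whatever package emily is testing.)

    # hypothetical package name; only the .cargoDeps.vendorStaging suffix comes from the chat
    nix-build -A some-rust-package.cargoDeps.vendorStaging
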
Toma | Currently the timeout logic in fetchCargoVendor is pretty bad (not even sure if it's working properly) and this tries to fix it, while also doing some general improvements | 15:34:55 |
emily | unfortunately I rebooted the VM and it fixed it 😅 | 15:35:45 |
emily | but I will try this next time | 15:35:47 |
K900 | Any reason we're not just doing async at this point | 15:35:52 |
K900 | Feels like it'll make things significantly less painful | 15:35:59 |
emily | oh wait | 15:36:04 |
emily | it worked after applying your patch and I assumed that was because I rebooted | 15:36:16 |
emily | but it fails again if I revert your patch | 15:36:20 |
emily | well, hangs again | 15:36:25 |
emily | uh | 15:36:42 |
emily | ok, but then I applied your patch again and it's back to failing | 15:36:47 |
emily | so I guess it just worked once and then I ^C'd half-way through and tried another build and it re-broke :) | 15:36:56 |
Toma | hell yeah i love consistency | 15:37:06 |
emily | I doubt this is a bug in your code; it's more likely a VM/passt issue | 15:37:10 |
emily | wait no | 15:37:24 |
Toma | in any case, the timeout logic is better with my PR | 15:37:31 |
emily | yeah ok no it's working with your patch | 15:37:33 |
emily | and not timing out | 15:37:38 |
emily | I don't understand why | 15:37:44 |
emily | yes I do | 15:38:17 |
emily | or at least I have a very good guess | 15:38:19 |
emily | [builder@virby-vm:~]$ ping google.com
PING google.com (142.250.140.113) 56(84) bytes of data.
64 bytes from wj-in-f113.1e100.net (142.250.140.113): icmp_seq=1 ttl=114 time=10.1 ms
64 bytes from wj-in-f113.1e100.net (142.250.140.113): icmp_seq=2 ttl=114 time=9.36 ms
64 bytes from wj-in-f113.1e100.net (142.250.140.113): icmp_seq=3 ttl=114 time=9.04 ms
64 bytes from wj-in-f113.1e100.net (142.250.140.113): icmp_seq=4 ttl=114 time=9.57 ms
64 bytes from wj-in-f113.1e100.net (142.250.140.113): icmp_seq=5 ttl=114 time=9.75 ms
64 bytes from wj-in-f113.1e100.net (142.250.140.113): icmp_seq=6 ttl=114 time=9.29 ms
64 bytes from wj-in-f113.1e100.net (142.250.140.113): icmp_seq=7 ttl=114 time=9.16 ms
64 bytes from wj-in-f113.1e100.net (142.250.140.113): icmp_seq=8 ttl=114 time=10.5 ms
^C
--- google.com ping statistics ---
8 packets transmitted, 8 received, 0% packet loss, time 7033ms
rtt min/avg/max/mdev = 9.043/9.599/10.544/0.475 ms
[builder@virby-vm:~]$ ping ipv6.google.com
PING ipv6.google.com (2a00:1450:4009:81e::200e) 56 data bytes
| 15:38:27 |
emily | I bet your changes help it fall back to v4 | 15:38:37 |
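(The ping transcript above shows IPv4 replies coming back while the IPv6 ping to ipv6.google.com never answers, which is what emily's guess rests on. A quick way to check the same thing programmatically is to attempt a TCP connection over each address family with a short timeout; the host, port and timeout below are only illustrative, not anything from the PR.)

    import socket

    HOST = "google.com"  # dual-stack host, same one pinged above
    PORT = 443
    TIMEOUT = 5  # seconds, illustrative

    def check(family, label):
        try:
            # Resolve HOST for one address family only, then attempt a TCP connect.
            infos = socket.getaddrinfo(HOST, PORT, family, socket.SOCK_STREAM)
            addr = infos[0][4]
            with socket.socket(family, socket.SOCK_STREAM) as s:
                s.settimeout(TIMEOUT)
                s.connect(addr)
            print(f"{label}: reachable via {addr[0]}")
        except OSError as e:
            print(f"{label}: failed ({e})")

    check(socket.AF_INET, "IPv4")
    check(socket.AF_INET6, "IPv6")

On a VM with a broken IPv6 path, the second check would stall for the timeout and then report a failure, matching the hanging fetches.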
Toma | no idea | 15:38:56 |
Toma | the main change was actually specifying timeouts via with session.get(url, stream=True, timeout=(CONNECT_TIMEOUT, READ_TIMEOUT)) | 15:39:15 |
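(For reference, requests interprets a tuple timeout as (connect timeout, read timeout): the first bounds how long establishing the connection may take, the second how long a read may stall between received chunks. A minimal sketch of the pattern quoted above, with made-up constant values and URL handling, not the PR's actual code:)

    import requests

    CONNECT_TIMEOUT = 10   # seconds to establish the TCP/TLS connection (illustrative value)
    READ_TIMEOUT = 30      # max seconds a read may stall between chunks (illustrative value)

    def download(url: str, dest: str) -> None:
        with requests.Session() as session:
            # stream=True reads the body in chunks, so the read timeout applies
            # between chunks instead of to the whole transfer.
            with session.get(url, stream=True, timeout=(CONNECT_TIMEOUT, READ_TIMEOUT)) as resp:
                resp.raise_for_status()
                with open(dest, "wb") as f:
                    for chunk in resp.iter_content(chunk_size=65536):
                        f.write(chunk)
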
Toma | and also changing the task cancellation logic | 15:39:24 |
Toma | but if it's not even timing out, then IDK | 15:39:34 |
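(The "task cancellation logic" presumably refers to how the remaining concurrent downloads are wound down once one of them fails; the PR itself is the authoritative source. A generic sketch of that pattern with concurrent.futures, not a claim about what fetchCargoVendor actually does:)

    from concurrent.futures import ThreadPoolExecutor, as_completed

    def fetch_all(urls, fetch_one):
        """Run fetch_one(url) for every url; on the first failure, cancel whatever has not started."""
        with ThreadPoolExecutor(max_workers=8) as pool:
            futures = {pool.submit(fetch_one, url): url for url in urls}
            try:
                for fut in as_completed(futures):
                    fut.result()  # re-raises the worker's exception, if any
            except Exception:
                # Futures that have not started yet are cancelled; running ones
                # finish on their own, since threads cannot be forcibly stopped.
                for fut in futures:
                    fut.cancel()
                raise

Downloads that are already in flight cannot be interrupted this way, which is one reason per-request read timeouts still matter.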