| 11 Aug 2025 |
emily | yeah ok no it's working with your patch | 15:37:33 |
emily | and not timing out | 15:37:38 |
emily | I don't understand why | 15:37:44 |
emily | yes I do | 15:38:17 |
emily | or at least I have a very good guess | 15:38:19 |
emily | [builder@virby-vm:~]$ ping google.com
PING google.com (142.250.140.113) 56(84) bytes of data.
64 bytes from wj-in-f113.1e100.net (142.250.140.113): icmp_seq=1 ttl=114 time=10.1 ms
64 bytes from wj-in-f113.1e100.net (142.250.140.113): icmp_seq=2 ttl=114 time=9.36 ms
64 bytes from wj-in-f113.1e100.net (142.250.140.113): icmp_seq=3 ttl=114 time=9.04 ms
64 bytes from wj-in-f113.1e100.net (142.250.140.113): icmp_seq=4 ttl=114 time=9.57 ms
64 bytes from wj-in-f113.1e100.net (142.250.140.113): icmp_seq=5 ttl=114 time=9.75 ms
64 bytes from wj-in-f113.1e100.net (142.250.140.113): icmp_seq=6 ttl=114 time=9.29 ms
64 bytes from wj-in-f113.1e100.net (142.250.140.113): icmp_seq=7 ttl=114 time=9.16 ms
64 bytes from wj-in-f113.1e100.net (142.250.140.113): icmp_seq=8 ttl=114 time=10.5 ms
^C
--- google.com ping statistics ---
8 packets transmitted, 8 received, 0% packet loss, time 7033ms
rtt min/avg/max/mdev = 9.043/9.599/10.544/0.475 ms
[builder@virby-vm:~]$ ping ipv6.google.com
PING ipv6.google.com (2a00:1450:4009:81e::200e) 56 data bytes
| 15:38:27 |
emily | I bet your changes help it fall back to v4 | 15:38:37 |
Toma | no idea | 15:38:56 |
Toma | the main change was actually specifying timeouts via with session.get(url, stream=True, timeout=(CONNECT_TIMEOUT, READ_TIMEOUT)) | 15:39:15 |
Toma | and also changing the task cancellation logic | 15:39:24 |
Toma | but if it's not even timing out, then IDK | 15:39:34 |
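[Editor's note: a minimal sketch of the timeout change Toma describes — passing a (connect, read) tuple to requests so a stalled connection raises instead of hanging forever. The `fetch` helper and the specific timeout values are illustrative, not from the actual patch.]

```python
import requests

CONNECT_TIMEOUT = 5   # seconds allowed to establish the TCP connection
READ_TIMEOUT = 30     # seconds allowed between bytes of the response

def fetch(url, out_path):
    # A (connect, read) tuple applies the two timeouts separately;
    # a bare number would apply the same value to both phases.
    with requests.Session() as session:
        with session.get(url, stream=True,
                         timeout=(CONNECT_TIMEOUT, READ_TIMEOUT)) as resp:
            resp.raise_for_status()
            with open(out_path, "wb") as f:
                for chunk in resp.iter_content(chunk_size=8192):
                    f.write(chunk)
```

Without any `timeout` argument, requests can block indefinitely on an unresponsive peer, which matches the hangs discussed above.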
emily | my guess is that the v4 timeout fails and that causes an internal fallback to v6 | 15:39:49 |
emily | but I haven't read requests code | 15:40:00 |
emily | I agree that async is probably a good idea | 15:40:09 |
Toma | and by async you mean? | 15:40:55 |
K900 | asyncio | 15:41:05 |
K900 | Instead of manual threading | 15:41:10 |
Toma | how big is asyncio's closure? | 15:41:35 |
Toma | og its part of python | 15:42:00 |
Toma | * oh its part of python | 15:42:04 |
Toma | never touched async python and wasn't really planning to | 15:42:37 |
K900 | You'll probably also want httpx as an async HTTP client library | 15:42:39 |
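[Editor's note: a rough sketch of what "asyncio instead of manual threading" could look like. `fake_download` is a stand-in for the real fetch logic (which would use an async HTTP client such as httpx); the names and timeout value are illustrative.]

```python
import asyncio

async def fake_download(name, seconds):
    # Placeholder for an actual async HTTP download.
    await asyncio.sleep(seconds)
    return name

async def main():
    tasks = [asyncio.create_task(fake_download(n, 0.01)) for n in ("a", "b")]
    try:
        # One overall deadline; gather preserves task order in its result.
        results = await asyncio.wait_for(asyncio.gather(*tasks), timeout=5)
    except asyncio.TimeoutError:
        # Cancellation is cooperative and propagates cleanly,
        # unlike stopping a thread.
        for t in tasks:
            t.cancel()
        raise
    return results

print(asyncio.run(main()))  # prints ['a', 'b']
```

The appeal over manual threading is mainly the cancellation story: a timed-out task can be cancelled at its next await point rather than orphaned.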
Toma | maybe, I don't know | 15:42:50 |
Toma | If someone wants to improve it, go ahead, I don't really write python that much | 15:43:25 |
emily | could RIIR :P | 15:47:18 |
Toma | the vendoring of the deps of the rust implementation would be interesting... | 15:49:42 |
Toma | I guess we could use importCargoLock | 15:49:55 |
Toma | but I don't think we should focus on that, we have more pressing issues, I think, e.g. lessening the cache burden, duplicated deps | 15:51:48 |
Toma | Also, interesting sidenote:
I encountered this around a month ago: https://github.com/flatpak/flatpak-builder-tools/blob/master/cargo/flatpak-cargo-generator.py
Flatpak also has their own custom vendoring script... | 15:53:28 |
| 12 Aug 2025 |
emily | is it possible to build rustc with only the Cranelift backend, not LLVM? | 15:46:28 |