| 9 May 2026 |
hexa | no entries | 19:47:42 |
hexa | 5 tries iirc | 19:48:24 |
emily | seems like we should just make that infinite? | 19:50:46 |
emily | if you give up on uploading and rebuild instead, you then still have to upload the output, so no benefit | 19:50:58 |
emily | ok, it's actually ffmpeg-headless on unstable, last rebuilt 2026-04-23 | 19:53:53 |
emily | are these logs from the old or new queue runner? having trouble chasing code paths | 20:00:54 |
hexa | old | 20:01:29 |
hexa | the new runner isn't live | 20:01:36 |
emily | ok, so it looks like Nix will retry up to download-attempts times (even for uploads), unless it gets a status in 400–500 other than 408, 501, 505, or 511 | 20:18:09 |
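A minimal sketch of the retry rule as emily describes it (the function name and signature are illustrative, not Nix's actual identifiers; the real logic lives in Nix's C++ transfer code):

```python
def should_retry(status: int, attempt: int, max_attempts: int) -> bool:
    """Retry up to max_attempts (download-attempts) times, unless the
    HTTP status is a 'fatal' error per the rule quoted in the chat:
    anything in 400-500 other than 408, 501, 505, or 511 gives up."""
    if attempt >= max_attempts:
        return False
    # Statuses the chat lists as still retryable despite being errors.
    retryable_errors = {408, 501, 505, 511}
    fatal = 400 <= status <= 500 and status not in retryable_errors
    return not fatal
```

So a 404 aborts immediately, while a 408 (request timeout) or a 502 from a flaky gateway keeps being retried until the attempt budget runs out.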
hexa | and I assume you only got that from code and not docs | 20:18:42 |
emily | would it be feasible to set download-attempts = 1024 or something like that on the Nix used by Hydra? | 20:18:48 |
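What that would look like as a config fragment, assuming it goes into the nix.conf used by the Hydra host (the value 1024 is emily's suggestion, not a tested recommendation):

```
# effectively "never give up" on transient transfer failures
download-attempts = 1024
```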
hexa | that would be a hack, right? | 20:19:00 |
emily | yes I had to bounce between multiple repositories 🫠 | 20:19:02 |
emily | well it seems reasonable to say that Hydra giving up on an upload just never makes sense | 20:19:18 |
hexa | I kinda disagree | 20:19:27 |
emily | if it gives up on uploading something to the cache, then it's just going to schedule a pointless build for it later, and then try to upload that | 20:19:29 |
hexa | that part is true | 20:19:44 |
emily | which is exactly the same as continuing to try to upload, except that you do a pointless build which happens to also break things on Darwin | 20:19:46 |
hexa | but I also don't want an extended backlog of uploads ideally | 20:19:55 |
emily | right, but they'll happen anyway right? | 20:20:11 |
hexa | we can increase the retry amounts | 20:20:12 |
emily | they're ultimately part of the jobset | 20:20:20 |
hexa | except when they don't | 20:20:22 |
emily | I guess the difference is it can give up on leaves | 20:20:26 |
hexa | huh | 20:20:28 |
hexa | they? | 20:20:36 |
emily | the things being uploaded | 20:20:48 |
hexa | right | 20:20:52 |
emily | I think a nicer solution is ^ where you just never push out a .narinfo for any output until all the outputs are up | 20:21:13 |
emily | but looking at the C++ code it doesn't seem like that would be trivial to arrange if S3 can even do it | 20:21:29 |