| 25 May 2021 |
maralorn | And I mean not only philosophically, or just the binary: the actual nix closure of the output. | 15:40:18
sterni (he/him) | yeah the closure is one path for pkgsStatic | 15:41:05 |
sterni (he/him) | at least in normal cases | 15:41:10 |
sterni (he/him) | or at least it should :p | 15:41:19 |
sterni (he/him) | but honestly it's not that interesting unless you are copying around a lot of closures that were all built with different versions of stdenv | 15:41:53
maralorn | Cool | 15:41:54 |
sterni (he/him) | normally you have libc etc. in store anyways | 15:42:10 |
maralorn | True | 15:42:20 |
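(A quick way to check this claim yourself; a minimal sketch using `pkgsStatic.hello` as an arbitrary example package and assuming a nixpkgs checkout on `NIX_PATH`:)

```bash
# build a fully statically linked package
nix-build '<nixpkgs>' -A pkgsStatic.hello
# print the runtime closure of the result; for a fully static binary
# this should be exactly one store path, the build output itself
nix-store -qR ./result
```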
sterni (he/him) | Inviting everyone with restart-jobs to hunt for stale builds :) https://github.com/NixOS/nixpkgs/pull/123682#issuecomment-847975298 | 15:42:50 |
sterni (he/him) | unfortunately I don't remember what other packages had a build failure due to Killed by $thing | 15:43:07 |
maralorn | sterni (he/him): I have been restarting jobs for days. But it doesn't really matter. We only mark builds that fail themselves as broken. So the worst thing that could happen is that they are truly broken without us noticing. But when all maintained jobs are fine, I think we are fine. | 15:47:20 |
sterni (he/him) | yeah the impact is low with our new approach at least | 15:47:45 |
sterni (he/him) | I mean we could also hope a merge of master triggers a full rebuild and everything goes well this time :p | 15:48:01 |
sterni (he/him) | but a bit scared in light of how slow the darwin builds were | 15:48:19
sterni (he/him) | https://docs.google.com/spreadsheets/d/1ZvqZOdOse1lIAJxccsWdyFNeDLyVmoCUvI12LJNFMks | 15:49:13 |
maralorn | In reply to @maralorn:maralorn.de "I have been restarting jobs for days. […]": In the past I have restarted all failed jobs once or twice, but that's quite a shotgun approach. So now I resorted to picking single jobs to restart. | 15:50:30
sterni (he/him) | I see | 15:50:45 |
sterni (he/him) | I'm a bit unsure currently whether I should make an effort to jailbreak some of the failing random 1.2.0 packages | 15:51:15 |
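("Jailbreaking" here means lifting a package's over-strict version bounds, e.g. a `random < 1.2` constraint. A minimal sketch using nixpkgs' `haskell.lib.doJailbreak`; `somePackage` is a hypothetical placeholder, not one of the actual failing packages:)

```nix
{ pkgs ? import <nixpkgs> {} }:
pkgs.haskellPackages.override {
  overrides = self: super: {
    # doJailbreak strips the version bounds from the .cabal file, so the
    # package is built against random 1.2.0 despite its upper bound
    somePackage = pkgs.haskell.lib.doJailbreak super.somePackage;
  };
}
```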
maralorn | A feature in hydra to restart all jobs meeting the currently shown search criteria would be super cool. | 15:51:16 |
sterni (he/him) | I kinda wished for a way to grep through all failing jobs' logs | 15:51:42 |
sterni (he/him) | which would make things like random 1.2.0 or the aarch64 doctest failures less annoying to clean up | 15:52:00 |
maralorn | Uh, that sounds cool and like a very tough ask at the same time. | 15:52:27 |
sterni (he/him) | as an alternative to checking build logs and reporting issues upstream, we could just hope stackage has its impact and all those packages will unbreak by themselves in a couple of days | 15:52:39
sterni (he/him) | In reply to @maralorn:maralorn.de "Uh, that sounds cool and like a very tough ask at the same time.": sounds like a script which runs for five minutes | 15:52:56
sterni (he/him) | amirite | 15:53:00 |
maralorn | In reply to @sternenseemann:systemli.org "sounds like a script which runs for five minutes": It feels like another situation where the solution will query hydra a lot. | 16:03:58
sterni (he/him) | well no | 16:04:14 |
sterni (he/him) | well yes | 16:04:23 |
sterni (he/him) | you need to run the query which takes 2min | 16:04:29 |
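(The script being discussed might look roughly like the sketch below. It assumes Hydra's JSON API exposes an eval's `builds` ids and per-build `buildstatus`/`drvpath`/`job` fields, and that cache.nixos.org serves build logs under `/log/<drv>`; the eval id and grep pattern are placeholders. It also fires one request per build, which is exactly the "query hydra a lot" concern above.)

```bash
#!/usr/bin/env bash
# grep the logs of all failed builds of one Hydra eval for a pattern;
# requires curl and jq. EVAL and PATTERN are placeholders.
set -eu
EVAL=1234567       # hypothetical eval id
PATTERN='Killed'   # e.g. the "Killed by $thing" failures above

# the JSON view of an eval lists the ids of all builds it contains
for id in $(curl -sH 'Accept: application/json' \
    "https://hydra.nixos.org/eval/$EVAL" | jq -r '.builds[]'); do
  info=$(curl -sH 'Accept: application/json' "https://hydra.nixos.org/build/$id")
  # buildstatus 0 means success; skip everything that didn't fail
  [ "$(jq -r '.buildstatus' <<< "$info")" = "0" ] && continue
  drv=$(basename "$(jq -r '.drvpath' <<< "$info")")
  # cache.nixos.org serves build logs by derivation name, possibly
  # compressed; --compressed lets curl decode them transparently
  if curl -sfL --compressed "https://cache.nixos.org/log/$drv" | grep -q "$PATTERN"; then
    echo "match: $(jq -r '.job' <<< "$info") (build $id)"
  fi
done
```

(The same failed-build list could in principle drive the "restart all jobs meeting the search criteria" wish from above, but Hydra's restart action and its authentication flow are not shown here.)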