| 28 Aug 2021 |
hexa | so very true 😀 | 16:07:52 |
| 30 Aug 2021 |
andi- | Is it possible to keep the scope of the next staging-next run smaller and include https://github.com/NixOS/nixpkgs/pull/131618 instead? It is already a comparatively big change (on the risk side) and I'd like to avoid having to debug binutils, glibc, ... issues if things go wrong while already debugging systemd. | 13:15:23 |
hexa | I don't mind, then we skip staging->staging-next once and you'd retarget on staging-next with base branch master. Also running v249 locally for 48h and didn't notice any obvious breakage 👍️ | 13:18:02 |
andi- | I'm currently testing the PR rebased on master (with a hydra jobset) so ideally the amount of (additional) rebuilds will be very small anyway. | 13:22:13 |
Vladimír Čunát | Maybe the systemd jobset should not have (many times) more build shares than "trunk-combined"? | 14:44:26 |
Vladimír Čunát | I lowered them now. nixos-unstable is still waiting for the first bump with new openssl. | 15:09:36 |
Vladimír Čunát | (and I cancelled all but the latest builds in the systemd jobset) | 15:29:33 |
hexa | In reply to @vcunat:matrix.org Maybe the systemd jobset should not have (many times) more build shares than "trunk-combined"? which is kinda funny, given the amount of shares allocated to the haskell-updates job | 19:16:20 |
Vladimír Čunát | My understanding (from many years past) is that it was intentional there. | 19:16:59 |
Vladimír Čunát | It was meant that the haskell updates get iterated extremely quickly. | 19:17:18 |
hexa | yup, they eval 3-4 times a day currently. https://hydra.nixos.org/jobset/nixpkgs/haskell-updates#tabs-evaluations | 19:17:40 |
Vladimír Čunát | (i.e. the jobset just gets checked and merged within a couple days) | 19:17:50 |
Vladimír Čunát | Ah, well, I didn't mean this iteration speed :-) Maybe it now consumes quite a fraction of Hydra's resources. | 19:18:21 |
hexa | pretty sure the cycle is bi-weekly now | 19:18:27 |
sterni | it's at least bi-weekly, it really depends on how many regressions there are to fix | 19:18:46 |
sterni | i.e. I have merged the branch within three days a couple of times before | 19:19:07 |
Vladimír Čunát | When you have multiple mass rebuilds, it doesn't make sense for any pair to have a similar amount of shares, especially if they target the same branch and their combination creates yet another mass rebuild. You basically want a priority queue instead. | 19:19:39 |
Vladimír Čunát | (at least as long as the rebuild resources are relatively scarce) | 19:20:01 |
Vladimír Čunát | Now of course the contention is who gets more priority :-) | 19:20:30 |
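A minimal sketch of the difference Vladimír is describing, with made-up jobset names, share values, and consumed-time figures (this is not Hydra's actual scheduler or data model): proportional shares interleave all mass rebuilds so none of them finishes early, while a priority queue drains the most important one first.

```python
import heapq

# Hypothetical jobsets, each a mass rebuild targeting the same branch.
# "shares" is a proportional weight; "priority" is an explicit rank
# (lower number = build first). All names and numbers are illustrative.
jobsets = [
    {"name": "staging-next",    "shares": 100, "priority": 1},
    {"name": "systemd-update",  "shares": 100, "priority": 2},
    {"name": "haskell-updates", "shares": 100, "priority": 3},
]

def pick_by_shares(jobsets, consumed):
    """Proportional shares: the jobset that has used the smallest
    fraction of its allotment goes next, so similar shares keep
    interleaving all mass rebuilds."""
    return min(jobsets, key=lambda j: consumed[j["name"]] / j["shares"])

def pick_by_priority(jobsets):
    """Priority queue: always work on the highest-priority mass
    rebuild until it is drained, then move to the next."""
    heap = [(j["priority"], j["name"]) for j in jobsets]
    heapq.heapify(heap)
    return heapq.heappop(heap)[1]

consumed = {"staging-next": 40, "systemd-update": 35, "haskell-updates": 50}
print(pick_by_shares(jobsets, consumed))  # systemd-update (lowest fraction used)
print(pick_by_priority(jobsets))          # staging-next (priority 1)
```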
sterni | in my experience the factors scheduled job count and the time they have been scheduled for are more relevant than scheduling shares anyway | 19:20:36 |
sterni | haskell-updates quite often gets into a situation where nothing is built for days even though it probably has the highest scheduling shares on hydra atm | 19:21:09 |
Vladimír Čunát | Ah, yes... I've heard that already. And I also noticed myself that sometimes the scheduler appears to act weird. | 19:22:07 |
hexa | maybe x86_64-darwin related? 🤔 | 19:22:08 |
sterni | as far as I understand it Hydra tries to balance build time, so at some point if you have built too much you just get nothing anymore | 19:22:17 |
sterni | and yeah, we often get stuck on x86_64-darwin, but also aarch64-linux sometimes; not sure what that is about | 19:22:41 |
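A rough sketch of the fair-share effect sterni describes, under the assumption that recently consumed build time is weighed against configured shares (the window, figures, and jobset names are made up; Hydra's real accounting is more involved): a jobset that has already burned a lot of build time can end up last in line even with the highest share value.

```python
# Toy fair-share picker: rank jobsets by recent build time consumed
# per configured share. All numbers are illustrative, not Hydra's.
recent_build_seconds = {      # build time used in some recent window
    "haskell-updates": 500_000,
    "staging-next":     80_000,
    "trunk-combined":  120_000,
}
shares = {
    "haskell-updates": 300,   # highest configured shares...
    "staging-next":    100,
    "trunk-combined":  100,
}

def next_jobset():
    # Lower consumed-time-per-share wins; a jobset that already built
    # a lot recently ends up last even with the biggest share value.
    return min(shares, key=lambda j: recent_build_seconds[j] / shares[j])

print(next_jobset())  # staging-next (80_000 / 100 = 800, the lowest ratio)
```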