OfBorg | 174 Members | 61 Servers
Topic: Number of builds and evals in queue: <TBD>
| Message | Time |
|---|---|
| 13 Apr 2023 | |
| I think I know what I'm doing this weekend | 08:45:45 | |
| Blocking merge to master if the rebuild amount is too high, and bringing ofborg into the hot path, might not be the best idea. Calculating the rebuild amount takes a good amount of time; if ofborg is overloaded, potentially hours. Also, there are not even a handful of people maintaining ofborg, and the domain for it recently expired. | 08:56:50 | |
| Well it can't actually block | 08:57:18 | |
| You can still merge even if it's red | 08:57:23 | |
| And it probably shouldn't be red until the rebuild count is known | 08:57:34 | |
| But I think this is the kind of situation where a slow failsafe is better than no failsafe | 08:58:13 | |
| Well, merging before eval checks happen isn't great either. | 08:58:25 | |
| (Though of course, there are cases where you know what you're doing.) | 08:58:59 | |
| https://docs.github.com/en/repositories/configuring-branches-and-merges-in-your-repository/defining-the-mergeability-of-pull-requests/about-protected-branches#require-status-checks-before-merging | 09:00:18 | |
| Afaik there's no way to block on a specific check | 09:04:35 | |
| *(image attachment)* | 09:07:33 | |
| Well, OfBorg is very conservative with red… | 09:08:16 | |
| Looks like it's possible | 09:09:27 | |
| *(image attachment)* | 09:09:30 | |
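The branch-protection setting from the linked GitHub docs maps to the REST API endpoint `PUT /repos/{owner}/{repo}/branches/{branch}/protection`. A minimal sketch of the request body, assuming a required check context named `ofborg-eval` (the real context name may differ); the snippet only builds the payload, it does not call the API:

```python
import json

def protection_payload(required_contexts):
    """Body for PUT /repos/{owner}/{repo}/branches/{branch}/protection.

    All four top-level keys are required by that endpoint; everything
    except the required status checks is left disabled here.
    """
    return {
        "required_status_checks": {
            "strict": False,  # don't also require branches to be up to date
            "contexts": list(required_contexts),
        },
        "enforce_admins": False,
        "required_pull_request_reviews": None,
        "restrictions": None,
    }

# "ofborg-eval" is an assumed check name, not necessarily the real context.
payload = protection_payload(["ofborg-eval"])
print(json.dumps(payload, indent=2))
```

Note that with this setting a red run of the listed context does block the merge button, which is why the color semantics discussed below matter.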
| I just got an idea: if we only block merge in the presence of a mass rebuild and ignore the absence completely, it would work | 09:46:14 | |
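That idea can be sketched as a tiny decision function (hypothetical; ofborg has no such helper, and the threshold is an assumption): stay neutral while the rebuild count is still unknown, and only go red once a mass rebuild is actually detected.

```python
from typing import Optional

MASS_REBUILD_THRESHOLD = 500  # assumed cutoff, not an ofborg constant

def merge_block_conclusion(rebuild_count: Optional[int]) -> str:
    """Map an eval's rebuild count to a check conclusion.

    None means the count is still being calculated (which can take
    hours when ofborg is overloaded), so the check must not go red yet.
    """
    if rebuild_count is None:
        return "neutral"
    if rebuild_count > MASS_REBUILD_THRESHOLD:
        return "failure"  # blocks merge if the check is marked required
    return "success"
```

This is the "slow failsafe" shape: a PR that never gets a rebuild count is never blocked, so ofborg slowness cannot stall ordinary merges.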
| In reply to @7c6f434c:nitro.chat: I have been talking about this for months, if not a year, already: failed pipelines should be red instead of grey. If people don't know how to fix it, they should mark the package broken. That also saves Hydra resources. | 09:47:02 | |
| In reply to @k900:0upti.me: yes, we should make more checks red imo, but that's a bit off topic | 09:48:02 | |
| If you make build failures red now, you'll get timeouts red. | 09:49:56 | |
| So? Would that be a bad thing? | 12:11:42 | |
| If things time out we probably want to restart them anyway | 12:11:53 | |
| Staging times out all the time | 12:14:22 | |
| So probably not the best idea | 12:14:27 | |
| So exclude PRs targeting staging? I really want to push for that, because many new people think grey can be ignored, which it usually cannot, especially for new packages. | 13:59:20 | |
| A Chromium update will time out on its own | 15:26:08 | |
| I also occasionally get ofborg timeouts for larger packages, or when some big rebuilds (e.g. nodejs) are on master (generally just for darwin builders, though; usually linux builders don't suffer from build timeouts unless the PR targets staging or something is wrong). Could ofborg differentiate between timeout and failure, setting timeouts to neutral and propagating failures as red? | 15:30:01 | |
| (I think massive rebuilds to master should be red regardless, but I do wish build failures were also more obvious to others with ofborg) | 15:31:07 | |
| Well, some large single-package updates that will time out on their own are master-targeted, because they are too often security updates. (The list of transient failures to keep grey, so that people don't stop caring about red, is a complicated question; are ENOSPC failures rare enough?) | 15:53:11 | |
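The timeout/failure distinction asked about above could look like the following sketch (ofborg does not currently expose such a mapping; the staging exclusion follows the earlier suggestion in this thread):

```python
def report_color(outcome: str, target_branch: str) -> str:
    """Map a build outcome to a GitHub-style check conclusion.

    Timeouts stay neutral because they are often transient (darwin
    builders, large rebuilds like nodejs on master, ENOSPC); only
    genuine build failures on non-staging targets go red.
    """
    if outcome == "timeout":
        return "neutral"
    if outcome == "failure" and not target_branch.startswith("staging"):
        return "failure"
    if outcome == "failure":
        return "neutral"  # staging times out / fails routinely anyway
    return "success"
```

The open question from the messages above is exactly which transient outcomes belong in the neutral bucket without training people to ignore red.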
| In reply to @sandro:supersandro.de: we could comment when a label is added: https://docs.github.com/en/actions/managing-issues-and-pull-requests/commenting-on-an-issue-when-a-label-is-added | 17:50:13 | |
| Oh that's probably easier than hacking this into ofborg | 17:52:49 | |
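The linked docs implement label-triggered comments as a GitHub Actions workflow; as a language-neutral sketch, this builds the REST call such a workflow would issue (`POST /repos/{owner}/{repo}/issues/{number}/comments` — PRs share the issues comment endpoint). The label name, PR number, and message are placeholders; nothing is actually sent.

```python
def label_comment_request(owner: str, repo: str, pr_number: int, label: str):
    """Build the endpoint URL and JSON body for commenting on a PR
    when a label is added. Message text is a hypothetical example."""
    url = (
        f"https://api.github.com/repos/{owner}/{repo}"
        f"/issues/{pr_number}/comments"
    )
    body = {
        "body": f"This PR was labeled `{label}`; "
                "please check the rebuild count before merging."
    }
    return url, body

# Placeholder values for illustration only.
url, body = label_comment_request("NixOS", "nixpkgs", 1, "mass-rebuild")
print(url)
```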