| OfBorg | 182 Members | 65 Servers |
| Number of builds and evals in queue: <TBD> | | |
| Sender | Message | Time |
|---|---|---|
| 13 Apr 2023 | |
| (image attachment) | 09:09:30 |
| I just got an idea: if we only block merge in the presence of a mass rebuild and ignore the absence completely, it would work | 09:46:14 | |
| In reply to @7c6f434c:nitro.chat: I have been talking about this for months if not a year already: failed pipelines should be red instead of grey. If people don't know how to fix it, they should mark the package broken. That also saves Hydra resources. | 09:47:02 |
| In reply to @k900:0upti.me: yes. we should make more checks red imo, but a bit off topic | 09:48:02 |
| If you make build failures red now, you'll get timeouts red. | 09:49:56 | |
| So? Would that be a bad thing? | 12:11:42 | |
| If things time out we probably want to restart them anyway | 12:11:53 | |
| Staging times out all the time | 12:14:22 | |
| So probably not the best idea | 12:14:27 | |
| so exclude PRs targeting staging? I really want to push for that because many new people think grey can be ignored, which it usually cannot, especially for new packages. | 13:59:20 |
| The Chromium update will time out on its own | 15:26:08 |
| I also occasionally get ofborg timeouts for larger packages or if some big rebuilds (e.g. nodejs) are on master (generally just for darwin builders though -- usually linux builders don't suffer from build timeouts unless it's to staging or something is wrong). Could ofborg differentiate between timeout/failure and set timeout to neutral and propagate the failure red? | 15:30:01 | |
| (I think massive rebuilds to master should be red regardless, but I do wish build failures were also more obvious to others with ofborg) | 15:31:07 | |
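The differentiation asked about above could be sketched roughly as follows. This is a hypothetical mapping, not OfBorg's actual code or API: the outcome names and the `check_conclusion` function are assumptions, illustrating how a timeout could stay grey (neutral) while a genuine build failure propagates as red.

```python
# Sketch: map build outcomes to GitHub check conclusions so that
# timeouts stay "neutral" (grey) while real build failures go red.
# The outcome names and this function are hypothetical, not OfBorg's API.
from enum import Enum

class BuildOutcome(Enum):
    SUCCESS = "success"
    BUILD_FAILURE = "build-failure"  # the package itself failed to build
    DEP_FAILURE = "dep-failure"      # a dependency failed; not this PR's fault
    TIMEOUT = "timeout"              # builder ran out of time (common on darwin/staging)

def check_conclusion(outcome: BuildOutcome) -> str:
    """Return a GitHub Checks API conclusion string for a build outcome."""
    if outcome is BuildOutcome.SUCCESS:
        return "success"
    if outcome is BuildOutcome.BUILD_FAILURE:
        return "failure"  # red: the author should fix it or mark the package broken
    # Timeouts and dependency failures are transient noise for this PR;
    # keeping them grey keeps red meaningful.
    return "neutral"

print(check_conclusion(BuildOutcome.TIMEOUT))        # neutral
print(check_conclusion(BuildOutcome.BUILD_FAILURE))  # failure
```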
| Well, some large single-package updates that will time out on their own are master-targeted because they are too often security updates. (The list of transient failures to keep grey so that people stop caring about red is a complicated question; are ENOSPC errors rare enough?) | 15:53:11 |
| In reply to @sandro:supersandro.de: we could comment when the label is added, via GitHub Actions: https://docs.github.com/en/actions/managing-issues-and-pull-requests/commenting-on-an-issue-when-a-label-is-added | 17:50:13 |
| Oh that's probably easier than hacking this into ofborg | 17:52:49 | |
| That's actually way easier, wow | 17:53:44 | |
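A minimal sketch of the workflow described on the GitHub docs page linked above, commenting on a PR when a label is added. The label name and comment body are assumptions for illustration:

```yaml
# Sketch based on the linked GitHub Actions docs: comment on a PR
# when a particular label is added. Label name and comment text
# are assumptions, not an agreed-upon nixpkgs convention.
name: comment-on-label
on:
  pull_request_target:
    types: [labeled]
jobs:
  comment:
    if: github.event.label.name == 'mass rebuild'
    runs-on: ubuntu-latest
    permissions:
      pull-requests: write
    steps:
      - uses: actions/github-script@v7
        with:
          script: |
            await github.rest.issues.createComment({
              owner: context.repo.owner,
              repo: context.repo.repo,
              issue_number: context.issue.number,
              body: 'This PR causes a mass rebuild; consider targeting staging.',
            });
```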
| (Oh, and of course red on build failures in OfBorg is pure negative without recognition of dep failures — marking stuff with currently-being-fixed deps as broken just because of them is absolutely pointless) | 18:29:15 | |
| In reply to @7c6f434c:nitro.chat: (Does that occur often? Marking something red does not prevent merging if it really needs to go in) | 18:32:56 |
| I think it occurs quite often if you consider Darwin a platform, or if you consider staging a branch | 18:33:49 | |
| And the OfBorg design choice is extremely low tolerance for making pointless noise red. | 18:35:02 |
| In reply to @7c6f434c:nitro.chat: Problems that are transitive dep failures though, not timeouts or anything? | 18:35:14 |
| In reply to @7c6f434c:nitro.chat: Yes, I really really would not like pointless red. That would make it useless | 18:35:33 |
| (think = I have the impression of having seen it on a large share of failures there) | 18:35:41 |
| Memory is fallible, so any number I try to come up with from memory should not be trusted. But it takes some «training set» to get to the stage of the «ah, to the surprise of absolutely no one…» reaction | 18:38:07 |
| Yeah I sure don't have hard numbers. I know I see timeouts semi-often. I was just wondering if it could be differentiated since sometimes people don't notice when an ofborg build on, say, darwin for a new package is failing and they either need to fix it or mark it broken to avoid wasting resources | 18:40:49 | |
| … and of course the structure of OfBorg job dispatching does not fit well with claims like «a freshly added package is failing» | 18:44:07 |