OfBorg (173 members, 65 servers)
Number of builds and evals in queue: <TBD>
| Sender | Message | Time |
|---|---|---|
| | 12 Oct 2023 | |
| | (i've also been a bit busy the past few weeks since the OC may have come during a, uh, life event.... but i'll have time again this weekend to get some of that going) | 15:20:05 |
| | small stuff I'd like to see ofborg do better:<br>larger stuff: | 15:39:55 |
| | since you're asking... :p | 15:40:05 |
| | The small stuff you listed and ability for mortals to run pieces of or all of ofborg locally are definitely pain points i'm looking at helping short-term. I appreciate you making the list ❤️ | 15:42:31 |
| | yeah I don't think anything here is groundbreaking :) | 15:43:49 |
| | (also local testing will let even people with infra access not have to test changes in prod 😅) | 15:44:01 |
| | "properly mark errors as errors" - yes, this times 100 | 15:49:24 |
| | another "larger stuff" topic: I'm not sure if ofborg auto-scales based on queue length, but there's been a few times recently where it's 4-6h behind on processing PRs, and I wonder if we could just throw more compute at it | 15:58:44 |
| | I've already tried that (manually), unfortunately. A few years ago, 3-4 ofborg evaluators was enough to chew through the queue. Nowadays, even 9 is not enough, due to eval times blowing up. | 15:59:45 |
| | Also, I don't know how I feel about marking "errors as errors" (I assume this means "failed builds turn into failed checks"). There could be any number of reasons as to why the build failed that may not have anything to do with the derivation itself. Maybe the machine OOM'd. Maybe networking died. Maybe the kernel panicked. Maybe there was a hardware failure. Maybe.... Something that was decided early on was that things with a red X should not be merged under any circumstance (as always, there are exceptions, but those should be very rare). If one of those transient (or not so transient) failures happens, but nobody can reproduce it and someone decides to merge it anyways, that cheapens the meaning of a failed CI check. At least with a "skipped" check, it's communicated that something may have gone wrong, but it may not be anyone in particular's fault. | 16:03:02 |
| | (Not to say I'd block that change, per se, but it'd be nice to be convinced that it's the right thing to do.) | 16:03:49 |
| | allowing people to retry failed runs and figuring out how to address infra flakiness seem like they'd both help there - fwiw I've rarely seen ofborg failing for the reasons you're listing, and they seem to be all transient conditions | 16:04:44 |
| | (could even do something like "retries get scheduled on a different runner" if we wanted to be fancy :p) | 16:05:06 |
| | I agree that we should at the very least try to measure how often these problems happen before making any decision, but I don't think a low rate of false positives necessarily needs to be a blocker - it would still be a massive improvement | 16:06:01 |
| | (imo) | 16:06:03 |
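
On the auto-scaling question raised at 15:58: the core idea is to size the evaluator pool from the depth of the work queue. A minimal sketch of that loop follows, assuming the queue is visible through the RabbitMQ management API; the URL, queue name, credentials, and throughput constant are placeholders rather than ofborg's actual configuration, and (per the 15:59 reply) adding evaluators only helps when the pool size, not per-eval time, is the bottleneck.

```python
# Hypothetical sketch: size the evaluator pool from the backlog reported by
# the RabbitMQ management API. Queue name, URL, credentials, and the
# throughput constant are placeholders, not ofborg's real configuration.
import math
import requests

QUEUE_URL = "https://rabbitmq.example.org/api/queues/%2F/build-jobs"  # placeholder
JOBS_PER_EVALUATOR_PER_HOUR = 30        # rough guess; measure before trusting
MIN_EVALUATORS, MAX_EVALUATORS = 3, 12  # floor/ceiling on the pool size

def desired_evaluators(auth: tuple[str, str]) -> int:
    """How many evaluators we'd want so the current backlog clears in ~1 hour."""
    resp = requests.get(QUEUE_URL, auth=auth, timeout=10)
    resp.raise_for_status()
    backlog = resp.json()["messages"]   # ready + unacknowledged jobs in the queue
    wanted = math.ceil(backlog / JOBS_PER_EVALUATOR_PER_HOUR)
    return max(MIN_EVALUATORS, min(MAX_EVALUATORS, wanted))
```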
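
One way to reconcile "properly mark errors as errors" (15:49) with the transient-failure concern at 16:03 is to turn only failures of the derivation itself into a red X and report infrastructure trouble with a softer conclusion. A rough sketch of that mapping, assuming status is reported through the GitHub Checks API (whose accepted conclusions include "success", "failure", "neutral", and "skipped"); the outcome names below are invented for illustration, not ofborg's internal states.

```python
# Hypothetical sketch: translate a build outcome into a GitHub Checks
# conclusion so that only failures of the derivation itself produce a red X.
# The outcome names are illustrative, not ofborg's internal states.
from enum import Enum

class BuildOutcome(Enum):
    SUCCESS = "success"
    BUILD_FAILED = "build-failed"     # the derivation itself failed to build
    WORKER_LOST = "worker-lost"       # OOM, kernel panic, hardware failure
    NETWORK_ERROR = "network-error"   # fetch or substituter trouble

def check_conclusion(outcome: BuildOutcome) -> str:
    """Conclusion string to send to the GitHub Checks API."""
    if outcome is BuildOutcome.SUCCESS:
        return "success"
    if outcome is BuildOutcome.BUILD_FAILED:
        return "failure"   # a genuine red X: the PR should not be merged as-is
    # Transient infrastructure trouble: visibly not green, but not a veto.
    return "neutral"
```

Whether infra trouble maps to "neutral" or "skipped" is a policy choice; either keeps the "red X means do not merge" rule meaningful.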
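
The "retries get scheduled on a different runner" idea (16:05) mostly amounts to remembering where a job has already run and preferring a machine it hasn't touched yet. A toy sketch of that selection logic; the Job shape, runner names, and attempt limit are made up for illustration.

```python
# Hypothetical sketch: pick a runner for a retry while avoiding machines the
# job already ran on, so one flaky host can't fail the same job twice.
# The Job shape and the attempt limit are made up for illustration.
import random
from dataclasses import dataclass, field

@dataclass
class Job:
    attr: str                                       # e.g. a nixpkgs attribute path
    attempts: int = 0
    tried_on: set[str] = field(default_factory=set)

def pick_runner(job: Job, runners: list[str], max_attempts: int = 3) -> str | None:
    """Return a runner for the next attempt, or None once we give up."""
    if job.attempts >= max_attempts:
        return None
    fresh = [r for r in runners if r not in job.tried_on]
    choice = random.choice(fresh or runners)        # reuse a runner only if all were tried
    job.tried_on.add(choice)
    job.attempts += 1
    return choice
```

Recording which attempt eventually succeeded would also give the measurement asked for at 16:06: how often a first failure was a flake rather than a genuine build failure.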