!sBfrWMVsLoSyFTCkNv:nixos.org

OfBorg

10 Oct 2023
Lily Foster (@lily:lily.flowers) (edited): The most expensive parts of the eval checks are by far the outpath calculations (with or without meta checks), which wouldn't be any less expensive here. I do actually intend to make that less expensive in general, because it's getting pretty obscene at this point, but I'm admittedly a bit scared to dive into what horrors lie in all of that. (10:27:44)
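For context, a minimal sketch of what an outpath-based check boils down to: evaluate every attribute's output path on the base and PR revisions, then diff the two sets. The input format here (roughly what `nix-env -qaP --no-name --out-path` prints) and the helper names are illustrative assumptions, not OfBorg's actual interface:

```python
"""Sketch: count rebuilds by diffing attr -> outpath maps from two evals.

Assumes each input file holds lines of the form "attrpath /nix/store/...";
the file format and names are assumptions for illustration only.
"""
import sys


def parse_outpaths(path: str) -> dict[str, str]:
    """Map attribute path -> output path, one entry per line."""
    result = {}
    with open(path) as f:
        for line in f:
            parts = line.split(None, 1)
            if len(parts) == 2:
                attr, outpath = parts
                result[attr] = outpath.strip()
    return result


def main() -> None:
    base = parse_outpaths(sys.argv[1])  # eval of the target branch
    pr = parse_outpaths(sys.argv[2])    # eval of the PR's merge commit
    # An attr "rebuilds" if it is new or its outpath changed.
    rebuilds = [a for a, p in pr.items() if base.get(a) != p]
    removed = [a for a in base if a not in pr]
    print(f"{len(rebuilds)} rebuilds, {len(removed)} removed attrs")


if __name__ == "__main__":
    main()
```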
11 Oct 2023
幸猫 (they/them) (@obfusk:matrix.org) changed their display name from FC (they/them) to Fay (she/her). (20:54:43)
12 Oct 2023
@adam:robins.wtf: Why does ofborg queue NixOS tests on darwin? nixosTests.lxd.container on aarch64-darwin (14:53:43)
Lily Foster (@lily:lily.flowers)
In reply to @adam:robins.wtf: "why does ofborg queue nixos tests on darwin? nixosTests.lxd.container on aarch64-darwin"
(edited) That probably could actually be optimized to not bother queueing builds on platforms that won't eval, e.g. for darwin, but it would take some refactoring IIRC, and it also doesn't cause much delay to just queue them, let eval fail on the builder, and report that back (except when the aarch64-darwin builders get backed up, so checks stay pending for way too long...). (14:59:48)
Lily Foster (@lily:lily.flowers): But it's not a NixOS-test-specific problem. (15:00:27)
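A sketch of the pre-queue filter being described, assuming each queued job carries the package's `meta.platforms` from eval; the `Job` type and helper are hypothetical stand-ins, not OfBorg's actual types:

```python
"""Sketch: skip queueing builds on platforms a derivation can't build on.

The Job dataclass and platform strings are illustrative assumptions; the
point is just to drop e.g. aarch64-darwin jobs whose meta.platforms
excludes darwin before they ever reach a builder queue.
"""
from dataclasses import dataclass


@dataclass
class Job:
    attr: str             # e.g. "nixosTests.lxd.container"
    platforms: list[str]  # from meta.platforms at eval time, if known


def queueable_on(job: Job, builder_platform: str) -> bool:
    """Only queue if the derivation claims support for the builder's platform."""
    if not job.platforms:  # platforms unknown: fall back to queueing anyway
        return True
    return builder_platform in job.platforms


jobs = [Job("nixosTests.lxd.container", ["x86_64-linux", "aarch64-linux"])]
for job in jobs:
    for platform in ("x86_64-linux", "aarch64-darwin"):
        if queueable_on(job, platform):
            print(f"queue {job.attr} on {platform}")
        else:
            print(f"skip {job.attr} on {platform} (won't build there)")
```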
@adam:robins.wtf: Right :) (15:04:55)
@adam:robins.wtf: I guess I just figure there's no need to spend any cycles doing an eval that will never succeed. (15:06:04)
@adam:robins.wtf: Especially given that, as you point out, the aarch64-darwin builders get backed up. (15:06:20)
Lily Foster (@lily:lily.flowers): Oh, I agree there's no need to; I'm just also saying it's partly due to how ofborg handles that right now. (15:06:37)
Lily Foster (@lily:lily.flowers): It's on the "nice to have" list I'm vaguely putting together. (15:06:56)
Lily Foster (@lily:lily.flowers): (I really gotta pretty up some lists and plans and get community feedback going for ofborg soon...) (15:07:53)
@adam:robins.wtf: Yeah, it'd be nice to see where you think the priorities are :) (15:13:25)
Lily Foster (@lily:lily.flowers): It'd be even nicer to see what others think they should be, tbh, because I'm usually not great at prioritization and often have bad ideas... (15:17:23)
Lily Foster (@lily:lily.flowers): (I've also been a bit busy the past few weeks since the OC may have come during a, uh, life event... but I'll have time again this weekend to get some of that going.) (15:20:05)
@delroth:delroth.net:

Small stuff I'd like to see ofborg do better:

  • better handle changes to NixOS modules: automatically run tests, notify maintainers
  • properly mark errors as errors, not just "skipped" (which is easy to miss) - and yes, that means people will have to fix their broken/flaky tests
  • maybe split off the eval that computes the number of rebuilds into a separate async step, so that maintainer notifications / builds / tests can start while that long eval step gets processed (see the sketch below)

Larger stuff:

  • a merge queue for nixpkgs, when :) - or at least a way to auto-merge a PR once eval + builds + tests have completed successfully
  • automatic nixpkgs-review-like rebuilds of dependents for changes with few dependents, and maybe the same with some sampling for changes that impact more derivations (a smoke test, basically)
  • figuring out a way to have a local testing setup for ofborg, so contributions aren't limited to people with access to the infra (maybe via a public tee of webhook events to handle dynamic public/anonymous subscribers, plus a dry-run mode that doesn't try to perform actions on GitHub)

(15:39:55)
@delroth:delroth.net: Since you're asking... :p (15:40:05)
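A sketch of the third "small stuff" point above: run the slow rebuild-count eval as a concurrent task so the cheap stages aren't blocked on it. Every stage function here is a hypothetical placeholder for OfBorg's real pipeline, with sleeps standing in for the actual work:

```python
"""Sketch: don't block maintainer pings and builds on the long eval.
All stage functions are hypothetical placeholders."""
import asyncio


async def compute_rebuild_count(pr: int) -> int:
    await asyncio.sleep(3)  # stands in for the long outpath eval
    return 1234


async def notify_maintainers(pr: int) -> None:
    await asyncio.sleep(0.1)  # cheap: parse changed files, ping people


async def queue_builds_and_tests(pr: int) -> None:
    await asyncio.sleep(0.2)  # cheap: enqueue the requested jobs


async def process_pr(pr: int) -> None:
    # Kick off the slow eval, but don't await it before the fast stages.
    rebuilds = asyncio.create_task(compute_rebuild_count(pr))
    await asyncio.gather(notify_maintainers(pr), queue_builds_and_tests(pr))
    # The rebuild-count label lands whenever the eval finishes.
    print(f"PR #{pr}: {await rebuilds} rebuilds")


asyncio.run(process_pr(260000))
```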
Lily Foster (@lily:lily.flowers): The small stuff you listed, and the ability for mortals to run pieces of (or all of) ofborg locally, are definitely pain points I'm looking at helping with short-term. I appreciate you making the list ❤️ (15:42:31)
@delroth:delroth.net: Yeah, I don't think anything here is groundbreaking :) (15:43:49)
Lily Foster (@lily:lily.flowers): (Also, local testing will let even people with infra access avoid testing changes in prod 😅) (15:44:01)
@adam:robins.wtf: "Properly mark errors as errors" - yes, this times 100. (15:49:24)
@delroth:delroth.net: Another "larger stuff" topic: I'm not sure if ofborg auto-scales based on queue length, but there have been a few times recently where it's 4-6h behind on processing PRs, and I wonder if we could just throw more compute at it. (15:58:44)
cole-h (@cole-h:matrix.org): I've already tried that (manually), unfortunately. A few years ago, 3-4 ofborg evaluators were enough to chew through the queue. Nowadays, even 9 is not enough, due to eval times blowing up. (15:59:45)
cole-h (@cole-h:matrix.org) (edited): Also, I don't know how I feel about marking "errors as errors" (I assume this means "failed builds turn into failed checks"). There could be any number of reasons why the build failed that may have nothing to do with the derivation itself. Maybe the machine OOM'd. Maybe networking died. Maybe the kernel panicked. Maybe there was a hardware failure. Maybe... Something that was decided early on was that things with a red X should not be merged under any circumstances (as always, there are exceptions, but those should be very rare). If one of those transient (or not-so-transient) failures happens, but nobody can reproduce it and someone decides to merge anyway, that cheapens the meaning of a failed CI check. At least with a "skipped" check, it's communicated that something may have gone wrong, but it may not be anyone in particular's fault. (16:03:05)
cole-h (@cole-h:matrix.org): (Not to say I'd block that change, per se, but it'd be nice to be convinced that it's the right thing to do.) (16:03:49)
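One possible middle ground between those two positions: classify failures before choosing the check conclusion, so only failures that look reproducible get the red X while infra-looking ones stay neutral. The log-grepping heuristics and function names below are purely illustrative assumptions:

```python
"""Sketch: classify a failed build as "infra" vs "real" before deciding
whether to post a red X or a neutral/skipped check. The signatures and
the crude log heuristics are assumptions for illustration."""

INFRA_MARKERS = (
    "out of memory",            # builder OOM
    "connection reset",         # networking died mid-build
    "error: unable to download",
)


def classify_failure(log_tail: str) -> str:
    """Return "infra" for failures that look environmental, else "real"."""
    lowered = log_tail.lower()
    if any(marker in lowered for marker in INFRA_MARKERS):
        return "infra"
    return "real"


def check_conclusion(log_tail: str) -> str:
    # Only genuinely reproducible-looking failures get the red X;
    # infra-looking ones stay neutral and can be retried instead.
    return "failure" if classify_failure(log_tail) == "real" else "neutral"


print(check_conclusion("gcc: internal compiler error"))         # failure
print(check_conclusion("curl: (56) connection reset by peer"))  # neutral
```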
@delroth:delroth.net: Allowing people to retry failed runs and figuring out how to address infra flakiness both seem like they'd help there - FWIW, I've rarely seen ofborg fail for the reasons you're listing, and they all seem to be transient conditions. (16:04:44)
@delroth:delroth.net: (We could even do something like "retries get scheduled on a different runner" if we wanted to be fancy :p) (16:05:06)
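A sketch of that retry idea, assuming each job tracks which builders it has already failed on; the names and types here are hypothetical, not OfBorg's scheduler API:

```python
"""Sketch: reschedule a failed job on a builder it hasn't failed on yet,
so one flaky machine can't keep failing the same check. All names here
are hypothetical, not OfBorg's actual scheduler."""
import random

MAX_ATTEMPTS = 3


def pick_builder(candidates: set[str], failed_on: set[str]) -> str | None:
    """Prefer builders that haven't already failed this job."""
    fresh = candidates - failed_on
    return random.choice(sorted(fresh)) if fresh else None


def retry_job(job: str, builders: set[str], failed_on: set[str]) -> None:
    if len(failed_on) >= MAX_ATTEMPTS:
        print(f"{job}: giving up, marking the check as failed")
        return
    builder = pick_builder(builders, failed_on)
    if builder is None:
        print(f"{job}: no untried builder left, marking the check as failed")
        return
    print(f"{job}: retrying on {builder}")


retry_job("nixosTests.lxd.container",
          builders={"builder-1", "builder-2", "builder-3"},
          failed_on={"builder-1"})
```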
@delroth:delroth.net: I agree that we should at the very least try to measure how often these problems happen before making any decision, but I don't think a low rate of false positives necessarily needs to be a blocker - it would still be a massive improvement. (16:06:01)
@delroth:delroth.net: (imo) (16:06:03)
