| 28 Oct 2021 |
moritz.hedtke | Oh, I mostly meant the machine-generated files, where you could probably just regenerate. For other conflicts there may be special cases like sets (unordered lists) that you could merge in either order. But I thought about the whole thing and I think it's not super easy with the current model of code. If code were stored as an AST it would be way easier | 18:22:16
Sandro | even if it were stored as an AST we would need to know which files behave like this or that, but that is going really OT from staging | 20:32:11
moritz.hedtke | In reply to @sandro:supersandro.de even if it were stored as an AST we would need to know which files behave like this or that, but that is going really OT from staging yes, it would need close interaction between the language and the version control, but this is really off-topic | 20:32:47
moritz.hedtke | still the automatic regeneration of generated files would probably be feasible and solve some pain points | 20:33:34
Sandro | not sure if that would solve more problems than it would create. | 20:38:56 |
moritz.hedtke | In reply to @sandro:supersandro.de It would be really great if there were a good way to prevent these things other than keeping track of which packages already changed on which branch I'm not knowledgeable about the exact processes, but you're probably right and your original point is the best idea. | 20:40:32
Sandro | 🤔 GitHub already has a feature that shows you which PRs also modify the file you're viewing. I think it's broken on nixpkgs for obvious reasons, but having that for PRs would be nice.
but that is out of reach for us, because GitHub would probably need to add new API endpoints for it | 20:45:36
| 30 Oct 2021 |
Ryan Burns | Seems like it would be fairly low-overhead to add a PR action that attempts to locally merge the PR into staging. If the merge is nontrivial, notify the staging crew. | 01:30:05
Sandro | Or let it fail or something. | 07:47:57 |
moritz.hedtke | https://github.com/NixOS/nixpkgs/pull/143800 others, please also comment; I don't want to "decide" this on my own, especially as I'm not really knowledgeable in that area. Das_j seems to have thumbs-upped, so I assume that's agreement, but still | 13:18:20
| fabianhjr joined the room. | 15:32:12 |
hexa | ping darwin maintainers and pray | 19:05:42 |
| 31 Oct 2021 |
Ryan Burns | Looks like the libdazzle dependency is blocking some user-facing GNOME/Pantheon applications, but I couldn't reproduce its test failure | 22:39:23
| 1 Nov 2021 |
Vladimír Čunát | If a test is flaky, it's usually best to just disable it. (or at least make it non-blocking somehow) | 06:51:00
Vladimír Čunát | That is, assuming that the failures don't indicate some real problem that could affect functionality. | 06:51:47
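In nixpkgs that usually means disabling only the flaky test rather than the whole check phase, so the rest of the suite stays blocking. A minimal sketch for a Python package using pytestCheckHook, with made-up package and test names:

```nix
{ lib, buildPythonPackage, fetchPypi, pytestCheckHook }:

buildPythonPackage rec {
  pname = "example";        # hypothetical package
  version = "1.0";
  src = fetchPypi {
    inherit pname version;
    sha256 = lib.fakeSha256; # placeholder hash
  };
  checkInputs = [ pytestCheckHook ];
  # pytestCheckHook translates this into a pytest -k filter, so only the
  # flaky test is skipped while everything else keeps running
  disabledTests = [ "test_socket_reuse" ];
}
```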
hexa | sphinx on x86_64-darwin had a transient error due to a lack of network sandboxing and a resulting socket collision | 14:15:40
hexa | restarted, it blocked nix-info … ghc has now been building for 3.5h | 14:16:04
sterni | I wonder how old the mac builders are | 14:22:51 |
sterni | a GHC build should take under an hour even on a normal CPU | 14:23:10
sterni | unless it's somehow slower on darwin, which would be weird, though | 14:23:24
hexa | or the host is sufficiently partitioned with enough parallel jobs | 14:24:26
hexa | https://hydra.nixos.org/build/157060609 | 14:25:01 |
hexa | any bigger issues open, or can we merge soon? | 18:27:28
Vladimír Čunát | Right now I see over 8k build regressions: https://hydra.nixos.org/eval/1718058?compare=1718001 | 19:50:19
Vladimír Čunát | Well, over 7k if we subtract newly succeeding builds (and add the aarch64-darwin diff from a separate comparison). | 19:51:51
Ryan Burns | It looks like most of that is due to the transient sphinx error though. What do we do about that? | 19:53:11
Vladimír Čunát | The queue for x86_64-darwin is even empty now. So I restarted all staging-next failures (in the last evaluation). | 19:54:19