| 3 Mar 2025 |
John Ericson | until I have time to fix it properly | 15:13:48 |
| 6 Mar 2025 |
polygon_ | Hello, is there a way to get lists (newly failing jobs, still failing jobs) as JSON or another easily machine-readable format? E.g. https://hydra.nixos.org/eval/1810654?full=1#tabs-now-fail | 18:09:04 |
K900 | No | 18:10:55 |
das_j | In reply to @k900:0upti.me "No": Actually, I crawl them for zh.fail and parse the HTML. Maybe I can just serve the cache files with nginx 🤔 | 19:25:49 |
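A minimal sketch of the HTML-crawling approach das_j describes, assuming the eval page keeps the "tabs-now-fail" container referenced in the URL above; the exact table layout and column order are assumptions, not a documented Hydra interface.

```python
# Sketch: pull the "newly failing" list from a Hydra eval page by parsing the HTML,
# since Hydra does not expose this tab as JSON. The container id comes from the
# URL fragment in the chat; the row/cell structure is assumed and may need
# adjusting against the real markup.
import requests
from bs4 import BeautifulSoup

EVAL_URL = "https://hydra.nixos.org/eval/1810654?full=1"

resp = requests.get(EVAL_URL, headers={"User-Agent": "gcc14-failure-survey"}, timeout=60)
resp.raise_for_status()

soup = BeautifulSoup(resp.text, "html.parser")
now_fail = soup.find(id="tabs-now-fail")  # assumed: the tab content sits in this container

rows = now_fail.find_all("tr") if now_fail else []
for row in rows:
    cells = [c.get_text(strip=True) for c in row.find_all("td")]
    if cells:
        print(cells)  # e.g. build id, job name, system, status (column order assumed)
```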
polygon_ | Do you happen to also crawl the logs? I noticed that quite a few packages failed (and less popular ones still do) after the move to GCC 14, due to some warnings that were turned into errors. I compiled a list of packages that failed both in the first eval after the GCC change and in a current eval, and identified 400 packages that failed back then and still fail now. Would the Hydra people be unhappy if I pulled all the logs for those? The failures caused by these warnings should be easily identifiable. | 19:56:25 |
K900 | Logs are available on S3 | 19:56:59 |
K900 | You can pull from there | 19:57:02 |
K900 | That's basically free | 19:57:11 |
polygon_ | Do I just use the Hydra links (e.g. https://hydra.nixos.org/build/291787217/nixlog/1/raw ) or something else? | 19:58:36 |
polygon_ | Basically got a list of build IDs | 19:58:45 |
K900 | Those Hydra links redirect to https://cache.nixos.org/log/3ipspns3k0gk8v9yp775w0blg8l6mm5w-clasp-2.6.0.drv | 19:59:07 |
K900 | That last part is just the drv name | 19:59:20 |
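Since the Hydra log links simply redirect to the public cache, a log can be fetched from cache.nixos.org directly once the .drv name is known, skipping Hydra entirely. A minimal sketch, assuming the endpoint serves the log as text, as the URL in the chat suggests:

```python
# Sketch: fetch a build log straight from the public cache, given the .drv name
# K900 mentions (the basename of the derivation's store path). This avoids
# hitting Hydra at all.
import requests

def fetch_log(drv_name: str) -> str:
    """drv_name e.g. '3ipspns3k0gk8v9yp775w0blg8l6mm5w-clasp-2.6.0.drv'."""
    url = f"https://cache.nixos.org/log/{drv_name}"
    resp = requests.get(url, timeout=120)
    resp.raise_for_status()
    return resp.text

# If only a Hydra build id is known, the Hydra /nixlog/.../raw link redirects to
# the same place, but that redirect costs Hydra a request (see below in the chat).
log = fetch_log("3ipspns3k0gk8v9yp775w0blg8l6mm5w-clasp-2.6.0.drv")
print(log[:500])
```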
K900 | Or do you not have those? | 19:59:28 |
polygon_ | I do not think so. Is the redirect expensive on Hydra? I just have everything that is in the list of failed builds. | 19:59:53 |
K900 | Expensive-ish | 20:00:07 |
K900 | If you have the attrnames, just run an eval locally | 20:00:19 |
K900 | And cross-reference | 20:00:24 |
K900 | A full nixpkgs eval takes 10-15 minutes and, notably, costs Hydra nothing | 20:00:54 |
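A sketch of the local-eval route K900 suggests: instantiate each attribute with nix-instantiate to get its .drv store path, then build the cache log URL from the basename. For the hashes to match what Hydra built, nixpkgs has to be pinned to the same revision as the failing eval; the attrnames below are placeholders, not taken from the actual failure list.

```python
# Sketch: map attribute names to .drv store paths with a local nix-instantiate
# call, then derive the cache.nixos.org log URLs from them. Assumes nixpkgs is
# reachable as '<nixpkgs>' via NIX_PATH and checked out at the revision the
# failing eval used, so the store hashes match Hydra's.
import subprocess

def drv_log_url(attr: str) -> str:
    out = subprocess.run(
        ["nix-instantiate", "<nixpkgs>", "-A", attr],
        capture_output=True, text=True, check=True,
    )
    drv_path = out.stdout.strip().splitlines()[-1]   # e.g. /nix/store/<hash>-clasp-2.6.0.drv
    drv_name = drv_path.rsplit("/", 1)[-1]
    return f"https://cache.nixos.org/log/{drv_name}"

for attr in ["clasp", "auctex"]:   # hypothetical attrnames standing in for the failure list
    print(attr, drv_log_url(attr))
```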
ghpzin (moved to @ghpzin:envs.net) | I assumed it would use json api endpoints for that, if you still crawl every build by id after pulling the list from the eval: https://github.com/NixOS/hydra/blob/master/doc/manual/src/api.md | 20:01:57 |
das_j | Let me check | 20:02:36 |
das_j | Given a build ID, I can give you lines in the form of "auctex.aarch64-darwin 212687937 auctex-12.3 aarch64-darwin Failed". Would that be helpful? | 20:03:11 |
polygon_ | I would need a mapping to the store path, then I can get them from S3 directly | 20:03:42 |
das_j | The second field is the build ID, maybe that helps? | 20:03:58 |
polygon_ | I already got all of that from the HTML list, but for the log I'd need the drv-path with the store hash: 3ipspns3k0gk8v9yp775w0blg8l6mm5w-clasp-2.6.0.drv | 20:05:26 |
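For builds where only the Hydra build ID is known, the JSON API linked above can return the build record, which includes the derivation path. A sketch under the assumption that the field is named drvpath (verify against an actual response); note each call still hits Hydra, so for hundreds of builds the local eval sketched above stays the cheaper option.

```python
# Sketch: one possible way to turn a Hydra build id into the .drv name via the
# JSON API from the linked api.md. This still costs Hydra one request per build,
# roughly the same as following the log redirect. The "drvpath" field name is
# an assumption about the response schema, not confirmed in the chat.
import requests

def drv_name_for_build(build_id: int) -> str:
    resp = requests.get(
        f"https://hydra.nixos.org/build/{build_id}",
        headers={"Accept": "application/json"},
        timeout=60,
    )
    resp.raise_for_status()
    drv_path = resp.json()["drvpath"]        # e.g. /nix/store/<hash>-clasp-2.6.0.drv
    return drv_path.rsplit("/", 1)[-1]

print(drv_name_for_build(291787217))
```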
polygon_ | I'll probably go with evaluating locally; it's just a subset of 400 packages | 20:06:17 |
| 7 Mar 2025 |
polygon_ | Thanks for the help yesterday, the list of builds still failing after the gcc-14 update is complete: https://polygon.github.io/fix-nixpkgs-gcc14/ | 15:43:18 |