12 Sep 2025 |
connor (he/him) (UTC-7) | Okay, what should nix-community/cuda-legacy look like, or how should it be structured? As an example: supporting CUDA 11. NCCL has already cut its last release supporting CUDA 11, so we need a package expression for that in the repo. Then there’s PyTorch: if it has already cut its last release supporting CUDA 11, we need an expression for that as well. For packages with many dependencies, like PyTorch, I’m not sure how long we’d be able to use upstream to provide dependencies, even if we vendor the package expression in tree, because eventually they’ll get bumped to something too new for the version of the package we’re locked to.
Is it viable for cuda-legacy to just re-expose a copy of Nixpkgs pinned to some point in time? I would think not (at least naively) without a way to limit the number of Nixpkgs instances being instantiated (e.g., providing cuda-legacy as an overlay that draws the relevant packages from a pinned version of Nixpkgs while somehow reusing as much of the underlying instance of Nixpkgs as possible). | 19:54:54 |
13 Sep 2025 |
| ysndr joined the room. | 00:58:03 |
| oak 🏳️🌈♥️ changed their profile picture. | 09:46:03 |
14 Sep 2025 |
| Emma [it/its] joined the room. | 08:39:56 |
connor (he/him) (UTC-7) | Duncan Gammie: thanks for the PR! I’ll try to review it soon, things have just been busy. | 18:22:54 |
15 Sep 2025 |
connor (he/him) (UTC-7) | Gaétan Lepage: as someone who has contributed to nixpkgs-review previously, how difficult do you think it would be to add functionality which tells you if packages were broken on the base commit? As an example, working on foundational CUDA packaging changes the store output paths of basically everything which relies (transitively) on CUDA. That includes a whole bunch of Python packages which are just usually broken but we're unaware of. I'd like to be able to run nixpkgs-review and have it tell me, in addition to the number of packages which failed to build as a result of the PR, which ones were already broken on the base commit. Would that involve a secondary stage of builds wherein all the packages which failed to build and already exist on the base commit are tried? | 02:57:42 |
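A minimal sketch of that "secondary stage" idea, assuming a local checkout of the base commit and that each failed attribute still evaluates there; the helper names are hypothetical:

```python
# Hypothetical sketch of the secondary build stage: retry every attribute that
# failed against the PR from a checkout of the base commit, to tell regressions
# apart from pre-existing breakage. Helper names and paths are illustrative.
import subprocess


def broken_on_base(attr: str, base_checkout: str) -> bool:
    """Return True if `attr` also fails to build from the base-commit checkout."""
    result = subprocess.run(
        ["nix-build", base_checkout, "-A", attr, "--no-out-link"],
        capture_output=True,
        text=True,
    )
    return result.returncode != 0


def classify_failures(failed_attrs: list[str], base_checkout: str) -> dict[str, str]:
    # Label each failure as either pre-existing or introduced by the PR.
    return {
        attr: "already broken on base" if broken_on_base(attr, base_checkout) else "broken by this PR"
        for attr in failed_attrs
    }
```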
Gaétan Lepage | This would be an amazingly useful feature. Every maintainer is facing the same issue all the time.
I have certainly already thought about it.
However, this is not so simple: we cannot infer this information from the raw data we currently have.
nixpkgs-review currently works like this:
- Compute (or fetch) which outPaths have changed between the base commit and the new one. -> Deduce the set of packages that need to be rebuilt.
- Rebuild those (excluding any marked as broken/unsupported) and report the successes/failures.
There is no notion of previous brokenness in what nixpkgs-review knows.
One solution would be to query Hydra and gather the latest status of each target package. This would be the easiest way to do it. | 09:52:30 |
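A rough sketch of that Hydra approach, using Hydra's JSON API for a job's latest successful build; the jobset name ("nixpkgs/trunk") and platform suffix are assumptions to adjust for whatever channel the PR targets:

```python
# Ask Hydra for the latest successful build of each target job; treat the
# absence of one as a hint that the package is already broken on the base.
import requests

HYDRA = "https://hydra.nixos.org"


def latest_successful_build(attr: str, platform: str = "x86_64-linux") -> dict | None:
    """Return Hydra's latest successful build for a job, or None if there is none."""
    url = f"{HYDRA}/job/nixpkgs/trunk/{attr}.{platform}/latest"
    response = requests.get(url, headers={"Accept": "application/json"}, timeout=30)
    if response.status_code != 200:
        return None
    return response.json()


def probably_broken_on_base(attr: str) -> bool:
    # No successful build on record is a strong hint the job already fails;
    # a finer-grained check would compare the build's evaluation with the base commit.
    return latest_successful_build(attr) is None
```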
connor (he/him) (UTC-7) | Rather than querying Hydra, do you think it would be enough to query all the substituters the system has configured to see if the output is already cached (indicating a successful build)? That way, if it’s something the user has already built on a different machine or has in a different cache, that would be searched too. | 14:55:32 |
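A minimal sketch of that substituter check: a Nix binary cache serves a `<hash>.narinfo` document for every store path it holds, so one HEAD request per configured cache tells you whether an output is substitutable. The cache list below is an example; a real implementation would read the user's `substituters` setting:

```python
# Probe each binary cache for the .narinfo of a store path; a 200 means some
# substituter can provide the output, i.e. it built successfully somewhere.
import requests

SUBSTITUTERS = ["https://cache.nixos.org", "https://cuda-maintainers.cachix.org"]


def hash_part(store_path: str) -> str:
    """Extract the 32-character hash from a path like /nix/store/<hash>-<name>."""
    return store_path.removeprefix("/nix/store/").split("-", 1)[0]


def is_cached(store_path: str) -> bool:
    narinfo = f"{hash_part(store_path)}.narinfo"
    return any(
        requests.head(f"{cache}/{narinfo}", timeout=30).status_code == 200
        for cache in SUBSTITUTERS
    )
```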
connor (he/him) (UTC-7) | Lastly, I was looking yesterday at the way the list of attributes is generated. I think I can greatly speed that up and have us benefit from caching.
For example, I was working on an eval-devs CLI addition for Nix based on the parallel evaluator Eelco was working on, which also uses the evaluation cache. The idea is that all the evaluation happens in parallel and the results are cached in the evaluation cache, so if you’re working on your PR and running nixpkgs-review multiple times as you fix things, then as long as you keep the same base commit (and use --merge commit or whatever the flag is for nixpkgs-review), you can reuse the evaluation you’ve already computed for the base every time you re-run the review. | 14:58:35 |
connor (he/him) (UTC-7) | Of course, that’s using DetSys Nix, so I’d probably just keep that in my fork to see if it’s worthwhile, but 🤷‍♂️ | 14:59:22 |
connor (he/him) (UTC-7) | I have so many other un-fun things I have to do 😖 | 15:00:14 |
Gaétan Lepage | Maybe we should keep discussing this in https://matrix.to/#/#nixpkgs-review:matrix.org. | 15:31:11 |
connor (he/him) (UTC-7) | SomeoneSerge (back on matrix): what are we going to use for video calls for our weekly? Signal was fine when it was just us, but my work blocks Jitsi and video calls over Matrix. | 15:55:39 |
Gaétan Lepage | Signal should work just fine with a handful of people. | 19:33:45 |
Gaétan Lepage | When is the next meeting btw? | 19:33:54 |
Gaétan Lepage | If I understand you correctly, you want to build a local "cache" of build successes/failures? | 19:34:34 |
Gaétan Lepage | For the second part of your message, you are talking about the "local" evaluation right? We already fetch the eval results from the GHA runs. | 19:35:13 |
SomeoneSerge (back on matrix) | Should we try eight tomorrow? | 22:40:34 |
SomeoneSerge (back on matrix) | ^ | 22:40:44 |
Gaétan Lepage | Sure, which timezone? AM or PM? | 22:43:55 |
SomeoneSerge (back on matrix) | am, pacific, iirc | 23:03:24 |
16 Sep 2025 |
connor (he/him) (UTC-7) | Pacific, AM | 00:35:16 |
connor (he/him) (UTC-7) | 8am would be nice | 00:35:46 |
Gaétan Lepage | OK, should be fine! | 08:09:35 |
SomeoneSerge (back on matrix) | In reply to @connorbaker:matrix.org 8am would be nice @justbrowsing:matrix.org: if you're around? | 10:54:05 |
| Kevin Mittman (UTC+9) changed their display name from Kevin Mittman to Kevin Mittman (UTC+9). | 11:11:43 |
Kevin Mittman (UTC+9) | sorry it's 8pm and trying to go to sleep to shake off the jet lag | 11:13:09 |
ysndr | In reply to @glepage:matrix.org Signal should work just fine with a handful of people. Is there a separate Signal group to join, or how would one get into the call? | 12:12:40 |
Gaétan Lepage | https://signal.group/#CjQKIK7-VLKtqJFT25O_L_5KG1ITydK2Of0zZRau1SPGjuAHEhDIdWbcfVXKTI_ByFI1lY1L | 12:20:32 |
17 Sep 2025 |
| BerriJ joined the room. | 16:08:43 |