| 22 Feb 2026 |
| Haze joined the room. | 02:54:28 |
| 24 Feb 2026 |
Bot_wxt1221 | Why is the nvidia-open beta still broken? | 01:30:50 |
Bot_wxt1221 | Does nobody care about it? | 01:30:58 |
Kevin Mittman (UTC-8) | Broken in what way? Which version is it, 590.48.01? | 03:13:32 |
connor (burnt/out) (UTC-8) | The drivers are maintained by a different team -- we are the NixOS CUDA team | 04:19:31 |
hexa (UTC+1) | is partially opting into cuda support even a supported thing? https://github.com/NixOS/nixpkgs/pull/489829/changes#r2841902335 | 15:40:55 |
connor (burnt/out) (UTC-8) | It is not | 16:51:10 |
connor (burnt/out) (UTC-8) | That’s partly why it’s a global config option
Otherwise people get inconsistent closures and failure modes that make me want to yell at people :( | 16:51:31 |
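For context, a minimal sketch of what the global opt-in being discussed looks like (assuming a NixOS-style configuration; `nixpkgs.config.cudaSupport` and `allowUnfree` are the standard option names, but verify against your nixpkgs revision):

```nix
# Sketch: enabling CUDA support globally via nixpkgs config,
# so the whole closure is built consistently with CUDA rather than
# mixing CUDA and non-CUDA builds of the same dependencies.
{
  nixpkgs.config = {
    allowUnfree = true;   # the CUDA toolkit is unfree
    cudaSupport = true;   # global flag; per-package opt-in is not supported
  };
}
```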
hexa (UTC+1) | Thanks! | 16:52:11 |
Gaétan Lepage | * 🔥 PyTorch 2.10.0 (+ triton 3.6.0) 🔥
Changelog: https://github.com/pytorch/pytorch/releases/tag/v2.10.0
PR: https://github.com/NixOS/nixpkgs/pull/484881
PR tracker: https://nixpkgs-tracker.ocfox.me/?pr=484881
This one took quite a while to bring to nixpkgs, and I'm glad to have finally gotten it merged!
Some basic testing has been done (basic builds, cudaSupport builds, some gpuChecks).
However, exhaustively building and testing all the downstream dependencies isn't feasible (at least without more time and hardware).
-> Please, don't hesitate to report any breakage in this channel, and feel free to ping me as well.
Thanks a lot to everyone who helped, and more generally to everyone else for your patience. | 23:32:32 |
| 25 Feb 2026 |
Hugo | Related to PyTorch but not to 2.10: it looks like my CI has not been able to build it since yesterday morning (it was still python3.13-torch-2.9.1) 🤔 Torch 2.10.0 does not solve that issue.
I build with:
cd nixpkgs-master
nix-build -I nixpkgs=. --arg config '{ allowUnfree = true; cudaSupport = true; openclSupport = true; rocmSupport = false; }' --option allow-import-from-derivation false --max-jobs 1 -A python313Packages.torch
Here is the last part of my build log (after 3h12 of building): https://pad.lassul.us/s/xWhnOmbeo2#
Any idea what could be the cause?
| 15:20:41 |
hexa (UTC+1) | log looks incomplete | 15:21:09 |
Hugo | HedgeDoc does not allow me to copy the complete log | 15:21:26 |
hexa (UTC+1) | bpa.st | 15:21:32 |
hexa (UTC+1) | * https://bpa.st | 15:21:35 |
Hugo | https://bpa.st/5CQA | 15:22:04 |
Hugo | * Related to PyTorch but not to 2.10: it looks like my CI has not been able to build it since yesterday morning (it was still python3.13-torch-2.9.1) 🤔 Torch 2.10.0 does not solve that issue.
I build with:
cd nixpkgs-master
nix-build -I nixpkgs=. --arg config '{ allowUnfree = true; cudaSupport = true; openclSupport = true; rocmSupport = false; }' --option allow-import-from-derivation false --max-jobs 1 -A python313Packages.torch
Here is the last part of the build log (after 3h12 of building): https://bpa.st/5CQA
Any idea what could be the cause?
| 15:22:17 |
Hugo | Edited my message with this link as well. | 15:22:31 |
Gaétan Lepage | Are you sure that you're not simply OOMing? | 19:43:36 |
Hugo | I investigated my metrics; it does look like an OOM for 2.10, but not for the latest failure of 2.9.1 🤔 | 20:28:31 |
Hugo | I started a build with fewer cores, will see where that goes (most likely going from 3h10 to ~5 hours) | 20:29:13 |
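Reducing per-build parallelism to curb peak memory can be expressed with Nix's `--cores` option (a sketch based on the invocation quoted above; the core count of 4 is an arbitrary example, not a tested value):

```shell
# Sketch: rebuild torch with fewer compile jobs to lower peak RAM usage.
# --max-jobs caps the number of concurrent derivations; --cores is passed
# to the builder as NIX_BUILD_CORES and caps parallelism within one build.
nix-build -I nixpkgs=. \
  --arg config '{ allowUnfree = true; cudaSupport = true; }' \
  --option allow-import-from-derivation false \
  --max-jobs 1 --cores 4 \
  -A python313Packages.torch
```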
| 4 Aug 2022 |
| Winter (she/her) joined the room. | 03:26:42 |
Winter (she/her) | (hi, just came here to read + respond to this.) | 03:28:52 |
tpw_rules | hey. i had previously sympathized with samuela and like i said before had some of the same frustrations. i just edited my github comment to add "[CUDA] packages are universally complicated, fragile to package, and critical to daily operations. Nix being able to manage them is unbelievably helpful to those of us who work with them regularly, even if support is downgraded to only having an expectation of function on stable branches." | 03:29:14 |
Winter (she/her) | In reply to @tpw_rules:matrix.org: "i'm mildly peeved about a recent merging of something i maintain where i'm pretty sure the merger does not own the expensive hardware required to properly test the package. i don't think it broke anything but i was given precisely 45 minutes to see the notification before somebody merged it ugh"
45 minutes? that's... not great. not to air dirty laundry, but did you do what samuela did in the wandb PR and at least say that that wasn't a great thing to do? (not sure how else to word that, you get what i mean) | 03:30:23 |
tpw_rules | no, i haven't yet, but i probably will | 03:31:03 |
Winter (she/her) | i admittedly did that with a PR once, i forget how long the maintainer was requested for but i merged it because multiple people reported it fixed the issue. the maintainer said "hey, don't do that" after and now i do think twice before merging. so it could help, is what i'm saying. | 03:31:50 |
tpw_rules | i'm not sure what went wrong with the wandb PR anyway, i think it was just a boneheaded move on the maintainer's part | 03:32:10 |