| 17 Feb 2026 |
Lun | too late for 2.10 | 00:02:10 |
Gaétan Lepage | Ok, I guess we'll just wait for the next release then. | 20:09:12 |
| 21 Feb 2026 |
Kevin Mittman (UTC-8) | Is there a hard cap on input tarball size? Considering splitting a particularly large one into multiple components | 01:54:03 |
connor (burnt/out) (UTC-8) | Not that I’m aware of. But such a change would require some rework on the Nixpkgs side to recombine sources or know which components to pick (which isn’t necessarily bad, just a thing that would need to happen). | 17:20:26 |
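For anyone exploring the splitting route: the parts can be recombined byte-for-byte before unpacking, so each part could in principle be fetched as its own fixed-output source. A minimal shell sketch of the split/recombine/verify round trip (file names here are made up for illustration):

```shell
# Create a sample tarball as a stand-in for the large input archive.
mkdir -p pkg && echo "payload" > pkg/data.txt
tar -czf source.tar.gz pkg

# Split into fixed-size parts (source.tar.gz.aa, source.tar.gz.ab, ...).
split -b 100 source.tar.gz source.tar.gz.

# Recombine the parts; shell glob order matches split's suffix order,
# so the result is byte-identical to the original archive.
cat source.tar.gz.a* > recombined.tar.gz

# Verify the recombined archive against the original checksum.
a=$(sha256sum source.tar.gz | cut -d' ' -f1)
b=$(sha256sum recombined.tar.gz | cut -d' ' -f1)
[ "$a" = "$b" ] && echo "checksums match"
```

The checksum comparison is the important part: whatever recombines the parts on the Nixpkgs side would need to reproduce the original archive exactly for the fixed-output hash to hold.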
| 24 Feb 2026 |
Bot_wxt1221 | Why is nvidia-open beta still broken? | 01:30:50 |
Bot_wxt1221 | Does no one care about it? | 01:30:58 |
Kevin Mittman (UTC-8) | Broken in what way? Which version is it, 590.48.01? | 03:13:32 |
connor (burnt/out) (UTC-8) | The drivers are maintained by a different team -- we are the NixOS CUDA team | 04:19:31 |
hexa (UTC+1) | is partially opting into cuda support even a supported thing? https://github.com/NixOS/nixpkgs/pull/489829/changes#r2841902335 | 15:40:55 |
connor (burnt/out) (UTC-8) | It is not | 16:51:10 |
connor (burnt/out) (UTC-8) | That’s partly why it’s a global config option
Otherwise people get inconsistent closures and failure modes that make me want to yell at people :( | 16:51:31 |
hexa (UTC+1) | Thanks! | 16:52:11 |
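For reference, the global option mentioned above is the nixpkgs-wide `cudaSupport` flag. A minimal config fragment illustrating it (the file path is one common location; the same attribute set can be passed as the `config` argument to `import <nixpkgs>`, as in the build command later in this log):

```nix
# e.g. ~/.config/nixpkgs/config.nix
# CUDA is toggled globally so every package in the closure is built
# with the same variant, avoiding the inconsistent closures and
# failure modes described above.
{
  allowUnfree = true;  # CUDA packages are unfree
  cudaSupport = true;  # global toggle, not per-package
}
```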
Gaétan Lepage | 🔥 PyTorch 2.10.0 (+ triton 3.6.0) 🔥
Changelog: https://github.com/pytorch/pytorch/releases/tag/v2.10.0
PR: https://github.com/NixOS/nixpkgs/pull/484881
PR tracker: https://nixpkgs-tracker.ocfox.me/?pr=484881
This one took quite a while to bring to nixpkgs, and I'm glad to have finally gotten it merged!
Some basic testing has been done (basic builds, cudaSupport builds, some gpuChecks).
However, exhaustively building and testing all the downstream dependencies isn't feasible (at least without more time and hardware).
-> Please don't hesitate to report any breakage in this channel, and feel free to ping me as well.
Thanks a lot to everyone who helped, and more generally to everyone else for your patience. | 23:32:32 |
| 25 Feb 2026 |
Hugo | Related to PyTorch but not to 2.10: it looks like my CI has been unable to build it since yesterday morning (when it was still python3.13-torch-2.9.1) 🤔 Torch 2.10.0 does not solve the issue.
I build with:
run: |
  cd nixpkgs-master
  nix-build -I nixpkgs=. \
    --arg config '{ allowUnfree = true; cudaSupport = true; openclSupport = true; rocmSupport = false; }' \
    --option allow-import-from-derivation false --max-jobs 1 \
    -A python313Packages.torch
Here is the last part of my build log (after 3h12 of building): https://pad.lassul.us/s/xWhnOmbeo2#
Any idea what could be the cause?
| 15:20:41 |
hexa (UTC+1) | log looks incomplete | 15:21:09 |
Hugo | HedgeDoc does not allow me to copy the complete log | 15:21:26 |
hexa (UTC+1) | https://bpa.st | 15:21:32 |
Hugo | https://bpa.st/5CQA | 15:22:04 |
Hugo | Edited my message with this link as well. | 15:22:31 |
Gaétan Lepage | Are you sure that you're not simply OOMing? | 19:43:36 |
Hugo | I investigated my metrics; it looks like an OOM for 2.10, but not for the latest failure of 2.9.1 🤔 | 20:28:31 |
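A quick way to tell an OOM kill apart from a genuine compile error is to scan the build log for the usual signatures. A small sketch (the sample log lines are illustrative; the gcc message shown is the typical one when cc1plus is killed by the kernel OOM killer):

```shell
# Write a fake build-log excerpt to scan (stand-in for the real log).
log=$(mktemp)
printf '%s\n' \
  '[2970/3000] Building CXX object example.cpp.o' \
  'c++: fatal error: Killed signal terminated program cc1plus' \
  'compilation terminated.' > "$log"

# Common out-of-memory signatures: the compiler being killed,
# kernel oom-killer lines, or explicit allocation failures.
if grep -qiE 'killed signal|oom-killer|out of memory|cannot allocate memory' "$log"; then
  echo "likely OOM"
else
  echo "no OOM signature found"
fi
```

Cross-checking with `journalctl -k` (or `dmesg`) for kernel `oom-killer` entries around the failure time gives a more definitive answer than the build log alone.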