
NixOS CUDA

300 Members
CUDA packages maintenance and support in nixpkgs | https://github.com/orgs/NixOS/projects/27/ | https://nixos.org/manual/nixpkgs/unstable/#cuda



24 Feb 2026
@glepage:matrix.orgGaétan Lepage *

🔥 PyTorch 2.10.0 (+ triton 3.6.0) 🔥


Changelog: https://github.com/pytorch/pytorch/releases/tag/v2.10.0
PR: https://github.com/NixOS/nixpkgs/pull/484881
PR tracker: https://nixpkgs-tracker.ocfox.me/?pr=484881


This one took quite a while to bring to nixpkgs, and I'm glad to have finally gotten it merged!

Some basic testing has been done (basic builds, cudaSupport builds, some gpuChecks).
However, exhaustively building and testing all the downstream dependencies isn't feasible (at least without more time and hardware).
-> Please, don't hesitate to report any breakage in this channel, and feel free to ping me as well.

Thanks a lot to everyone who helped, and more generally to everyone else for your patience.

23:32:32
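For anyone who wants a quick local sanity check of the new torch before reporting breakage, a minimal `shell.nix` sketch could look like this (the config flags are assumptions based on the usual nixpkgs CUDA setup, not part of the announcement):

```nix
# shell.nix — hypothetical smoke-test environment for the new torch;
# assumes a recent nixpkgs and an unfree/CUDA-enabled config.
{ pkgs ? import <nixpkgs> { config = { allowUnfree = true; cudaSupport = true; }; } }:

pkgs.mkShell {
  packages = [
    (pkgs.python313.withPackages (ps: [ ps.torch ]))
  ];
  # Inside the shell, something like
  #   python -c 'import torch; print(torch.__version__, torch.cuda.is_available())'
  # should report 2.10.0.
}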
25 Feb 2026
@hugo:okeso.euHugo

Related to PyTorch but not to 2.10, it looks like my CI cannot build it since yesterday morning (it was still python3.13-torch-2.9.1) 🤔
Torch 2.10.0 does not solve that issue.

I build with:

      run: |
        cd nixpkgs-master
        nix-build -I nixpkgs=. \
          --arg config '{ allowUnfree = true; cudaSupport = true; openclSupport = true; rocmSupport = false; }' \
          --option allow-import-from-derivation false \
          --max-jobs 1 -A python313Packages.torch

Here is the last part of my build log (after 3h12 of build).
https://pad.lassul.us/s/xWhnOmbeo2#

Any idea what could be the cause?

15:20:41
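As an aside, the config argument in that CI command can also live in a file, which keeps the YAML readable; a sketch (file name `config.nix` is a choice here, not something from the CI above):

```nix
# config.nix — the same nixpkgs config as in the CI command, as a file.
{
  allowUnfree = true;
  cudaSupport = true;
  openclSupport = true;
  rocmSupport = false;
}
```

It would then be passed as `nix-build -I nixpkgs=. --arg config 'import ./config.nix' --option allow-import-from-derivation false --max-jobs 1 -A python313Packages.torch`.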
@hexa:lossy.networkhexa (UTC+1)log looks incomplete15:21:09
@hugo:okeso.euHugoHedgeDoc does not allow me to copy the complete log15:21:26
@hexa:lossy.networkhexa (UTC+1)https://bpa.st15:21:35
@hugo:okeso.euHugohttps://bpa.st/5CQA15:22:04
@hugo:okeso.euHugo *

Related to PyTorch but not to 2.10, it looks like my CI cannot build it since yesterday morning (it was still python3.13-torch-2.9.1) 🤔
Torch 2.10.0 does not solve that issue.

I build with:

      run: |
        cd nixpkgs-master
        nix-build -I nixpkgs=. \
          --arg config '{ allowUnfree = true; cudaSupport = true; openclSupport = true; rocmSupport = false; }' \
          --option allow-import-from-derivation false \
          --max-jobs 1 -A python313Packages.torch

Here is the last log (after 3h12 of build).
https://bpa.st/5CQA

Any idea what could be the cause?

15:22:17
@hugo:okeso.euHugoEdited my message with this link as well.15:22:31
@glepage:matrix.orgGaétan LepageAre you sure that you're not simply OOMing?19:43:36
@hugo:okeso.euHugoI investigated my metrics; it looks like it's OOM-ing for 2.10, but not for the latest failure of 2.9.1 🤔20:28:31
@hugo:okeso.euHugoI started a build with fewer cores, will see where that goes (most likely from ~3h10 to ~5 hours)20:29:13
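Reducing per-build parallelism to dodge OOMs doesn't require touching the derivation; a sketch of the relevant Nix settings (the values here are illustrative, not what Hugo used):

```nix
# nix.conf fragment (illustrative values): cap concurrent builds and the
# core count each build's make/ninja invocation sees via NIX_BUILD_CORES.
max-jobs = 1
cores = 16
```

The same knobs are available ad hoc on the command line as `--max-jobs 1 --cores 16`.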
26 Feb 2026
@ctheune:matrix.flyingcircus.ioTheuni joined the room.07:05:25
@ctheune:matrix.flyingcircus.ioTheuni👋07:09:09
@ctheune:matrix.flyingcircus.ioTheuni Gaétan Lepage: if you need help testing the vllm 0.16 branch, i can do that. 07:27:08
@glepage:matrix.orgGaétan Lepage

Hi! Thanks :)
I rebased the PR yesterday: https://github.com/NixOS/nixpkgs/pull/490175

I couldn't do much as its dependency xgrammar is broken. Once I properly bump the {c,q}utlass et al. overrides, then sure, help with testing will be welcome :)

07:57:01
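Bumping a vendored-dependency override like cutlass usually means updating the pinned source in the package expression; a purely hypothetical overlay sketch (the attribute name `cutlass`, the rev, and the hash are all placeholders, not the real nixpkgs structure):

```nix
# Hypothetical overlay: bump the cutlass source pinned by the vllm package.
# Attribute names, rev, and hash are placeholders — check the actual
# expression in nixpkgs before adapting this.
final: prev: {
  vllm = prev.vllm.overrideAttrs (old: {
    cutlass = prev.fetchFromGitHub {
      owner = "NVIDIA";
      repo = "cutlass";
      rev = "vX.Y.Z";           # placeholder tag
      hash = prev.lib.fakeHash; # build once to learn the real hash
    };
  });
}
```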
@glepage:matrix.orgGaétan Lepage From my initial testing, 2.10 is significantly heavier to build.
It took me over an hour instead of ~45 min to build with cudaSupport on a 64-thread Threadripper
07:58:09
28 Feb 2026
@justbrowsing:matrix.orgKevin Mittman (jetlagged/UTC+8) changed their display name from Kevin Mittman (UTC-8) to Kevin Mittman (UTC+8).05:22:22
@glepage:matrix.orgGaétan Lepage

RE: vllm update to 0.16.0

Although vllm 0.16.0 initially depended on torch==2.10.0, this changed.
Indeed, upstream force-pushed the tag to revert this change and go back to requiring torch==2.9.1.

I learnt about this by chance while discussing it with a coworker.

10:28:07
2 Mar 2026
@justbrowsing:matrix.orgKevin Mittman (jetlagged/UTC+8) changed their display name from Kevin Mittman (UTC+8) to Kevin Mittman (jetlagged/UTC+8).01:12:50
@hexa:lossy.networkhexa (UTC+1) @SomeoneSerge (back on matrix) https://hydra.nixos.org/build/323045198 👋 15:28:08
@blazetalonshorns92646:matrix.orgMaren Horner joined the room.20:37:42
@glepage:matrix.orgGaétan Lepage

https://github.com/nixos-cuda/infra/commit/eeb5eb95d8eb0abba5fa50d14500c0d20a2e5d12

🥲

23:46:37
@lt1379:matrix.orgLun😭23:47:25
@hexa:lossy.networkhexa (UTC+1)Yeah, I'm afraid unless we GC harder this is going to be a tough sell.23:52:02
3 Mar 2026
@ss:someonex.netSomeoneSerge (back on matrix)=\16:23:53
@caniko:matrix.orgcanikoany chance to build gimp and handbrake?19:53:02
5 Mar 2026
@kaya:catnip.eekaya 𖤐 Not sure if it's been mentioned here before, but for anyone affected by flash-attn builds OOM-ing: I noticed an upstream patch that tries to counter it: https://github.com/Dao-AILab/flash-attention/pull/2079
It might be possible to apply it to the nix package 🤔
13:26:33
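Pulling that upstream patch into the nixpkgs flash-attn build could be sketched with fetchpatch, roughly like this (the attribute path, the `.patch` URL shape, and the hash are assumptions; fetchpatch wants a stable commit URL rather than a moving PR URL in practice):

```nix
# Hypothetical overlay: apply the upstream OOM-mitigation patch to the
# flash-attn Python package. Attribute path and hash are placeholders.
final: prev: {
  python3Packages = prev.python3Packages.overrideScope (pyFinal: pyPrev: {
    flash-attn = pyPrev.flash-attn.overrideAttrs (old: {
      patches = (old.patches or [ ]) ++ [
        (prev.fetchpatch {
          url = "https://github.com/Dao-AILab/flash-attention/pull/2079.patch";
          hash = prev.lib.fakeHash; # replace with the real hash after one build
        })
      ];
    });
  });
}
```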


