!eWOErHSaiddIbsUNsJ:nixos.org

NixOS CUDA

292 Members
CUDA packages maintenance and support in nixpkgs | https://github.com/orgs/NixOS/projects/27/ | https://nixos.org/manual/nixpkgs/unstable/#cuda

57 Servers



19 Oct 2024
@hexa:lossy.network hexa: which defaults to config.hardware.enableAllFirmware 12:57:09
@hexa:lossy.network hexa: which defaults to false 12:57:19
@hexa:lossy.network hexa: 🤡 12:57:20
@hacker1024:matrix.org hacker1024: Rip 12:57:24
@hexa:lossy.network hexa: thank you nixos-generate-config 12:57:34
@hacker1024:matrix.org hacker1024: No guarantee that it'll fix it, but it can't hurt 12:57:42
@hacker1024:matrix.org hacker1024: They have long lists of errata 12:57:52
@hexa:lossy.network hexa: yeah, worth a try 12:58:00
@hexa:lossy.network hexa: can always roll back 12:58:03
@hexa:lossy.network hexa: thx 12:58:58
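[Editor's note: for readers following along, the fix discussed above boils down to a couple of NixOS options. A minimal sketch; whether the microcode option is `intel` or `amd` comes from your own hardware-configuration.nix:]

```nix
# Sketch of the options under discussion (hardware-configuration.nix).
# nixos-generate-config emits the updateMicrocode line for your CPU;
# hardware.enableAllFirmware defaults to false, so firmware and
# microcode updates stay off unless explicitly enabled.
{
  hardware.enableAllFirmware = true;
  hardware.cpu.intel.updateMicrocode = true; # or hardware.cpu.amd.updateMicrocode
}
```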
@connorbaker:matrix.org connor (he/him): I keep forgetting I added myself as a maintainer to glibc until I get emails for reviews lmao 19:09:59
@connorbaker:matrix.org connor (he/him): SomeoneSerge (utc+3), Gaétan Lepage: thoughts on having backendStdenv automatically propagate autoAddDriverRunpath and autoPatchelfHook? I feel like forgetting to add the former is a footgun people keep firing, and the latter is a great check to make sure all your dependencies are either present or explicitly ignored. 19:22:47
@glepage:matrix.org Gaétan Lepage: I am not sure I am qualified to answer properly. From my point of view, these kinds of automations do help and avoid sneaky mistakes. 19:24:29
@hexa:lossy.network hexa: hacker1024: I think your recommendation was spot on 21:27:27
@hexa:lossy.network hexa: at 22% I see the first [pt_main_thread] instances 21:27:36
@hexa:lossy.network hexa: and they don't seem to crash with microcode updates applied 21:27:50
@hexa:lossy.network hexa: wow, I hope that makes python-updates much smoother in the future 21:28:06
@ss:someonex.net SomeoneSerge (back on matrix):

    a footgun people keep firing

True

    autoAddDriverRunpath

Yes and no. Yes, because that would definitely make one-off and our own contributions easier. No, because once we start propagating it we lose the knowledge of which packages actually need to be patched. It still seems to me that we don't have to patch most packages, because they call cudart and cudart is patchelfed. And maybe yes, because I'm unsure what happens with libcudart_static.

    autoPatchelfHook

I'd be rather strongly opposed to this one. Autopatchelf is a huge hammer, coarse and imprecise. It can actually erase correct runpaths from an originally correct binary. Let's reserve it for non-

Another important thing to consider is (here we go again) whether we want to keep both backendStdenv and the hook, and which of these things should be propagating what.

21:29:49
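[Editor's note: for context, the status quo the proposal targets looks roughly like this. A sketch with a hypothetical package name; the hook names are the ones discussed above, and a real package's arguments will differ:]

```nix
# Sketch: the manual, per-package opt-in that propagation would remove.
# "example-cuda-app" is hypothetical.
{ cudaPackages, autoAddDriverRunpath, autoPatchelfHook }:

cudaPackages.backendStdenv.mkDerivation {
  pname = "example-cuda-app";
  version = "0.1";
  src = ./.;

  nativeBuildInputs = [
    cudaPackages.cuda_nvcc
    autoAddDriverRunpath # forgetting this is the "footgun": libcuda.so is not found at runtime
    autoPatchelfHook     # checks every needed library is present or explicitly ignored
  ];

  buildInputs = [ cudaPackages.cuda_cudart ];
}
```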
@ss:someonex.net SomeoneSerge (back on matrix): Even right now there's something cursed going on that e.g. triggers a "rebuild" of libcublas when you override nvcc, and I think that happens because of the propagated hook. That is, at the least, surprising behaviour 21:32:25
@ss:someonex.net SomeoneSerge (back on matrix): Another "no" for autoAddDriverRunpath is that it's also just not enough anyway, because of Python and all sorts of other projects where we're forced to use wrappers instead 21:38:08
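[Editor's note: the wrapper approach mentioned here, sketched for a hypothetical entry point. Since there is no ELF binary to patch, the driver library directory (the standard /run/opengl-driver/lib location on NixOS) is injected through the environment instead:]

```nix
# Sketch: wrapProgram (from makeWrapper) instead of a runpath patch.
# "my-app" is a hypothetical entry point; fragment of a package definition.
{
  nativeBuildInputs = [ makeWrapper ];
  postFixup = ''
    wrapProgram $out/bin/my-app \
      --prefix LD_LIBRARY_PATH : /run/opengl-driver/lib
  '';
}
```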
@ss:someonex.net SomeoneSerge (back on matrix): I think I'd vote pro propagation if we could say with some certainty that that is the only way to guarantee correctness for users of libcudart_static and of cmake's CUDA::cuda_driver (just because supporting that scope sounds doable) 21:42:47
@ss:someonex.net SomeoneSerge (back on matrix): hexa (UTC+1): I've been playing with (s)ccache lately and I'm almost amazed we're not using it more widely when building derivations for tests rather than for direct consumption 21:45:32
@ss:someonex.net SomeoneSerge (back on matrix):

    not using it more widely

Which I'm guessing based on the fact that the infrastructure for this is rather lacking

21:46:18
@hexa:lossy.network hexa: where would results be stored? 21:46:47
@hexa:lossy.network hexa: caching of intermediate build steps would also be super helpful 😭 21:48:49
@ss:someonex.net SomeoneSerge (back on matrix): Same as usual: derivation outputs would be stored in the Nix store, and they'd never be confused with the "pure" ones because they'd have different hashes. The (s)ccache directory would have to be set up on each builder 21:48:52
@ss:someonex.net SomeoneSerge (back on matrix):
In reply to @hexa:lossy.network: caching of intermediate build steps would also be super helpful 😭
Yes, but that's a much bigger refactor
21:49:04
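[Editor's note: a sketch of what the (s)ccache setup discussed above can look like with the ccacheStdenv that nixpkgs already ships. The package name is hypothetical, and as noted, the cache directory has to exist on each builder and be allowed through the build sandbox:]

```nix
# Sketch: rebuild a package against the caching stdenv for fast test
# iteration. The outputs get their own store hashes, so they can't be
# confused with the "pure" builds. "somePackage" is hypothetical.
{ pkgs }:
pkgs.somePackage.override {
  stdenv = pkgs.ccacheStdenv;
}
```

On the builder side this pairs with something like `nix.settings.extra-sandbox-paths = [ "/var/cache/ccache" ]` and a CCACHE_DIR pointing at that directory, since derivations cannot otherwise write outside the store.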
