NixOS CUDA | 310 Members | |
| CUDA packages maintenance and support in nixpkgs | https://github.com/orgs/NixOS/projects/27/ | https://nixos.org/manual/nixpkgs/unstable/#cuda | 60 Servers |
| Sender | Message | Time |
|---|---|---|
| 27 Jun 2024 | ||
| * They don't seem to specify any constraints: https://github.com/archibate/OpenSYCL/blob/b919667ea53f99dbc55a9832f297cf0cb689034e/cmake/FindCUDA.cmake#L31 (oh, this is some fork) | 11:02:44 | |
| my issue seems to be the packaged clang version in the nixpkg opensycl package, i'll probably simply repackage it. | 11:16:02 | |
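Repackaging against a different clang can usually be sketched as an `override`, assuming the nixpkgs opensycl derivation takes its toolchain via an `llvmPackages`-style argument (the argument name and LLVM version below are assumptions; check the actual opensycl.nix for the real interface):

```nix
# Hypothetical sketch: swap out the clang/LLVM the opensycl package is built with.
# "llvmPackages" is an assumed argument name; verify it against opensycl.nix.
opensycl-clang16 = pkgs.opensycl.override {
  llvmPackages = pkgs.llvmPackages_16;  # example version
};
```

If the derivation hardcodes its clang instead of taking it as an argument, `overrideAttrs` on the build inputs would be the fallback.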
| Does anybody get `cicc died due to signal 9 (Kill signal)` | 12:19:18 | |
| when trying to build onnxruntime with CUDA support? | 12:19:24 | |
| In reply to @connorbaker:matrix.org ("Same.."): That sounds very familiar | 12:21:21 | |
| https://github.com/rapidsai/cudf/issues/8018 | 12:21:36 | |
| Seems like the solution to cicc getting killed is also to limit parallelism | 12:21:48 | |
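cicc getting OOM-killed (signal 9) typically means too many parallel compiler jobs for the available RAM. A minimal sketch of capping parallelism for a single derivation (the attribute name `onnxruntime-serial` is made up):

```nix
# Sketch: avoid cicc OOM-kills by disabling parallel building for one package.
onnxruntime-serial = onnxruntime.overrideAttrs (_: {
  enableParallelBuilding = false;  # crude but effective; builds will be slower
});
```

Alternatively, `nix.settings.cores` in the NixOS configuration caps `NIX_BUILD_CORES` for every build on the machine, which limits parallelism without touching individual derivations.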
| In reply to @ss:someonex.net: I seem to be too dumb to figure this out. I use this flake and expect both the shell and the sycl package to use the correct version, but that is clearly not the case. I expect that I can set it the way the documentation leads me to believe for the sycl package, since I use the callPackage function, but how would I do the same for the shell? Sorry if this is too incoherent or stupid to begin with. | 13:17:07 | |
| There's no `config.cudaPackages` option; `cudaPackages` is an attribute of an evaluated pkgs instance, and it can be overridden using overlays | 13:41:48 | |
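A minimal overlay sketch for the point above (the versioned attribute `cudaPackages_12_2` is an example; pick whichever set nixpkgs actually provides):

```nix
# Sketch: pin the default cudaPackages via an overlay, so every consumer
# (callPackage'd packages and devShells alike) sees the same package set.
{
  nixpkgs.overlays = [
    (final: prev: {
      cudaPackages = final.cudaPackages_12_2;  # example version
    })
  ];
}
```

Because both the shell and anything instantiated via `callPackage` draw from the same overlaid `pkgs`, this keeps the two in sync without any per-package plumbing.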
| coruscate, is there a public repo with the flake and the opensycl.nix? | 13:42:37 | |
| Looking at how setup-cuda-hook.sh propagates itself, and once again I cannot remember where the extra offset of (1, 0) is coming from... | 21:28:55 | |
| Say someone puts cuda_cudart in buildInputs (that's (0, 1)); it has the hook in propagatedNativeBuildInputs (that's (-1, 0), right?). The expectation (confirmed by a successful build) is that we arrive at (-1, 0) = nativeBuildInputs again, but the arithmetic says (0, 0) | 21:31:06 | |
| Is it added manually? Hmm, come to think of it: if a is in buildInputs, and has b in propagatedBuildInputs, which has c in propagatedBuildInputs, then we want c to end up at the same offsets, i.e. in buildInputs | 21:32:31 | |
| 28 Jun 2024 | ||
| Should work this time: https://github.com/NixOS/nixpkgs/pull/323056 Can I bump a nixpkgs-review? xD | 01:43:56 | |
| In reply to @ss:someonex.net: Omg, nevermind... I checked that magma still builds after the first commit, then did something in the second and now it doesn't | 01:49:25 | |
| In reply to @matthewcroughan:defenestrate.it: I know that it's broken... actually it would be good if someone upgraded it to the current version, TensorRT-10.1.0.27 | 11:00:16 | |
| Shoot, I think propagatedBuildOutputs is broken with __structuredAttrs | 11:08:29 | |
| The hook loops over $propagatedBuildOutputs, but __structuredAttrs makes it into an array, so that expression resolves to the value of the first element only 🤡 | 11:09:03 | |
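The pitfall described above can be reproduced in plain bash: an unquoted scalar expansion of an array variable yields only its first element, which is exactly what a hook written for the space-separated-string world does under `__structuredAttrs`.

```bash
# Demonstrates the __structuredAttrs pitfall: with structured attrs,
# propagatedBuildOutputs is a bash ARRAY, and a scalar expansion of an
# array variable only yields its first element.
propagatedBuildOutputs=(out lib dev)

# Buggy pattern (what a pre-structuredAttrs hook effectively does):
echo "scalar expansion: $propagatedBuildOutputs"   # sees only "out"

# Correct pattern: iterate over every element of the array.
for o in "${propagatedBuildOutputs[@]}"; do
  echo "output: $o"
done
```

A hook that must work in both modes has to branch on whether the variable is an array (e.g. via `declare -p`) or use nixpkgs' concatenation helpers.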
| In reply to @search-sense:matrix.org: Would you like to just take over tensorrt in Nixpkgs? | 11:28:58 | |
| In reply to @ss:someonex.net: Yay | 12:07:56 | |
| [image: clipboard.png] | 12:08:03 | |
| In reply to @ss:someonex.net: I wouldn't wish that on my worst enemy | 13:00:27 | |
| Hey! I just started using NixOS and I love it, but have a MAJOR blocker: I'm maintaining a FOSS deep learning package and can't get CUDA to work :( I would really love to continue on this journey and eventually contribute to this community, but right now it feels like I just shot myself in the foot badly, as I've spent the last days exclusively configuring NixOS only to reach a point which seems insurmountable to me. The issue seems to be that PyTorch doesn't find the CUDA driver. The thing is that in order to work with my collaborators I need to work in a non-NixOS way; in my case I would like to use pixi, which is very much like conda/micromamba, just better. Therefore, I'm trying to get things working in an FHS shell. Does one of you have an idea? Am I doing anything obviously wrong? From my configuration.nix:
{ pkgs, unstable }: let
'';
pixi-fhs.nix:
export LD_LIBRARY_PATH=${cudatoolkit}/lib:${cudatoolkit}/lib64:${cudatoolkit}/lib64/stubs''${LD_LIBRARY_PATH:+:}$LD_LIBRARY_PATH
export CPLUS_INCLUDE_PATH="${cudatoolkit}/include''${CPLUS_INCLUDE_PATH:+:$CPLUS_INCLUDE_PATH}"
Pixi completion -- not working yet, due to missing
| 13:00:34 | |
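For the FHS-shell question above, a minimal sketch, assuming `pkgs.buildFHSEnv` and the standard NixOS driver path `/run/opengl-driver/lib` (whether `pixi` is the correct package attribute is an assumption). Note that prepending the toolkit's `lib64/stubs` to `LD_LIBRARY_PATH` can shadow the real driver's `libcuda.so.1`, which matches the "PyTorch doesn't find the CUDA driver" symptom:

```nix
# Sketch: FHS shell that exposes the NixOS CUDA *driver* (not toolkit stubs).
{ pkgs }:
pkgs.buildFHSEnv {
  name = "pixi-fhs";
  targetPkgs = p: [ p.pixi ];  # assumes pixi is packaged under this attribute
  profile = ''
    # libcuda.so.1 ships with the driver on NixOS, not with the toolkit;
    # putting the stubs directory first would hide the real library.
    export LD_LIBRARY_PATH=/run/opengl-driver/lib''${LD_LIBRARY_PATH:+:}$LD_LIBRARY_PATH
  '';
  runScript = "bash";
}
```

Inside that shell, pixi/conda-style binaries can dlopen the driver library the same way they would on a conventional distro.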