!eWOErHSaiddIbsUNsJ:nixos.org

NixOS CUDA

211 Members
CUDA packages maintenance and support in nixpkgs | https://github.com/orgs/NixOS/projects/27/ | https://nixos.org/manual/nixpkgs/unstable/#cuda



9 Jul 2024
@ornx:littledevil.club ornx: i can just merge that PR into a local nixpkgs if that's the fix 00:46:50
@ss:someonex.net SomeoneSerge (utc+3): You can try running your program with the LD_DEBUG=libs environment variable 00:47:07
@ss:someonex.net SomeoneSerge (utc+3): If it mentions libcuda.so from this cudatoolkit link farm, it's the stub driver issue, and the solution is to just not use the link farm (take individual components from https://github.com/NixOS/nixpkgs/blob/7a95a8948b9ae171337bbf2794459dbe167032ed/pkgs/development/cuda-modules/cudatoolkit/redist-wrapper.nix#L44-L58) 00:52:52
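A concrete way to run the LD_DEBUG check suggested above (a minimal sketch: /bin/true stands in for the failing binary, and ./my-cuda-app is a hypothetical placeholder for your real program):

```shell
# glibc's dynamic loader logs every library lookup to stderr when
# LD_DEBUG=libs is set; run it on any binary to see the searches:
LD_DEBUG=libs /bin/true 2>&1 | head -n 5

# For the stub-driver diagnosis, grep for where libcuda.so resolves
# (replace ./my-cuda-app with the actual program):
# LD_DEBUG=libs ./my-cuda-app 2>&1 | grep 'libcuda'
```

If the libcuda.so path printed there points into the cudatoolkit link farm rather than the driver's library path, that matches the stub-driver issue described above.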
@ghishadow:matrix.org ghishadow joined the room. 04:21:50
@hacker1024:matrix.org hacker1024: How's everyone's day going? Mine was great until my colleague asked me to package [this](https://github.com/jocover/jetson-ffmpeg/blob/master/CMakeLists.txt) 06:21:10
@hacker1024:matrix.org hacker1024: It's times like this that make me question my values to the core 06:21:39
@ss:someonex.net SomeoneSerge (utc+3):

/usr/src/jetson_multimedia_api/samples/common/classes/NvBuffer.cpp

That's a promising opening

06:24:01
@ss:someonex.net SomeoneSerge (utc+3): These are only distributed with the JetPack, right? 06:29:34
@hacker1024:matrix.org hacker1024: Yep, luckily Jetpack-NixOS has all the samples packaged 06:49:06
@hacker1024:matrix.org hacker1024: Just needs some overlay weirdness to use CUDA from Nixpkgs now 06:49:37
@hacker1024:matrix.org hacker1024: Speaking of which, is tensorrt supposed to work on aarch64? Because it's evaluating as both broken and unsupported when running the following: `nix-instantiate -I nixpkgs=channel:nixos-unstable '<nixpkgs>' --argstr localSystem aarch64-linux --arg config '{ cudaSupport = true; allowUnfree = true; }' -A cudaPackages.tensorrt` 06:50:57
@ss:someonex.net SomeoneSerge (utc+3): Not sure, tensorrt isn't receiving enough love :) 07:11:44
@ss:someonex.net SomeoneSerge (utc+3): https://github.com/NixOS/nixpkgs/issues/323124 07:12:14
@ss:someonex.net SomeoneSerge (utc+3): Jonas Chevalier, hexa (UTC+1): a question about release-lib.nix: my impression is that supportedPlatforms is the conventional way to describe a "matrix" of jobs; for aarch64-linux, I'd like to define a matrix over individual capabilities, because aarch64-linux mostly means embedded/Jetson SBCs; currently this means importing nixpkgs with different config.cudaCapabilities values... any thoughts on how to express this in a not-too-ad-hoc way? 18:07:33
@connorbaker:matrix.org connor (he/him) (UTC-7):

Kevin Mittman: is there any reason the TensorRT tarball exploded in size for the 10.2 release? It's clocking in at over 4GB, nearly twice the size it was for 10.1 (~2GB).

[connorbaker@nixos-desktop:~/cuda-redist-find-features]$ ./tensorrt/helper.sh 12.5 10.2.0.19 linux-x86_64
[582.9/4140.3 MiB DL] downloading 'https://developer.nvidia.com/downloads/compute/machine-learning/tensorrt/10.2.0/tars/TensorRT-10.2.0.19.Linux.x86_64-gnu.cuda-12.5.tar.gz'
18:56:10
@connorbaker:matrix.org connor (he/him) (UTC-7): The SBSA package only increased from 2398575314 to 2423326645 bytes (so still about 2GB) 18:57:16
@justbrowsing:matrix.org Kevin Mittman: There are two CUDA variants, so it's more like 8GB total. The static .a alone is 3GB! I asked the same question and the answer was "many new features" 19:45:41
10 Jul 2024
@zimbatm:numtide.com Jonas Chevalier
In reply to @ss:someonex.net
Jonas Chevalier hexa (UTC+1) a question about release-lib.nix: my impression is that supportedPlatforms is the conventional way to describe a "matrix" of jobs; for aarch64-linux, I'd like to define a matrix over individual capabilities because aarch64-linux mostly means embedded/jetson SBCs; currently this means importing nixpkgs with different config.cudaCapabilities values... any thoughts on how to express this in a not-too-ad-hoc way?

Patch release-lib.nix to add this logic:

nixpkgsArgs' = if builtins.isFunction nixpkgsArgs then nixpkgsArgs else (system: nixpkgsArgs);

And then replace all the nixpkgsArgs usages with (nixpkgsArgs' system)

10:03:33
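The patch described above can be sketched as follows (a hypothetical fragment, not the actual release-lib.nix diff; `nixpkgsArgs` is the existing name from release-lib.nix, and the surrounding plumbing is elided):

```nix
# Accept nixpkgsArgs either as a plain attrset (the status quo) or as a
# function of the system, so aarch64-linux can get its own
# config.cudaCapabilities without a second copy of the job matrix.
let
  nixpkgsArgs' =
    if builtins.isFunction nixpkgsArgs
    then nixpkgsArgs
    else (system: nixpkgsArgs);
in
# Every former `nixpkgsArgs` use site then becomes `nixpkgsArgs' system`,
# e.g. a caller could pass (hypothetical capability values):
#   nixpkgsArgs = system:
#     { config = { cudaSupport = true; }
#         // lib.optionalAttrs (system == "aarch64-linux")
#              { cudaCapabilities = [ "7.2" "8.7" ]; };
#     };
nixpkgsArgs'
```

Wrapping the constant attrset in `(system: nixpkgsArgs)` keeps both shapes interchangeable at the call sites, so existing users of a plain attrset are unaffected.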
11 Jul 2024
@ss:someonex.net SomeoneSerge (utc+3): openai-triton broken with cuda+python3.12 😩 00:52:55


