!eWOErHSaiddIbsUNsJ:nixos.org

NixOS CUDA

289 Members
CUDA packages maintenance and support in nixpkgs | https://github.com/orgs/NixOS/projects/27/ | https://nixos.org/manual/nixpkgs/unstable/#cuda



Sender | Message | Time
2 Oct 2024
Kevin Mittman (UTC-8) (@justbrowsing:matrix.org): Back from vacation [18:23:19]
Kevin Mittman (UTC-8) (@justbrowsing:matrix.org): Redacted or Malformed Event [18:32:05]
Kevin Mittman (UTC-8) (@justbrowsing:matrix.org):
In reply to @ss:someonex.net:
  Kevin Mittman Hi! Do you know how dcgm uses cuda and why it has to link several versions?
  See libdcgm_cublas_proxy${cudaMajor}.so
[18:34:06]
Kevin Mittman (UTC-8) (@justbrowsing:matrix.org):
In reply to @connorbaker:matrix.org:
  Kevin Mittman: does NVIDIA happen to have JSON (or otherwise structured) versions of their dependency constraints for packages somewhere, or are the tables on the docs for each respective package the only source? I'm working on update scripts and I'd like to avoid the manual stage of "go look on the website, find the table (it may have moved), and encode the contents as a Nix expression"
Not really ... wishlist for future. Which product / component is this?
[18:46:04]
Kevin Mittman (UTC-8) (@justbrowsing:matrix.org): SomeoneSerge (UTC+3): seems like the reply got stuck in a thread [18:46:54]
3 Oct 2024
connor (burnt/out) (UTC-8) (@connorbaker:matrix.org):
In reply to @justbrowsing:matrix.org:
  Not really ... wishlist for future. Which product / component is this?
That particular request was born out of frustration with TensorRT.
Any idea why the support matrix for TensorRT says only cuDNN 8.9.7 is supported (https://docs.nvidia.com/deeplearning/tensorrt/support-matrix/index.html), but the 24.09 container is shipping it with cuDNN 9.4 (https://docs.nvidia.com/deeplearning/frameworks/support-matrix/index.html)?
[05:21:35]
Kevin Mittman (UTC-8) (@justbrowsing:matrix.org):
  • TRT 8.x depends on cuDNN 8.x (last release was 8.9.7)
  • TRT 10.x has optional support for cuDNN (not updated for 9.x)
  • The DL frameworks container image is more generic
[17:09:43]
4 Oct 2024
connor (burnt/out) (UTC-8) (@connorbaker:matrix.org): I know that when packaging TRT (any version) for Nixpkgs, autopatchelf flags a dependency on cuDNN, so we need to link against it.
Does TRT 10.x not work with cuDNN 9.x at all, or is it just not an officially supported combination?
onnxruntime (not an NVIDIA product, but a large use case for TRT), for example, says for CUDA 11.8 to use TRT 10.x with cuDNN 8.9.x, and for CUDA 12.x to use TRT 10.x with cuDNN 9.x. The latter combination wasn't in the support matrix, so I was surprised.
For the DL frameworks container, does that mean TRT comes without support for cuDNN (since it's not an 8.9.x release), that the combination isn't officially supported (per the TRT support matrix), or something else?
[15:22:49]
connor (burnt/out) (UTC-8) (@connorbaker:matrix.org): I don't have an easy way to test all the code paths different libraries could use to call into TRT, or to know which parts are accelerated with cuDNN, but I am trying to make sure the latest working version of cuDNN is supplied to TRT in Nixpkgs :/ [15:24:20]
connor (burnt/out) (UTC-8) (@connorbaker:matrix.org): Jetson, for example, doesn't have a cuDNN 8.9.6 release anywhere I can find (tarball or Debian installer), but that's the version the TRT support matrix lists for the platform, so I've been using 8.9.5 (which does have a tarball for Jetson). [15:25:44]
5 Oct 2024
Gaétan Lepage (@glepage:matrix.org): Hey guys!
I noticed that the CUDA features of tinygrad were not working on non-NixOS Linux systems.
[10:22:50]
Gaétan Lepage (@glepage:matrix.org): More precisely, it can't find the libnvrtc.so lib. [10:23:41]
Gaétan Lepage (@glepage:matrix.org): Do I need to run it using nixGL? [10:23:49]
SomeoneSerge (back on matrix) (@ss:someonex.net):
In reply to @glepage:matrix.org:
  Do I need to run it using nixGL?
No, just add ${getLib cuda_nvrtc}/lib to the search path
[11:01:19]
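SomeoneSerge's suggestion could be sketched in Nix roughly as follows. This is a hypothetical overlay fragment, not tinygrad's actual packaging: the attribute names follow nixpkgs conventions (`cudaPackages.cuda_nvrtc`, `lib.getLib`), and the wrapped entry point is an assumption.

```nix
# Hypothetical sketch: prepend the nvrtc library directory to the dynamic
# loader's search path so dlopen("libnvrtc.so") succeeds on non-NixOS hosts.
final: prev: {
  tinygrad-with-nvrtc = prev.tinygrad.overrideAttrs (old: {
    nativeBuildInputs = (old.nativeBuildInputs or [ ]) ++ [ prev.makeWrapper ];
    postFixup = (old.postFixup or "") + ''
      # Assumed entry point; adjust to whatever executable the package ships.
      wrapProgram $out/bin/tinygrad \
        --prefix LD_LIBRARY_PATH : ${prev.lib.getLib prev.cudaPackages.cuda_nvrtc}/lib
    '';
  });
}
```

Wrapping via `LD_LIBRARY_PATH` is the least invasive approach; patching the dlopen call sites to absolute store paths (as discussed below for tinygrad in nixpkgs) avoids leaking the search path into child processes.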
SomeoneSerge (back on matrix) (@ss:someonex.net): I'm en route, will reply to stuff on Tuesday [11:02:56]
Gaétan Lepage (@glepage:matrix.org): Ok, I will try that, thanks! [11:31:16]
7 Oct 2024
@ironbound:hackerspace.pl left the room. [13:26:32]
Gaétan Lepage (@glepage:matrix.org):
Hi,
I think that tinygrad is missing some libraries, because I can get it to crash at runtime with:

Error processing prompt: Nvrtc Error 6, NVRTC_ERROR_COMPILATION
<null>(3): catastrophic error: cannot open source file "cuda_fp16.h"
  #include <cuda_fp16.h>
[20:24:15]
Gaétan Lepage (@glepage:matrix.org): Currently, we already patch the paths to libnvrtc.so and libcuda.so, but maybe we should make the headers available too. [20:35:11]
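The dlopen-path patching mentioned here is commonly done in nixpkgs with `substituteInPlace`. A hedged sketch of the pattern — the Python file being patched is illustrative, not tinygrad's real layout; note that libcuda.so must come from the host driver, hence `addDriverRunpath.driverLink` (a nixpkgs impure-driver symlink) rather than a store path:

```nix
# Hypothetical sketch of rewriting hard-coded dlopen targets to absolute
# paths; the patched file name is illustrative.
postPatch = ''
  substituteInPlace tinygrad/runtime/support/cuda.py \
    --replace-fail "libnvrtc.so" \
      "${lib.getLib cudaPackages.cuda_nvrtc}/lib/libnvrtc.so" \
    --replace-fail "libcuda.so" \
      "${addDriverRunpath.driverLink}/lib/libcuda.so"
'';
```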
aidalgol (@aidalgol:matrix.org): What is it doing that a missing header is a runtime error? [20:44:05]
Gaétan Lepage (@glepage:matrix.org): I think that tinygrad is compiling CUDA kernels at runtime. [20:46:22]
Gaétan Lepage (@glepage:matrix.org): That's why this missing header causes a crash when using the library.
tinygrad is entirely written in Python and is thus itself not compiled at all.
[20:46:55]
Gaétan Lepage (@glepage:matrix.org): This is for sure quite unusual, and that is why I am not sure how to make this header available "at runtime"... [20:48:31]
aidalgol (@aidalgol:matrix.org): I figured it must be something like that. I think any library derivation should be providing the headers in the derivation's dev output. Given that the error message shows an #include line, with the system-header brackets, it seems we need to pass a path to the tinygrad build somehow. [21:14:39]
aidalgol (@aidalgol:matrix.org): Does whatever NVIDIA compiler it's using have an equivalent to -isystem for gcc? [21:16:18]
Gaétan Lepage (@glepage:matrix.org): Yes, you are right. In the meantime, I found out that cuda_fp16.h is provided by cudaPackages.cuda_cudart.dev [21:18:16]
Gaétan Lepage (@glepage:matrix.org): The issue is that the way they invoke the compiler is a bit obscure: https://github.com/search?q=repo%3Atinygrad%2Ftinygrad%20nvrtcGetCUBIN&type=code [21:19:30]
Gaétan Lepage (@glepage:matrix.org): I think that the closest examples within nixpkgs are cupy and numba.
But both of them handle this a bit differently.
[21:20:06]


