26 Sep 2024 |
Guilhem | it's such a joke that I find it sad it was not opened one day earlier | 17:28:20 |
Gaétan Lepage | "I propose a 200€ bounty for this PR. Please git tag the freaking commit." | 21:09:07 |
Gaétan Lepage | The ease of spinning up a release is a decreasing function of the project/company resources. | 21:09:40 |
Guilhem | same issue on a one-man project abandoned for the last year or so: https://github.com/bab2min/EigenRand/issues/56 : <48h | 21:49:56 |
28 Sep 2024 |
| shekhinah changed their profile picture. | 07:04:58 |
| kaya changed their profile picture. | 16:55:46 |
1 Oct 2024 |
| -_o joined the room. | 21:00:15 |
2 Oct 2024 |
hexa (UTC+1) | Gaétan Lepage: please take care of tensordict | 00:25:19 |
hexa (UTC+1) | [image: image.png] | 00:25:22 |
Gaétan Lepage | Sure, I will have a look right now.
I have not faced any failure on my end, weird... | 06:21:33 |
Gaétan Lepage | Is this on staging? | 06:23:26 |
Gaétan Lepage | All failures that I was able to find on hydra are timeouts or upstream dependency failures.
I was able to build tensordict on all architectures... | 07:05:50 |
hexa (UTC+1) | this is on trunk | 11:03:39 |
hexa (UTC+1) | then you probably need to increase meta.timeout | 11:04:00 |
Gaétan Lepage | Now that you mention it, I remember this package being stuck (indefinitely) during mass rebuilds.
I don't know if increasing the timeout will help. When everything works fine, it builds in ~1min...
Also, nothing has changed in the derivation for the past few months. | 11:47:12 |
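(For reference: Hydra honors two per-derivation limits, both in seconds — `meta.timeout` caps total build time and `meta.maxSilent` caps how long a build may produce no output before being killed. A hedged sketch of what raising them could look like in the tensordict derivation; the numeric values are illustrative, not a recommendation:)

```nix
# Illustrative fragment only — not the actual tensordict expression.
buildPythonPackage {
  pname = "tensordict";
  # ...
  meta = {
    timeout = 3600;    # allow up to 1 h of total build time
    maxSilent = 1800;  # tolerate 30 min without any build output
  };
}
```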
Kevin Mittman | Back from vacation | 18:23:19 |
Kevin Mittman | In reply to @ss:someonex.net ("Kevin Mittman Hi! Do you know how dcgm uses cuda and why it has to link several versions? See libdcgm_cublas_proxy${cudaMajor}.so") | 18:34:06 |
Kevin Mittman | In reply to @connorbaker:matrix.org ("Kevin Mittman: does NVIDIA happen to have JSON (or otherwise structured) versions of their dependency constraints for packages somewhere, or are the tables on the docs for each respective package the only source? I'm working on update scripts and I'd like to avoid the manual stage of 'go look on the website, find the table (it may have moved), and encode the contents as a Nix expression'") Not really... wishlist for the future. Which product / component is this? | 18:46:04 |
Kevin Mittman | SomeoneSerge (utc+3): seems like reply got stuck in a thread | 18:46:54 |
3 Oct 2024 |
connor (he/him) (UTC-7) | In reply to @justbrowsing:matrix.org ("Not really... wishlist for the future. Which product / component is this?") That particular request was born out of frustration with TensorRT.
Any idea why the support matrix for TensorRT says only cuDNN 8.9.7 is supported (https://docs.nvidia.com/deeplearning/tensorrt/support-matrix/index.html) but the 24.09 container is shipping it with cuDNN 9.4 (https://docs.nvidia.com/deeplearning/frameworks/support-matrix/index.html)? | 05:21:35 |
Kevin Mittman |
- TRT 8.x depends on cuDNN 8.x (last release was 8.9.7)
- TRT 10.x has optional support for cuDNN (not updated for 9.x)
- The DL frameworks container image is more generic
| 17:09:43 |
4 Oct 2024 |
connor (he/him) (UTC-7) | I know that when packaging TRT (any version) for Nixpkgs, autopatchelf flags a dependency on cuDNN, so we need to link against it.
Does TRT 10.x not work with cuDNN 9.x at all, or is it not an officially supported combination?
onnxruntime (not an NVIDIA product but a large use case for TRT), for example, says for CUDA 11.8 to use TRT 10.x with cuDNN 8.9.x, and with CUDA 12.x to use TRT 10.x with cuDNN 9.x. The latter combination wasn't in the support matrix, so I was surprised.
For the DL frameworks container, does that mean TRT comes without support for cuDNN since it's not an 8.9.x release, that it's not officially supported (per the TRT support matrix), or something else? | 15:22:49 |
connor (he/him) (UTC-7) | I don't have an easy way to test all the code paths different libraries could use to call into TRT, or to know which parts are accelerated with cuDNN, but I am trying to make sure the latest working version of cuDNN is supplied to TRT in Nixpkgs :/ | 15:24:20 |
connor (he/him) (UTC-7) | Jetson, for example, doesn't have a cuDNN 8.9.6 release anywhere I can find (tarball or Debian installer), but that's the version TRT has in the support matrix for the platform, so I've been using 8.9.5 (which does have a tarball for Jetson). | 15:25:44 |
5 Oct 2024 |
Gaétan Lepage | Hey guys!
I noticed that the CUDA features of tinygrad were not working on non-NixOS Linux systems. | 10:22:50 |
Gaétan Lepage | More precisely, it can't find the libnvrtc.so library. | 10:23:41 |
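(On non-NixOS distros, libnvrtc.so usually lives under the CUDA toolkit prefix rather than on the default linker path. A hedged sketch of the kind of lookup a loader might perform — the function names, the directory list, and the fallback order here are illustrative, not tinygrad's actual code:)

```python
import ctypes.util
import glob
import os

# Common CUDA toolkit library locations on non-NixOS distros (illustrative).
DEFAULT_CUDA_DIRS = [
    "/usr/local/cuda/lib64",
    "/opt/cuda/lib64",
    "/usr/lib/x86_64-linux-gnu",
]

def scan_for_nvrtc(dirs):
    """Return the first libnvrtc.so* found under the given directories, else None."""
    for d in dirs:
        hits = sorted(glob.glob(os.path.join(d, "libnvrtc.so*")))
        if hits:
            return hits[0]
    return None

def find_nvrtc(dirs=DEFAULT_CUDA_DIRS):
    """Ask the dynamic linker first (honors LD_LIBRARY_PATH), then scan known dirs."""
    return ctypes.util.find_library("nvrtc") or scan_for_nvrtc(dirs)
```

(On NixOS the same lookup fails by design, since libraries live in per-package store paths; that asymmetry is why a package can work on NixOS yet break on other distros, or vice versa.)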