18 Mar 2025 |
stick | looking at the tensorrt versions vs the ones on PyPI - only these two are affected:
8.6.1.6 vs 8.6.1
10.3.0.26 vs 10.3.0 | 20:03:39 |
stick | because 8.5.3.1 uses the same version on PyPI (including the .1) and older versions are not present on PyPI at all | 20:04:37 |
Michal Koutenský | guess i can submit a PR that conditionally changes the filename for those two | 20:06:15 |
Michal Koutenský | thanks for your help! | 20:06:40 |
stick | that would work, ping me in the PR, thanks | 20:07:02 |
stick | you would just override the version here:
https://github.com/NixOS/nixpkgs/blob/59b1aef59071cae6e87859dc65de973d2cc595c0/pkgs/development/python-modules/tensorrt/default.nix#L16 | 20:07:24 |
Michal Koutenský | not in the unpack phase? | 20:07:53 |
stick | i guess the question is what version is being reported by the wheel | 20:09:08 |
stick | try calling
import tensorrt
print(tensorrt.__version__)
| 20:10:01 |
stick | or
import tensorrt
from importlib.metadata import version
version('tensorrt')
| 20:11:00 |
stick | if you are feeling adventurous, you might as well update pkgs/development/cuda-modules/tensorrt/releases.nix to contain the latest 10.9.x.y release (both for cuda 11.x and 12.x) | 20:12:02 |
Michal Koutenský | both report '10.3.0' for me | 20:13:44 |
Michal Koutenský | sure, i can do that tomorrow | 20:14:26 |
stick | in that case it makes sense to update the version on line 16, not the unpack phase | 20:15:01 |
stick | so the nix package has the same version as the one reported by the wheel | 20:15:18 |
Michal Koutenský | yeah that makes sense | 20:15:26 |
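The approach agreed on above, as a minimal sketch: keep fetching the full NVIDIA TensorRT release, but expose the shorter version string that the PyPI wheel actually reports, so the Nix package version matches tensorrt.__version__. The names below (pypiVersionFor, wheelVersion) are illustrative only, not the actual contents of python-modules/tensorrt/default.nix:

let
  # full NVIDIA SDK version -> version reported by the wheel on PyPI;
  # per the discussion above, only these two releases differ
  pypiVersionFor = {
    "8.6.1.6" = "8.6.1";
    "10.3.0.26" = "10.3.0";
  };
  wheelVersion = fullVersion: pypiVersionFor.${fullVersion} or fullVersion;
in
  wheelVersion "10.3.0.26"  # evaluates to "10.3.0"

Evaluating the expression (e.g. with nix-instantiate --eval) yields "10.3.0", matching what the wheel reports.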
stick | SomeoneSerge (UTC+U[-12,12]): I finally managed to build onnxruntime (afaik the only blocker) with CUDA 12.8 - see https://github.com/NixOS/nixpkgs/pull/390885
can you run the CUDA version bump test suite and let me know if we can merge the PR?
| 22:20:53 |
stick | I am building magma/torch/vllm as we speak - but IIRC this already went OK when i tried it a few weeks ago | 22:21:59 |
stick | the onnxruntime fix was to turn off LTO - it went bonkers (into an infinite loop) when trying to do the final link - I looked into the gentoo ebuilds and they also turn off LTO when linking onnxruntime with CUDA | 22:24:01 |
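For reference, a rough sketch of what "turn off LTO" could look like as an overlay; the onnxruntime_ENABLE_LTO CMake option is an assumption based on onnxruntime's build system, and the actual fix is whatever landed in the PR linked above:

final: prev: {
  onnxruntime = prev.onnxruntime.overrideAttrs (old: {
    # assumed flag name: disable LTO, which sent the final CUDA link
    # into an infinite loop (gentoo's ebuilds do the same)
    cmakeFlags = (old.cmakeFlags or [ ]) ++ [ "-Donnxruntime_ENABLE_LTO=OFF" ];
  });
}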
SomeoneSerge (UTC+U[-12,12]) | stick: just a heads up: I've been using the university workstation for nixpkgs-review so far, but I'm no longer employed by the uni and am migrating between infrastructures; going to take time 🤷 | 22:24:51 |
stick | so the version bump test is only running nixpkgs-review on the PR? | 22:25:57 |
SomeoneSerge (UTC+U[-12,12]) | There's nixpkgs-review with cudaSupport=true, there's passthru gpuChecks, and there are samuela's and Connor's out-of-tree test suites | 22:27:50 |
SomeoneSerge (UTC+U[-12,12]) | Nixpkgs-review is a bit of cargo culting but it gives an idea of the size of the fallout | 22:29:14 |
stick | yeah, i use it often locally - but there are many not-so-important failing packages also on master unfortunately | 22:30:00 |
SomeoneSerge (UTC+U[-12,12]) | Yes, it's definitely too many compute hours spent on just getting a statistic (# of failures) | 22:38:34 |
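For context on the "nixpkgs-review with cudaSupport=true" step mentioned above, the nixpkgs configuration such a run needs is roughly the following; passing it through nixpkgs-review's --extra-nixpkgs-config option is an assumption about the exact invocation, not something spelled out in the chat:

# e.g. nixpkgs-review pr 390885 --extra-nixpkgs-config '{ allowUnfree = true; cudaSupport = true; }'
{
  allowUnfree = true;  # CUDA and TensorRT are unfree packages
  cudaSupport = true;  # build the CUDA variants of the rebuilt packages
}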