| 31 Oct 2024 |
connor (burnt/out) (UTC-8) | I have no idea if it’ll work, but please do try and let me know! I’m not familiar with the Jetson zoo, or how the binary wheels are packaged.
It sounds reasonable — the wheel should have libraries built to target Jetson devices which link against CUDA libraries, and using autoPatchelfHook and including CUDA libraries in buildInputs should patch the libraries so they resolve to the ones provided by Nixpkgs…
I don’t fully understand what cuda_compat does, but my understanding is that it serves as a shim between newer CUDA libraries and an older CUDA driver on the Jetson?
At any rate, try it out and let me know! | 16:16:51 |
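[Editor's note: the approach described above (autoPatchelfHook plus CUDA libraries in `buildInputs` to repack a prebuilt wheel) can be sketched roughly as follows. This is a hypothetical illustration only: the wheel URL, hash, version, and the exact set of `cudaPackages` attributes are placeholders, not values from the conversation.]

```nix
# Hypothetical sketch: wrapping a prebuilt onnxruntime-gpu wheel (e.g. from
# the Jetson zoo) so its bundled libraries resolve against Nixpkgs-provided
# CUDA libraries. All concrete values below are placeholders.
{ lib, fetchurl, python3Packages, autoPatchelfHook, cudaPackages }:

python3Packages.buildPythonPackage {
  pname = "onnxruntime-gpu";
  version = "0.0.0";  # placeholder
  format = "wheel";

  src = fetchurl {
    url = "https://example.invalid/onnxruntime_gpu-0.0.0-cp310-none-linux_aarch64.whl";  # placeholder
    hash = lib.fakeHash;  # replace with the real hash
  };

  nativeBuildInputs = [ autoPatchelfHook ];

  # autoPatchelfHook scans the ELF files inside the wheel and patches their
  # runpaths so each needed library resolves to one found in buildInputs.
  # Which cudaPackages attributes are actually needed depends on the wheel.
  buildInputs = with cudaPackages; [
    cuda_cudart
    libcublas
    cudnn
    tensorrt
  ];
}
```

If autoPatchelfHook reports unresolved libraries at build time, the error message names the missing `.so` files, which indicates what still needs to be added to `buildInputs`.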
Frederik Semmel | Ok awesome, thanks for the quick reply! At least there is not something completely wrong with the approach. I will let you know if I can make it work 👌 Did anyone else get onnxruntime working with the tensorrt execution provider on a jetson? | 16:24:25 |
connor (burnt/out) (UTC-8) | I can’t remember if I managed to get it building on Jetson, I got distracted and started doing work on the nix interpreter | 16:26:15 |
connor (burnt/out) (UTC-8) | Also, which Jetson generation are you using? Please let it be at least Xavier :( | 16:27:48 |
Frederik Semmel | It's the Orin NX, luckily :) I am free to choose a different platform; I am still evaluating what is best for our AI-on-the-edge use case. From what you say it seems like it's not going to be impossible, so I will work my way through it. I hope I can find a good solution, and if I do I will try to contribute it to nixpkgs or jetpack-nixos | 16:31:01 |
search-sense | I am sorry, ``nix run github:SomeoneSerge/pkgs#pkgsCuda.some-pkgs-py.stable-diffusion-webui`` doesn't work anymore | 17:06:45 |