| 20 Nov 2025 |
ser(ial) | I have a Debian host with an NVIDIA GPU which runs Incus, and in Incus I have NixOS containers. How can I use CUDA programs in such a container? | 10:24:20 |
| plan9better joined the room. | 12:41:04 |
SomeoneSerge (back on matrix) | Hi. How do you use CUDA in a non-NixOS container with Incus? Does it use CDI? | 13:22:58 |
ser(ial) | With a Debian container I use the built-in Incus "nvidia.runtime" option, which passes the host NVIDIA and CUDA runtime libraries into the instance | 13:30:32 |
ser(ial) | but NixOS naturally does not look for these libraries in that location | 13:31:15 |
ser(ial) | does that mean I need the full libraries in the NixOS container, with a version identical to the one on the Debian host? | 13:32:26 |
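A minimal sketch of one way to reuse the injected host libraries rather than duplicating them, assuming the libraries bind-mounted by nvidia.runtime land under a Debian-style path such as /usr/lib/x86_64-linux-gnu inside the container: nixpkgs-built CUDA programs look for the driver in /run/opengl-driver/lib (via addDriverRunpath), and the userspace driver libraries have to match the host kernel module anyway, so pointing that path at the injected libraries sidesteps the version-matching question.

```nix
# Hypothetical NixOS container config: expose the driver libraries that
# Incus' nvidia.runtime bind-mounts into the instance at the path where
# nixpkgs-built CUDA programs expect them (/run/opengl-driver/lib).
# The /usr/lib/x86_64-linux-gnu source path is an assumption -- check
# where libcuda.so actually lands inside your container.
{ ... }:
{
  systemd.tmpfiles.rules = [
    "d /run/opengl-driver 0755 root root -"
    "L+ /run/opengl-driver/lib - - - - /usr/lib/x86_64-linux-gnu"
  ];
}
```

For ad-hoc testing, adding the injected directory to LD_LIBRARY_PATH achieves the same effect.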
connor (burnt/out) (UTC-8) | Gaétan Lepage: I've got to package ONNX/ONNX Runtime/ONNX TensorRT for C++; if I upstream the PR, do you think you'd have the bandwidth to look at it? I'd likely follow what I did here: https://github.com/ConnorBaker/cuda-packages/tree/8a317116a07717b13e0608f47b78bd6d75f8bb99/pkgs/development/libraries That is, the sort of cursed double-build in a single derivation which produces both the C++ binaries and a Python wheel, so the python3Packages entry essentially turns into installing a wheel. | 14:04:07 |
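A rough sketch of the double-build pattern being described, for readers following along; the python/ subdirectory, the dist output, the build-backend inputs, and the wheel filename are illustrative assumptions, not the contents of the linked tree:

```nix
{ lib, stdenv, cmake, fetchFromGitHub, python3Packages }:

let
  # One derivation, two outputs: "out" gets the C++ libraries and headers,
  # "dist" gets a Python wheel built from the same source tree.
  onnxruntime-cpp = stdenv.mkDerivation (finalAttrs: {
    pname = "onnxruntime";
    version = "0.0.0"; # placeholder
    src = fetchFromGitHub {
      owner = "microsoft";
      repo = "onnxruntime";
      rev = "v${finalAttrs.version}";
      hash = lib.fakeHash; # placeholder
    };

    outputs = [ "out" "dist" ];

    nativeBuildInputs = [
      cmake
      python3Packages.python
      python3Packages.build # provides pyproject-build
      # plus whatever build backend the wheel needs (setuptools, ...)
    ];

    # The "cursed" second build: after the CMake install, produce a wheel
    # from the (assumed) python/ subdirectory of the same checkout.
    postInstall = ''
      mkdir -p "$dist"
      pyproject-build --wheel --no-isolation --outdir "$dist" ../python
    '';
  });

  # The python3Packages entry then reduces to installing that wheel.
  onnxruntime-python = python3Packages.buildPythonPackage {
    inherit (onnxruntime-cpp) pname version;
    format = "wheel";
    # The exact wheel filename (python/ABI/platform tags) depends on the build.
    src = "${onnxruntime-cpp.dist}/onnxruntime-${onnxruntime-cpp.version}-py3-none-any.whl";
  };
in
{
  inherit onnxruntime-cpp onnxruntime-python;
}
```

The point of keeping it in one derivation is that the expensive C++ build runs once and the Python package simply reuses its artifacts.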