| 6 Feb 2025 |
ruro | * Regarding the remaining 21 eval errors:
-
13x cuda-samples depends on freeimage, which is technically insecure. I am not 100% sure whether we should just filter all of the cuda-samples packages using the new filterPackagePredicates mechanism or whether it might be better to do
freeimage.overrideAttrs { meta.knownVulnerabilities = [ ]; }
specifically for cuda-samples (see the sketch after this message). cuda-samples isn't really "production-facing" anyway, so users should only really care that the samples compile. The security risk of distributing sample code that technically has some vulnerabilities should be minimal.
-
colmap also depends on freeimage; this issue should probably be raised upstream
-
boxx and bpycv haven't been updated upstream in the last 11 months, and they don't seem to support any of the Python versions currently supported in nixpkgs. So we should probably check in with the nixpkgs maintainer and remove these packages if they aren't required by something important.
-
pixinsight is (and always was) unfree, but it is explicitly listed in release-cuda.nix for some reason. Should it be removed?
-
tts fails because it depends on a -bin version of pytorch for some reason, which is "unfree" (bsd3 issl unfreeRedistributable). Is it possible to make it depend on a non-binary version of pytorch, or should it be removed from release-cuda.nix?
-
mxnet is "actually" broken since #173463
-
truecrack-cuda is "actually" broken since #167250
-
pymc depends on pytensor, which is "actually" broken since #373239
| 06:57:53 |
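A minimal sketch of the cuda-samples-specific override floated above, assuming cuda-samples receives freeimage via callPackage (so that `.override` works). Note that overrideAttrs with a bare attrset would replace `meta` wholesale, so the existing meta is merged explicitly here:

```nix
# Hypothetical override: accept freeimage's vulnerabilities only for
# cuda-samples, leaving the global insecure check intact elsewhere.
cuda-samples.override {
  freeimage = freeimage.overrideAttrs (old: {
    meta = old.meta // { knownVulnerabilities = [ ]; };
  });
}
```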
SomeoneSerge (back on matrix) | Thanks for the summary!
tts fails because it depends on a -bin version of pytorch for some reason, which is "unfree" (bsd3 issl unfreeRedistributable). Is it possible to make it depend on a non-binary version of pytorch, or should it be removed from release-cuda.nix?
Definitely shouldn't be removed; tts is a package we want maintained, and when it's broken we want to see that it's broken. It was probably made to use torch-bin at some point when the source build was broken? If we can move it to torch, we probably should.
colmap also depends on freeimage; this issue should probably be raised upstream
Indeed
13x cuda-samples depends on freeimage, which is technically insecure. I am not 100% sure whether we should just filter all of the cuda-samples packages using ...
For the Hydra job we might as well allow the insecure freeimage? It's OK to test and cache it; we just don't want people to copy the allowInsecurePredicate configuration
| 09:30:54 |
SomeoneSerge (back on matrix) | Btw, at some point this list was used to build with allowUnfree = true instead of the more conservative allowUnfreePredicate we currently use | 09:32:13 |
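For reference, a sketch of the two config knobs under discussion; the predicate bodies are illustrative examples, not the actual release-cuda.nix settings:

```nix
# Hypothetical nixpkgs invocation showing both predicates.
import <nixpkgs> {
  config = {
    # What is suggested above for the Hydra job: permit the insecure
    # freeimage so cuda-samples can still be evaluated, built, and cached.
    allowInsecurePredicate = pkg: (pkg.pname or "") == "freeimage";
    # The conservative alternative to a blanket `allowUnfree = true`:
    # only let specific unfree packages through.
    allowUnfreePredicate = pkg: builtins.elem (pkg.pname or "") [
      "cudatoolkit"
      "cudnn"
    ];
  };
}
```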
ruro | Alternatively/additionally, we might want to mark torch-bin with the appropriate CUDA-specific license so that it passes the allowUnfreePredicate in release-cuda (assuming that the unfreeRedistributable part of torch-bin does indeed refer to the vendored(?) CUDA). | 13:07:01 |
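A hypothetical sketch of what ruro proposes, assuming the unfreeRedistributable component really is the vendored CUDA and that `lib.licenses.nvidiaCudaRedist` is the right identifier for it:

```nix
# Hypothetical re-tagging of torch-bin's license so that it passes
# release-cuda's allowUnfreePredicate.
torch-bin.overrideAttrs (old: {
  meta = old.meta // {
    license = with lib.licenses; [ bsd3 issl nvidiaCudaRedist ];
  };
})
```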
ruro | I am not sure if I like the idea of adding freeimage to allowInsecurePredicate "globally" in release-cuda, as the eval failure is a useful indicator for when some package ends up depending on it. I was thinking of allowing freeimage specifically for cuda-samples. Also, it seems that cuda-samples is not present for CUDA 12.4 for some reason. I wonder why that is. | 13:18:26 |
| stick joined the room. | 14:20:42 |
stick | Hi team!
I recently managed to update vllm to the latest version - https://github.com/NixOS/nixpkgs/pull/379165
I think we should add vllm to release-cuda because it takes a long time to compile, and it would be great if the nix-community cache were populated with the prebuilt binaries.
What do you think?
I created a PR with the change here: https://github.com/NixOS/nixpkgs/pull/379575 | 14:21:45 |
ruro | * I have bad news, lol
sha=a1e849ff441fa1315afa27e1fd18c791f61de06b
# Build cuda-samples from every cudaPackages_* set at this nixpkgs revision,
# logging stdout, stderr, and the exit code per CUDA version.
for cuda_ver in 11_0 11_1 11_2 11_3 11_4 11_5 11_6 11_7 11_8 12_0 12_1 12_2 12_3; do
  echo $cuda_ver;
  NIXPKGS_ALLOW_UNFREE=1 NIXPKGS_ALLOW_INSECURE=1 nix build \
    --no-link --print-out-paths --impure \
    "github:NixOS/nixpkgs/${sha}#cudaPackages_${cuda_ver}.cuda-samples" \
    >${cuda_ver}.stdout 2>${cuda_ver}.stderr
  echo $? > ${cuda_ver}.exit
done
All cudaPackages*.cuda-samples builds are currently failing for various reasons:
error: expected initializer before '__s128' in include/linux/types.h:12:27 for CUDA versions 11.0 - 11.3
cannot find -lcudadevrt: No such file or directory (and likewise for -lcudart_static) for CUDA versions 11.4 - 12.3
| 14:48:11 |
connor (burnt/out) (UTC-8) | I think the first one is related to the compiler version, and the second one may be the static libraries for cuda_cudart not being in one of the default installed outputs
Although as a caveat I haven’t looked at Nixpkgs in a bit | 15:16:59 |
ruro | Yeah, it seems that a bunch of libraries are actually missing, not just the cudart_static. I am currently investigating. | 15:19:01 |
connor (burnt/out) (UTC-8) | I know I did this out of tree, because it's a pain in the ass to explicitly include the static libraries for cuda_cudart for CMake (for some unknowable reason they're required when doing compiler identification): https://github.com/ConnorBaker/cuda-packages/blob/dd0266aece12e5177e3ce32d62b6665c33847837/modules/redists/cuda/overrides/common/cuda_cudart.nix#L11
But generally, to reduce the build-time closure, static libraries aren't pulled in by default | 15:19:06 |
connor (burnt/out) (UTC-8) | Might have to check the CMake for the CUDA samples — it’s possible they’re doing only static builds or preferentially linking against static libraries | 15:19:58 |
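A hedged sketch of the fix connor hints at: the static libraries live in an output that isn't among the default installed outputs, so a consumer has to pull it in explicitly (assuming the redist derivation exposes a `static` output, as the nixpkgs cudaPackages redistributables do):

```nix
# Hypothetical fragment of a consuming derivation: the default cuda_cudart
# outputs provide the shared runtime; the static archives must be added
# separately via the `static` output.
buildInputs = [
  cudaPackages.cuda_cudart
  cudaPackages.cuda_cudart.static
];
```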
| djacu joined the room. | 15:49:57 |
ruro | This might be a stupid question, but when the nixpkgs manual says
All new projects should use the CUDA redistributables available in cudaPackages in place of cudaPackages.cudatoolkit
does it mean that individual derivations from cudaPackages.* should be manually added to buildInputs/nativeBuildInputs? For example, would this mean that I should just manually add cuda_nvcc to nativeBuildInputs?
What if the upstream package expects a single CUDA_PATH containing all the CUDA dependencies? I think I saw some people using buildEnv to collect all of the required binaries/libraries under a single path, but I am not sure this is the most elegant way to do this (also, it's not immediately clear how a single CUDA_PATH would interact with cross compilation).
| 16:54:13 |
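A sketch of the buildEnv/symlinkJoin pattern ruro describes, for upstream build systems that insist on a single merged CUDA tree. The component set is illustrative, and (per the reply below) symlink-merged trees are something the maintainers hope to avoid:

```nix
# Hypothetical merged CUDA tree exposed to the build as CUDA_PATH.
{ stdenv, symlinkJoin, cudaPackages }:
let
  cudaJoined = symlinkJoin {
    name = "cuda-merged";
    paths = with cudaPackages; [ cuda_nvcc cuda_cudart libcublas ];
  };
in
stdenv.mkDerivation {
  # ... the rest of the derivation ...
  env.CUDA_PATH = "${cudaJoined}";
}
```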
SomeoneSerge (back on matrix) | Honestly, I've no idea what license, if any, applies to torch-bin | 17:24:28 |
SomeoneSerge (back on matrix) | Yes, or we could just agree that testing for insecure dependencies is out of scope for Hydra | 17:26:09 |
SomeoneSerge (back on matrix) | I expect static and devrt to be in .dev's propagatedBuildInputs | 17:27:54 |
SomeoneSerge (back on matrix) |
What if the upstream package expects a single CUDA_PATH path containing all the cuda dependencies? I think, I saw some people using buildEnv to collect all of the required
If upstream is co-operative, they need to be contacted and offered a proper solution like FindCUDAToolkit.cmake, without any CUDA_PATHs or merged-layout assumptions
| 17:29:22 |
zopieux | thanks for looking. Sadly, I have now compiled the package myself, so it's cached and this doesn't say anything useful. I suppose I can try next time I update. What do you expect out of emptying builders? | 17:29:50 |
SomeoneSerge (back on matrix) |
does it mean that individual derivations from cudaPackages.* should be manually added to buildInputs/nativeBuildInputs. For example, would this mean that I should just manually add cuda_nvcc to nativeBuildInputs?
That's the idea. We could consider an automation more along the lines of propagatedBuildInputs, but we hope to avoid symlinks, because it's hard to prune the references to static libraries after the build
| 17:31:10 |
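A minimal sketch of the pattern confirmed here: the individual redistributable derivations are added directly, in place of the monolithic cudaPackages.cudatoolkit (package and component names are illustrative):

```nix
# Hypothetical package built against per-component CUDA redistributables.
{ stdenv, cudaPackages }:
stdenv.mkDerivation {
  pname = "my-cuda-package"; # illustrative
  version = "0.1";
  # ... src, build phases, etc. ...
  # The compiler is a build-time tool, so it goes in nativeBuildInputs.
  nativeBuildInputs = [ cudaPackages.cuda_nvcc ];
  # Runtime libraries are ordinary buildInputs.
  buildInputs = with cudaPackages; [
    cuda_cudart
    libcublas
  ];
}
```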