SomeoneSerge (matrix works sometimes) | RE: https://discourse.nixos.org/t/why-does-nixpkgs-config-exist/61456/6 RE: wrappers, stdenv, etc. RE: "The easiest way to achieve consistency today is to just enable cudaSupport Nixpkgs-wide using config"
And yet again, I don't think it has to be this way. It's not inherent; it's just a trait of callPackage/makeScope and of overrides being local. To defend the notion of the hypothetical stdenv.cudaSupport: if all of a package's inputs consistently accepted stdenv as a parameter, and could be configured entirely through stdenv, then all it would take to achieve consistency is to apply .override { inherit stdenv; } to all *Inputs. And if we didn't limit ourselves to callPackage's idea of a global "container" from which we "fill in" our parameters, that could probably be automated (essentially by doing dynamic scoping)
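To make that concrete, here's a minimal sketch of the "re-override every input with the same stdenv" idea. It assumes (hypothetically) that every dependency exposes `.override` and accepts `stdenv` as a parameter; the package names are just placeholders:

```nix
{ lib, stdenv, opencv, magma }:

let
  # Assumption: each input takes `stdenv` and can be fully configured
  # through it, so re-overriding with one shared stdenv yields consistency.
  consistent = drv: drv.override { inherit stdenv; };
in
stdenv.mkDerivation {
  pname = "example";
  version = "0.1";
  buildInputs = map consistent [ opencv magma ];
}
```

In real nixpkgs this breaks down because not every input accepts stdenv (or even has `.override`), which is exactly why this would need to be automated at the scoping level rather than done by hand in each derivation.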
But OK, let's leave callPackage and scoping aside. This still made me think about how to layer nvcc-wrapper and the conditional stdenv functionality. I'm back to the notion that we actually can move cudaSupport into (the platform attrsets and) stdenv, replace optionals cudaSupport with optionals stdenv.cudaSupport, and implement cudaStdenv.mkDerivation in such a way that it can downgrade the compiler when it detects that its argument is capable of using CUDA functionality. The detection could be done in two ways: we could demand that derivations explicitly pass an argument such as, say, cudaCapable = true, or we could detect that nvcc is in their nativeBuildInputs. I'd vote for the former, because nvcc isn't the only CUDA-capable compiler (there's also llvm and nvc++), and it's more explicit too. The cudaStdenv constructor could be made configurable so that the downgrading can be disabled (e.g. when we don't care about consistency or LTO and just trust the wrapper, or when llvm offers more relaxed constraints). And regardless of the stdenv layer, the lower-level nvcc wrapper can exist and be used independently.
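A rough sketch of what that cudaStdenv constructor could look like. Everything here is hypothetical: `cudaCapable`, `downgrade`, and `gccForCuda` are made-up names for illustration; only `overrideCC` and `removeAttrs` are things that exist today:

```nix
{ lib, stdenv, overrideCC, gccForCuda }:

# `downgrade` lets users opt out of the compiler downgrade
# (e.g. when they just trust the wrapper, or llvm's constraints
# are more relaxed).
{ downgrade ? true }:

{
  mkDerivation = args@{ cudaCapable ? false, ... }:
    let
      # Downgrade to an NVCC-compatible host compiler only when the
      # derivation explicitly declares it can use CUDA.
      stdenv' =
        if downgrade && cudaCapable && (stdenv.cudaSupport or false)
        then overrideCC stdenv gccForCuda
        else stdenv;
    in
    stdenv'.mkDerivation (removeAttrs args [ "cudaCapable" ]);
}
```

The point being: the opt-in flag makes the detection explicit and compiler-agnostic, whereas sniffing nativeBuildInputs for nvcc would silently miss llvm/nvc++ users.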
CC connor (he/him) (UTC-8)
| 10:09:54 |