Sender | Message | Time |
---|---|---|
19 Oct 2024 | ||
hacker1024 | They have long lists of errata | 12:57:52 |
hexa (UTC+1) | yeah, worth a try | 12:58:00 |
hexa (UTC+1) | can always roll back | 12:58:03 |
hexa (UTC+1) | thx | 12:58:58 |
connor (he/him) (UTC-7) | I keep forgetting I added myself as a maintainer to glibc until I get emails for reviews lmao | 19:09:59 |
connor (he/him) (UTC-7) | SomeoneSerge (utc+3) Gaétan Lepage thoughts on having backendStdenv automatically propagate autoAddDriverRunpath and autoPatchelfHook? I feel like forgetting to add the former is a footgun people keep firing, and the latter is a great check to make sure all your dependencies are either present or explicitly ignored. | 19:22:47 |
Gaétan Lepage | I am not sure I am qualified to answer properly. From my point of view, these kinds of automations do indeed help and avoid sneaky mistakes. | 19:24:29 |
hexa (UTC+1) | hacker1024: I think your recommendation was spot on | 21:27:27 |
hexa (UTC+1) | at 22% I see the first [pt_main_thread] instances | 21:27:36 |
hexa (UTC+1) | and they don't seem to crash with microcode updates applied | 21:27:50 |
hexa (UTC+1) | wow, I hope that makes python-updates much smoother in the future | 21:28:06 |
SomeoneSerge (utc+3) | True. Yes and no. Yes, because that'd definitely make one-off and our own contributions easier. No, because once we start propagating it we lose the knowledge of which packages actually need to be patched. It still seems to me that we don't have to patch most packages, because they call cudart and cudart is patchelfed. Maybe yes, because I'm unsure what happens with libcudart_static. I'd be rather strongly opposed to autoPatchelfHook: it's a huge hammer, coarse and imprecise. It can actually erase correct runpaths from an originally correct binary. Let's reserve it for non-[…] Another important thing to consider is (here we go again) whether we want to keep both backendStdenv and the hook, and which of these things should be propagating what. | 21:29:49 |
SomeoneSerge (utc+3) | Even right now there's something cursed going on that e.g. triggers a "rebuild" of libcublas when you override nvcc, and I think that happens because of the propagated hook. That at the least is a surprising behaviour | 21:32:25 |
SomeoneSerge (utc+3) | Another "no" for autoAddDriverRunpath is that it's also just not enough anyway, because of python and all other sorts of projects where we're forced to use wrappers instead | 21:38:08 |
SomeoneSerge (utc+3) | I think I'd vote pro propagation if we could say with some certainty that that is the only way to guarantee correctness for users of libcudart_static and of CMake's CUDA::cuda_driver (just because supporting that scope sounds doable) | 21:41:59 |
SomeoneSerge (utc+3) | hexa (UTC+1): I've been playing with (s)ccache lately and I'm almost amazed we're not using it more widely when building derivations for tests rather than for direct consumption | 21:45:20 |
SomeoneSerge (utc+3) | Which I'm guessing based on the fact that the infrastructure for this is rather lacking | 21:46:18 |
hexa (UTC+1) | where would results be stored? | 21:46:47 |
hexa (UTC+1) | caching of intermediate build steps would also be super helpful 😭 | 21:48:49 |
SomeoneSerge (utc+3) | Same as usual, derivation outputs would be stored in the nix store, they'd never be confused with the "pure" ones because they'd have different hashes. The (s)ccache directory would have to be set up on each builder | 21:48:52 |
SomeoneSerge (utc+3) | In reply to @hexa:lossy.network: Yes, but that's a much bigger refactor | 21:49:04 |
SomeoneSerge (utc+3) | Also I feel like when we do that Nixpkgs will effectively depend on the host having a CoW file system because the alternative sounds too IO intensive | 21:50:21 |
20 Oct 2024 | ||
sielicki | In reply to @ss:someonex.net: Not specific to NixOS, but just a rant from me: there's been a pretty large push around the CUDA world for everyone to move to static libcudart... largely because with CUDA 12 they introduced the minor version compatibility and "CUDA enhanced compatibility" guarantees, and there are a lot of public statements (on GitHub, etc.) from NVIDIA that suggest this is the safest way to distribute packages. All of this is really complicated and I don't fault projects for moving forward under this guidance, but I'm pretty confident that this does not cover all cases and you do still need to think about this stuff. One example of where you still need to think about it: a lot of code uses the runtime API to resolve the driver API (through […]). This is a really easy way to run afoul of the CUDA version-mixing guidelines, and I feel like it's pretty underdiscussed and underdocumented. Those version-mixing guidelines are still important; minor version compatibility does not save you, and it's not the case that if they all start with "12" you don't have to think about it anymore. | 03:10:26 |
sielicki | Don't get me started on pypi wheels, and the nuance between RPATH and RUNPATH, and so on | 03:13:08 |
connor (he/him) (UTC-7) | In reply to @ss:someonex.net: My favorite functionality autoPatchelfHook has is that it will error on unresolved dependencies — I could live without the actual patching, I suppose, but I really like using it to check that all the libraries I need are in scope. Any idea whether such functionality already exists in Nixpkgs, or whether it would be a useful check to add? | 07:30:53 |
alex_nordin joined the room. | 18:27:40 |
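Editor's note: connor's proposal above (having backendStdenv automatically propagate the two hooks) could look roughly like the sketch below. This is purely illustrative — `stdenv.override` with `extraNativeBuildInputs` is a real Nixpkgs mechanism, but the actual definition of `cudaPackages.backendStdenv` differs, and the binding name is hypothetical:

```nix
# Hypothetical sketch, not the real backendStdenv implementation:
# inject both hooks into every derivation built with this stdenv, so
# packages no longer need to list them explicitly.
cudaStdenvWithHooks = backendStdenv.override (old: {
  extraNativeBuildInputs = (old.extraNativeBuildInputs or [ ]) ++ [
    autoAddDriverRunpath # appends the driver-lib runpath to ELF outputs
    autoPatchelfHook     # fails the build on unresolved shared libraries
  ];
});
```

Serge's objection applies directly to this shape: once the hooks ride along implicitly, the package set no longer records which derivations actually needed them.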
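Editor's note: on the (s)ccache exchange, Nixpkgs already ships a `ccacheStdenv` adapter that matches the shape Serge describes — outputs still land in the store under their own hashes, and the cache directory has to be provisioned on each builder and exposed to the sandbox. A minimal sketch; the cache path is illustrative and builder setup varies:

```nix
# Overlay sketch: build a test-only variant of a package with ccache.
# The cached output gets a different store hash, so it can never be
# confused with the "pure" build.
final: prev: {
  myPackageForTests = prev.myPackage.override { stdenv = prev.ccacheStdenv; };
}
# On each builder, the cache directory must exist and be reachable from
# the sandbox, e.g. (illustrative) in nix.conf:
#   extra-sandbox-paths = /var/cache/ccache
```

The per-builder setup requirement is exactly the "infrastructure for this is rather lacking" point from the conversation.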
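Editor's note: sielicki's RPATH/RUNPATH aside rests on the glibc search order: DT_RPATH (honoured only when no DT_RUNPATH is present) is consulted *before* LD_LIBRARY_PATH, while DT_RUNPATH is consulted *after* it and only for the object's direct dependencies. A toy Python model of just that precedence (not a real loader — the default directories are simplified and the ld.so cache is ignored):

```python
def search_order(rpath=None, runpath=None, ld_library_path=None):
    """Return the directory search order glibc uses when resolving one
    object's direct dependencies. Toy model of the documented precedence:
    a present DT_RUNPATH causes DT_RPATH to be ignored, and is consulted
    after LD_LIBRARY_PATH; DT_RPATH is consulted before it."""
    order = []
    if rpath and not runpath:   # DT_RPATH only counts without DT_RUNPATH
        order += rpath
    if ld_library_path:         # user environment comes next
        order += ld_library_path
    if runpath:                 # DT_RUNPATH is searched after the env
        order += runpath
    order += ["/lib", "/usr/lib"]  # default system dirs (simplified)
    return order

# RUNPATH can be shadowed by the user's LD_LIBRARY_PATH...
print(search_order(runpath=["/opt/cuda/lib"], ld_library_path=["/home/u/lib"]))
# ...while RPATH cannot, which is one reason patching tools must
# distinguish the two rather than treat them interchangeably.
print(search_order(rpath=["/opt/cuda/lib"], ld_library_path=["/home/u/lib"]))
```

This is why a hook that rewrites runpaths wholesale can change resolution behaviour even when the original binary was correct.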