| 31 Aug 2022 |
tpw_rules | well i mean github's nice "here's what's changed in the force push" button | 22:50:36 |
hexa (UTC+1) | huh, ok. Never used that | 22:51:21 |
| 1 Sep 2022 |
SomeoneSerge (back on matrix) | Ah, OK. I think some derivations have already been using that for a while | 11:19:55 |
SomeoneSerge (back on matrix) | But not e.g. pytorch, I think. They had some custom stuff and one would have to experiment with getting their way around it | 11:21:18 |
SomeoneSerge (back on matrix) | I'll update the scripts for nixpkgs-unfree cachix | 11:26:15 |
SomeoneSerge (back on matrix) | 🤔 did it even have to be cu113? | 11:26:48 |
SomeoneSerge (back on matrix) | RE: CUDA Arch List
- Early on, there was talk of maybe making a global config.cudaArchList attribute, analogous to config.cudaSupport, that all packages would inherit from. This is so we could switch all of nixpkgs' CUDA architectures at once
- This was before we had cudaPackages_XX package sets and more or less arbitrary CUDA versions
- Now it might make sense to have global default lists per package set (they support different ranges of architectures), e.g. cudaPackages_XX.cudaArchList. They could probably be implemented as just min/max limits instead
- When drafting a PR on this, it would be nice to provide functions to convert back and forth between a "default" list format (e.g. cmake's CUDA_ARCHITECTURES) and whatever other formats we frequently deal with
RE: Building for a single target/building for widest compatibility
A thought that's been brought up here before, although now I'm skeptical of it, is that rather than building for all architectures at once we could cache many single-target builds. The idea is that when the end user has to build something extra that hasn't been cached, they'd spend less compute
A counter-point would be that the user can override the arch list just for their extra package instead
| 11:48:58 |
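[A hedged sketch of one direction of the conversion mentioned above, from a dotted capability list (as in nixpkgs' cudaCapabilities, e.g. "7.5 8.6") to nvcc -gencode flags; the caps variable here is a hypothetical example input, not an existing nixpkgs attribute. cmake's CUDA_ARCHITECTURES uses the same digits without the dot ("75;86").]

```shell
#!/usr/bin/env bash
# Sketch only: convert dotted compute capabilities to nvcc -gencode flags.
# "caps" is a hypothetical input list, not an official nixpkgs attribute.
caps="7.5 8.6"
for cap in $caps; do
  arch="${cap/./}"   # "7.5" -> "75"
  # compute_XX is the virtual (PTX) architecture, sm_XX the real one
  printf -- '-gencode=arch=compute_%s,code=sm_%s\n' "$arch" "$arch"
done
# prints:
#   -gencode=arch=compute_75,code=sm_75
#   -gencode=arch=compute_86,code=sm_86
```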
tpw_rules | if we build for all architectures why would the user have to build something extra? | 15:58:30 |
SomeoneSerge (back on matrix) | Because there will always be some packages we haven't built | 15:58:51 |
SomeoneSerge (back on matrix) | ...an opencv.override { enableSomething = true; } | 15:59:24 |
tpw_rules | oh i see | 15:59:36 |
tpw_rules | yeah in that case i think the user would also want to override their arch list, if it is useful for their situation | 15:59:52 |
tpw_rules | a bunch of single target builds will have a LOT of overhead in terms of all the common cpu stuff | 16:00:17 |
linj | what is the meaning of `${out,lib,bin}` in `find ${out,lib,bin} -type f`? | 23:52:58 |
linj | my experiment shows that `${out,lib,bin}` is the same as `${out}` | 23:53:40 |
linj | https://github.com/NixOS/nixpkgs/blob/f0daeb19cbf07d62376f755dac23d1d6d37eea93/pkgs/development/compilers/cudatoolkit/auto-add-opengl-runpath-hook.sh#L6 | 23:53:56 |
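[For context on the question above: `${out,lib,bin}` is bash case-modification syntax, `${parameter,pattern}`, which lowercases the first character of `$out` if it matches the pattern. The pattern here never matches the leading `/` of a store path, so it expands to `$out` unchanged, which matches linj's experiment. The intended construct was presumably brace expansion. The store path below is a made-up placeholder:]

```shell
#!/usr/bin/env bash
# Placeholder standing in for a Nix $out store path (hypothetical value).
out=/nix/store/abc-cudatoolkit

# ${out,lib,bin} is case modification ${parameter,pattern}: lowercase the
# first character of $out if it matches the pattern "lib,bin".
# "/" never matches, so this is a no-op, identical to ${out}.
echo "${out,lib,bin}"    # /nix/store/abc-cudatoolkit

# Brace expansion needs a slash and produces one word per alternative:
echo "$out"/{lib,bin}    # /nix/store/abc-cudatoolkit/lib /nix/store/abc-cudatoolkit/bin
```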
| 2 Sep 2022 |
linj | In reply to @me:linj.tech https://github.com/NixOS/nixpkgs/blob/f0daeb19cbf07d62376f755dac23d1d6d37eea93/pkgs/development/compilers/cudatoolkit/auto-add-opengl-runpath-hook.sh#L6 is it a bug? FRidh | 14:08:42 |
linj | Do you mean `for file in $(find $out/{lib,bin} -type f); do`? | 14:28:47 |
| 4 Sep 2022 |
SomeoneSerge (back on matrix) | I'm not familiar with this bash syntax 🤣 But, FYI, I had a variation of this code meant to enumerate the outputs list: https://github.com/SomeoneSerge/nixpkgs/blob/c0fc4b2abc6322db67b4ad4aac1ddb8ddcccfc43/pkgs/development/compilers/cudatoolkit/auto-add-opengl-runpath-hook.sh
Shellcheck still won't like `for ... in $(...)` though
| 05:45:57 |
SomeoneSerge (back on matrix) | (it keeps amazing me, frankly speaking, that we still write the imperative parts in bash) | 05:48:07 |
SomeoneSerge (back on matrix) | Here's the original:
pkgs/os-specific/darwin/moltenvk/default.nix
156: for output in "''${!outputs[@]}"; do
| 05:51:31 |
FRidh | In reply to @me:linj.tech (`Do you mean for file in $(find $out/{lib,bin} -type f); do?`): these are all `$outputs` that may contain patchable objects | 09:45:59 |
FRidh | probably better to check outputs | 09:46:58 |
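[A hedged sketch of the loop being discussed, not the actual nixpkgs hook: it assumes the standard Nix builder environment, where `$outputs` lists the output names ("out lib bin") and each name is also a variable holding that output's store path. Reading with `find -print0` avoids the word splitting that shellcheck flags in `for f in $(find ...)`:]

```shell
#!/usr/bin/env bash
# Sketch only, assuming Nix's builder environment variables.
# $outputs: space-separated output names; ${!output}: that output's path.
read -ra outputNames <<< "$outputs"
for output in "${outputNames[@]}"; do
  # -print0 / read -d '' handles filenames with spaces; no word splitting.
  while IFS= read -r -d '' f; do
    echo "would patch: $f"   # stand-in for e.g. addOpenGLRunpath "$f"
  done < <(find "${!output}" -type f -print0)
done
```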
| 13 Sep 2022 |
| danielrf joined the room. | 21:37:24 |
SomeoneSerge (back on matrix) | Off-topic, but I imagine this kind of issue could also arise with a CUDA application, and I wonder if you here might be the right people to ask: there's a program that is built with gcc10StdenvCompat. It links dynamically against gcc10's libstdc++. At runtime the program needs to use OpenGL libraries (from the same nixpkgs revision), some of which link against gcc11's libstdc++. This results in a failure.
I simply don't know what the right way to deploy something like this is (or if there is one), and it's kind of making me sad.
https://github.com/NixOS/nixpkgs/issues/190984
| 21:49:29 |
SomeoneSerge (back on matrix) | ...I'm not sure this could be built and run on any typical FHS distribution either (unless one actually uses a repo from X years ago where gcc10 is the default, and where both mesa and fluxus are in sync) | 21:52:10 |
| 14 Sep 2022 |
hexa (UTC+1) | preparing another python-updates run | 02:36:04 |
| 15 Sep 2022 |
| m_algery joined the room. | 12:33:41 |