| 18 Nov 2024 |
connor (burnt/out) (UTC-8) | Because why have one project when you can have many? | 05:18:55 |
connor (burnt/out) (UTC-8) | https://github.com/ConnorBaker/nix-eval-graph
And I’ve decided to write it in Rust, which I am self-teaching.
And I’ll probably use a graph database, because why not.
And I’ll use NixOS tests for integration testing, because also why not. | 05:20:02 |
connor (burnt/out) (UTC-8) | All this is to say: I am deeply irritated when I see my builders constantly copying around gigantic CUDA libraries. | 05:20:31 |
connor (burnt/out) (UTC-8) | Unrelated to closure woes, I tried to package https://github.com/NVIDIA/MatX and https://github.com/NVIDIA/nvbench and nearly pulled my hair out. If anyone has suggestions for doing so without creating a patched and vendored copy of https://github.com/rapidsai/rapids-cmake or writing my own CMake for everything, I’d love to hear! | 05:23:26 |
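One possible escape hatch, as a hedged sketch rather than a tested recipe: rapids-cmake is pulled in through CMake's FetchContent/CPM machinery, and both honor source-override variables, so a Nix build may be able to point the fetch at a pre-fetched store copy instead of patching or vendoring it. The revs and hashes below are placeholders, and whether MatX/nvbench's bootstrap actually goes through these code paths is the assumption here.

```nix
# Hypothetical sketch: pre-fetch rapids-cmake and redirect the in-build
# download to it. FETCHCONTENT_SOURCE_DIR_<NAME> is standard CMake
# FetchContent behavior; CPM_<name>_SOURCE is the CPM equivalent.
{ stdenv, fetchFromGitHub, cmake, cudaPackages }:

let
  rapids-cmake-src = fetchFromGitHub {
    owner = "rapidsai";
    repo = "rapids-cmake";
    rev = "v24.10.00"; # illustrative tag
    hash = "";         # placeholder
  };
in
stdenv.mkDerivation {
  pname = "nvbench";
  version = "unstable";
  src = fetchFromGitHub {
    owner = "NVIDIA";
    repo = "nvbench";
    rev = "";  # placeholder
    hash = ""; # placeholder
  };
  nativeBuildInputs = [ cmake cudaPackages.cuda_nvcc ];
  cmakeFlags = [
    # Redirect the rapids-cmake fetch to the store copy
    # (there is no network inside the build sandbox anyway).
    "-DFETCHCONTENT_SOURCE_DIR_RAPIDS-CMAKE=${rapids-cmake-src}"
    "-DCPM_rapids-cmake_SOURCE=${rapids-cmake-src}"
  ];
}
```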
connor (burnt/out) (UTC-8) | Also, anyone know how the ROCm maintainers are doing? | 05:26:35 |
SomeoneSerge (back on matrix) | In reply to @connorbaker:matrix.org
Anyway, in the interest of splitting my attention ever more thinly, I decided to start working on some approach to evaluating derivations and building them
The idea being to have
- a service which is given a flake ref and an attribute path and efficiently produces a list of attribute paths to derivations existing under the given attribute path, and stores the eval time somewhere
- a service which is given a flake ref and an attribute path to a derivation and produces the JSON representation of the closure of derivations required to realize the derivation, again storing eval time somewhere
- a service which functions as a job scheduler, using historical data about costs (space, time, memory, CPU usage, etc.) and information about locality (existing store paths on different builders) to decide where to realize a derivation; the historical data is updated upon each realization
Awesome! I've been bracing myself to look into that too. What's your current idea regarding costs and locality? | 07:09:42 |
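For a sense of what the first service boils down to, here's a minimal eval-side sketch (an illustration, not code from nix-eval-graph): walk an attribute set, record the paths that evaluate to derivations, and guard against attributes that throw — roughly what nix-eval-jobs does. The flake ref is just an example.

```nix
# Minimal sketch of "list attribute paths to derivations under an attrpath".
let
  flake = builtins.getFlake "github:NixOS/nixpkgs/nixos-unstable";
  lib = flake.lib;

  collect = prefix: attrs:
    lib.concatLists (lib.mapAttrsToList
      (name: value:
        let
          path = prefix ++ [ name ];
          # Evaluating `value` may throw (broken, unfree, etc.), so probe it.
          probe = builtins.tryEval (lib.isDerivation value);
        in
        if probe.success && probe.value then
          [ (lib.concatStringsSep "." path) ]
        else if probe.success && builtins.isAttrs value
                && (value.recurseForDerivations or false) then
          collect path value
        else
          [ ])
      attrs);
in
collect [ ] flake.legacyPackages.x86_64-linux
```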
SomeoneSerge (back on matrix) | In reply to @connorbaker:matrix.org Unrelated to closure woes, I tried to package https://github.com/NVIDIA/MatX and https://github.com/NVIDIA/nvbench and nearly pulled my hair out. If anyone has suggestions for doing so without creating a patched and vendored copy of https://github.com/rapidsai/rapids-cmake or writing my own CMake for everything, I’d love to hear! we'd need to do that if we were to package rapids itself too, wouldn't we? | 07:11:11 |
connor (burnt/out) (UTC-8) | In reply to @ss:someonex.net Awesome! I've been bracing myself to look into that too. What's your current idea regarding costs and locality? Currently I don't know how I'd even model it... but I've been told that job scheduling is a well-researched problem in HPC communities ;) I started to write something about how I think of high-level tradeoffs between choosing where to build in order to build moar fast, reduce the number of rebuilds (if they are permitted at all), reduce network traffic, etc., and then thought "well, what if the machines aren't homogeneous?" and I've decided it's time for bed. | 08:40:34 |
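Not an answer to the modeling question, but a toy of how the locality part might be scored (every name and number here is invented, and heterogeneity is reduced to a single speed factor): cost = estimated build time on that builder plus transfer time for the closure paths it's missing.

```nix
# Toy scoring sketch, purely illustrative.
let
  lib = (import <nixpkgs> { }).lib;

  # builder: { storePaths, speedFactor, bytesPerSecond }
  # drv:     { estBuildSeconds, closure = [ { path, narSize } ... ] }
  costOn = builder: drv:
    let
      missing = lib.filter (e: !(lib.elem e.path builder.storePaths)) drv.closure;
      missingBytes = lib.foldl' (acc: e: acc + e.narSize) 0 missing;
    in
    drv.estBuildSeconds / builder.speedFactor
    + missingBytes / builder.bytesPerSecond;

  # Pick the cheapest builder for a given derivation.
  schedule = builders: drv:
    builtins.head (lib.sort (a: b: costOn a drv < costOn b drv) builders);
in
schedule
```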
connor (burnt/out) (UTC-8) | In reply to @ss:someonex.net we'd need to do that if we were to package rapids itself too, wouldn't we? I have been avoiding rapids so hard lmao 🙅‍♂️ | 08:40:49 |
connor (burnt/out) (UTC-8) | Unrelated -- if anyone has experience with NixOS VM tests and getting multiple nodes to talk to each other, I'd appreciate pointers. ping can resolve hostnames but curl can't for some reason (https://github.com/ConnorBaker/nix-eval-graph/commit/c5a1e2268ead6ff6ffaab672762c1eedee53f403). | 08:43:02 |
SomeoneSerge (back on matrix) | In reply to @connorbaker:matrix.org Currently I don't know how I'd even model it... but I've been told that job scheduling is a well-researched problem in HPC communities ;) I started to write something about how I think of high-level tradeoffs between choosing where to build in order to build moar fast, reduce the number of rebuilds (if they are permitted at all), reduce network traffic, etc., and then thought "well, what if the machines aren't homogeneous?" and I've decided it's time for bed. True. I've yet to read up on how SLURM and friends do this. Shameless plug: https://github.com/sinanmohd/evanix (slides) | 12:20:00 |
SomeoneSerge (back on matrix) | You should chat with picnoir too | 12:20:44 |
SomeoneSerge (back on matrix) | In reply to @connorbaker:matrix.org Unrelated -- if anyone has experience with NixOS VM tests and getting multiple nodes to talk to each other, I'd appreciate pointers. ping can resolve hostnames but curl can't for some reason (https://github.com/ConnorBaker/nix-eval-graph/commit/c5a1e2268ead6ff6ffaab672762c1eedee53f403). Should just work, what is the error? | 12:22:30 |
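For reference, a minimal two-node sketch (hypothetical names, assuming nixpkgs' testers.runNixOSTest): node hostnames resolve through the /etc/hosts the test driver generates, so name resolution is rarely the culprit. "Connection refused" usually means nothing is listening on that port yet; a firewall drop would look like a hang/timeout instead.

```nix
# Minimal two-node VM test sketch; names are illustrative.
{ pkgs, ... }:
pkgs.testers.runNixOSTest {
  name = "two-node-curl";
  nodes = {
    server = { ... }: {
      services.nginx = {
        enable = true;
        virtualHosts."server".locations."/".return = "200 'hello'";
      };
      # With the default firewall, blocked ports time out rather than refuse;
      # a "connection refused" more likely means the service isn't up yet,
      # hence the wait_for_unit/wait_for_open_port below.
      networking.firewall.allowedTCPPorts = [ 80 ];
    };
    client = { pkgs, ... }: {
      environment.systemPackages = [ pkgs.curl ];
    };
  };
  testScript = ''
    start_all()
    server.wait_for_unit("nginx.service")
    server.wait_for_open_port(80)
    # "server" resolves via the generated /etc/hosts on every node.
    client.succeed("curl --fail http://server/")
  '';
}
```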
connor (burnt/out) (UTC-8) | In reply to @ss:someonex.net True. I'm still yet to read up on how SLURM and friends do this. Shameless plug: https://github.com/sinanmohd/evanix (slides) Woah! Thanks for the links, I wasn't aware of these | 20:17:47 |
| 19 Nov 2024 |
hexa | python-updates with numpy 2.1 has landed in staging | 00:31:36 |
hexa | sowwy | 00:31:40 |
connor (burnt/out) (UTC-8) | In reply to @ss:someonex.net Should just work, what is the error? Curl threw connection refused or something similar; I’ll try to get the log tomorrow | 06:34:11 |
| 20 Nov 2024 |
| Conroy joined the room. | 04:47:44 |
connor (burnt/out) (UTC-8) | I did not get a chance; rip | 07:22:37 |
| Daniel joined the room. | 18:53:01 |
| 22 Nov 2024 |
| deng23fdsafgea joined the room. | 06:27:37 |
| Morgan (@numinit) joined the room. | 17:52:10 |
| 24 Nov 2024 |
sielicki | https://negativo17.org/nvidia-driver/ pretty good read | 21:49:05 |
sielicki | most of this is stuff that nixos gets right, but it's a nice collection of gotchas and solutions | 22:01:49 |
sielicki | anyone have strong opinions on moving nccl and nccl-tests out of cudaPackages? Rationale for moving them out: neither one is distributed as part of the CUDA toolkit and they release on an entirely separate cadence, so there's no real reason for them to be in there. They're no different than, e.g., torch in terms of the CUDA dependency. | 22:16:05 |
SomeoneSerge (back on matrix) | In reply to @sielicki:matrix.org anyone have strong opinions on moving nccl and nccl-tests out of cudaPackages? Rationale for moving them out: neither one is distributed as part of the CUDA toolkit and they release on an entirely separate cadence, so there's no real reason for them to be in there. They're no different than, e.g., torch in terms of the CUDA dependency. iirc we put it in there because if you set tensorflow = ...callPackage ... { cudaPackages = cudaPackages_XX_y; } you'll need to also pass a compatible nccl | 22:17:33 |
SomeoneSerge (back on matrix) | so it's just easier to instantiate each cudaPackages variant with its own nccl and pass it along | 22:17:55 |
sielicki | I guess that's fair, and there is a pretty strong coupling between CUDA versions and nccl versions... e.g., https://github.com/pytorch/pytorch/pull/133593 has been stalled for some time due to NVIDIA dropping the PyPI cu11 package for nccl, so there's reason to keep them consistent even if they technically release separately. | 22:20:12 |
SomeoneSerge (back on matrix) | In reply to @sielicki:matrix.org https://negativo17.org/nvidia-driver/ pretty good read Any highlights on what we might be missing? | 22:22:09 |
sielicki | honestly I am not sure there's anything, I just like the thought that went into it | 22:27:21 |