NixOS CUDA (!eWOErHSaiddIbsUNsJ:nixos.org)

289 Members | 57 Servers

CUDA packages maintenance and support in nixpkgs | https://github.com/orgs/NixOS/projects/27/ | https://nixos.org/manual/nixpkgs/unstable/#cuda

3 Jul 2024
SomeoneSerge (back on matrix) [17:13:53]:
In reply to hexa (UTC+1): "faissWithCuda pls 😄"
Why not just https://github.com/NixOS/nixpkgs/pull/324379/files#diff-b3a88f86f137f8870849673fb9b06582cb73937114ee34a61ae5604e259829a5R37
SomeoneSerge (back on matrix) [17:14:56]: Jonas Chevalier, while at it, nobody is building import <nixpkgs> { config.rocmSupport = true; } either, and that one is free
SomeoneSerge (back on matrix) [17:16:20]: The only reason not to build that with the NixOS Hydra is... to save resources
hexa (UTC+1) [17:17:40]: not sure how many jobs that will generate
SomeoneSerge (back on matrix) [17:21:54]: Me neither 🙃
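For reference, the ROCm configuration mentioned above is just a nixpkgs config flag; a minimal sketch of evaluating it locally (the attribute built at the end is only an illustrative example):

# rocm.nix: evaluate nixpkgs with ROCm variants enabled globally.
# Build any attribute from it with, e.g.: nix-build rocm.nix -A python3Packages.torch
import <nixpkgs> {
  config.rocmSupport = true;  # the global ROCm flag; ROCm itself is free software
}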
SomeoneSerge (back on matrix) [17:55:39]:

error: blackmagic-desktop-video has been due to being unmaintained

it has been unmaintained, and now it isn't
SomeoneSerge (back on matrix) [19:11:06]:
In reply to hexa (UTC+1): "not sure how many jobs that will generate"

❯ nix-eval-jobs --expr 'import ./pkgs/top-level/release-cuda.nix { }' --force-recurse | wc -l
...
138452

(not counting eval errors)
hexa (UTC+1) [19:13:28]: so all of them
hexa (UTC+1) [19:13:47]: if there was a cache behind the nix-community hydra, then you'd effectively be mirroring cache.nixos.org
SomeoneSerge (back on matrix) [19:23:45]: Yeah... Ideally we'd have a solution that evaluates the full DAGs for vanilla and CUDA nixpkgs, starts building CUDA from the leaves (ehhh, the roots), and always suspends a build if its hash matches the vanilla hash
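A rough sketch of that hash-comparison idea, as a plain Nix expression that checks, for a few illustrative top-level attributes, whether the CUDA evaluation yields a different derivation than the vanilla one (the attribute names and the file name are assumptions for illustration, not an existing tool):

# cuda-diff.nix: report which attributes actually change under cudaSupport.
# Evaluate with: nix-instantiate --eval --strict --read-write-mode cuda-diff.nix
let
  vanilla  = import <nixpkgs> { };
  withCuda = import <nixpkgs> { config = { cudaSupport = true; allowUnfree = true; }; };
  # An attribute only needs a CUDA builder if its derivation differs from the
  # vanilla one; otherwise cache.nixos.org already has it.
  differs = name:
    builtins.unsafeDiscardStringContext vanilla.${name}.drvPath
    != builtins.unsafeDiscardStringContext withCuda.${name}.drvPath;
in
builtins.listToAttrs
  (map (name: { inherit name; value = differs name; })
    [ "opencv" "ffmpeg" "hello" ])  # illustrative attribute names only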
connor (burnt/out) (UTC-8) [19:24:40]: Is there an open collective for the community hydra instance?
connor (burnt/out) (UTC-8) [19:25:56]: If they're going to be building CUDA packages I definitely want to contribute lol
matthewcroughan [22:28:09]: How can onnxruntime use more than 64G of memory with 12 cores?
matthewcroughan [22:28:09]: ugh
matthewcroughan [22:30:58]: oh, maybe because I lack https://github.com/NixOS/nixpkgs/blob/nixos-unstable/pkgs/development/libraries/onnxruntime/default.nix#L191
matthewcroughan [22:31:20]: I think it reaches a part of the build where it runs away, without this
matthewcroughan [22:32:05]: https://github.com/NixOS/nixpkgs/pull/304069
matthewcroughan [22:32:06]: this
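One generic way to rein in a single memory-hungry build like this is to cap its parallelism with an override; a hedged sketch of that pattern (not necessarily what the linked PR or the referenced line in onnxruntime's default.nix does):

# Build onnxruntime with CUDA but without parallel compilation, trading build
# time for peak memory; useful when many CUDA translation units compile at once.
let
  pkgs = import <nixpkgs> {
    config = { cudaSupport = true; allowUnfree = true; };
  };
in
pkgs.onnxruntime.overrideAttrs (old: {
  enableParallelBuilding = false;  # serialize the build phase
})
# Alternatively, keep parallel building but lower the per-build core count
# globally, e.g. nix-build --cores 4, or the `cores` setting in nix.conf.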
4 Jul 2024
Jonas Chevalier [08:06:20]:
In reply to connor (burnt/out) (UTC-8): "Is there an open collective for the community hydra instance?"

yes, we spend it all on hardware: https://opencollective.com/nix-community

we could also explore hardware donations if you want to bring esoteric hardware to the build farm.
Jonas Chevalier [08:07:37]:
In reply to SomeoneSerge: "Hm, so in Hydra you 'create a jobset' somewhere, like in a web UI, before you merge the Terraform configs? Or is the tf config the whole thing, but you deployed it manually?"

The jobset is created with Terraform via https://github.com/nix-community/infra/blob/master/terraform/hydra-projects.tf

This works well because Hydra is a mix of stateful stuff, so having a convergence engine is quite nice there.
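For readers who haven't set up Hydra before: such a jobset boils down to a handful of fields that the Terraform resource (or the web UI) fills in. Roughly, expressed here as a Nix attrset purely for illustration (the input name, branch, and intervals are assumptions, not the actual nix-community configuration):

{
  description      = "nixpkgs with cudaSupport enabled";
  enabled          = 1;
  nixexprinput     = "nixpkgs";                         # which input holds the Nix expression
  nixexprpath      = "pkgs/top-level/release-cuda.nix"; # entry point inside that input
  inputs.nixpkgs = {
    type  = "git";
    value = "https://github.com/NixOS/nixpkgs.git nixos-unstable-small";
  };
  checkinterval    = 3600;  # poll the input hourly
  schedulingshares = 100;   # relative share of the build farm
  keepnr           = 3;     # number of evaluations to keep
}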
Jonas Chevalier [08:08:18]:
In reply to SomeoneSerge: "Jonas Chevalier, while at it, nobody is building import <nixpkgs> { config.rocmSupport = true; } either, and that one is free"

Ok, let's do that once CUDA is stable. Building unfreeRedistributable could also be nice.
Jonas Chevalier [08:11:45]:
In reply to SomeoneSerge: "Why not just https://github.com/NixOS/nixpkgs/pull/324379/files#diff-b3a88f86f137f8870849673fb9b06582cb73937114ee34a61ae5604e259829a5R37"

I think this is going to break our instance. The main Hydra needs 128GB of RAM to evaluate all of nixpkgs. If you want to keep the list up to date, it's probably better to invest in a script (that you run locally and commit the result).
Philip Taron (UTC-8) left the room. [15:46:31]
5 Jul 2024
Jonas Chevalier [06:10:17]: I don't know if this has been discussed before: did you look at aligning the package versions with some upstream? For example, Nvidia releases the nvcr.io Docker images. If we could provide the same versions as package sets, it would reduce the switching cost for those users.
SomeoneSerge (back on matrix) [07:20:34]:
In reply to Jonas Chevalier: "I don't know if this has been discussed before: did you look at aligning the package versions with some upstream? For example, Nvidia releases the nvcr.io Docker images. If we could provide the same versions as package sets, it would reduce the switching cost for those users."

Well, if we're talking about cudaPackages, they are aligned with the manifests that upstream advertises.
SomeoneSerge (back on matrix) [07:21:02]:

"it would reduce the switching cost for those users."

Do you have a specific user story in mind?
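On the version-alignment point: nixpkgs exposes CUDA releases as versioned package sets, so matching the toolkit version of a given nvcr.io image mostly means picking the corresponding set. A minimal sketch (the versioned attribute used here, cudaPackages_12_2, is assumed to exist in your nixpkgs revision):

let
  pkgs = import <nixpkgs> { config.allowUnfree = true; };
  # Pick the package set whose CUDA release matches the target container image,
  # e.g. an nvcr.io image built on CUDA 12.2.
  cudaPackages = pkgs.cudaPackages_12_2;
in {
  # A few components from the pinned set; dependents that take `cudaPackages`
  # as an argument can be overridden (or overlaid) to use it instead.
  inherit (cudaPackages) cuda_nvcc cuda_cudart libcublas cudnn;
}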
