!eWOErHSaiddIbsUNsJ:nixos.org

NixOS CUDA

289 Members
CUDA packages maintenance and support in nixpkgs | https://github.com/orgs/NixOS/projects/27/ | https://nixos.org/manual/nixpkgs/unstable/#cuda



16 Dec 2024
matthewcroughan (14:49:07): previously what my flake was doing was far weirder
matthewcroughan (14:49:32): https://github.com/nixified-ai/flake/blob/master/projects/invokeai/default.nix#L66-L96
matthewcroughan (14:49:51): previously it was defining functions that were able to create variants of packages without setting rocmSupport or cudaSupport
matthewcroughan (14:50:00): Just terrible
matthewcroughan (14:50:42): Besides, the modules the flake will export won't interact with the comfyui-nvidia or comfyui-amd attrs; this is just for people who want to try it with nix run
matthewcroughan (14:51:05): In a system using the nixosModules, the overlay will be applied, which strictly ignores the packages attr of the flake
matthewcroughan (14:53:53): the packages attr of the flake is just there for people wanting to use things in a non-NixOS context, really
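The split described above — overlay for NixOS systems, `packages` attr only for `nix run` — could be sketched roughly like this (a hypothetical illustration; the input name `nixified-ai` and overlay attribute are assumptions, not the flake's actual layout):

```nix
# Sketch of a NixOS module consuming the flake via its overlay.
# The overlay injects the project's packages into the system's own
# nixpkgs instance, so they inherit that instance's cudaSupport /
# rocmSupport config; the flake's prebuilt `packages.<system>`
# outputs (comfyui-nvidia, comfyui-amd) are never consulted here.
{ inputs, ... }:
{
  nixpkgs.overlays = [ inputs.nixified-ai.overlays.default ];
  # Downstream config then refers to pkgs.<name> as usual.
}
```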
SomeoneSerge (back on matrix) (14:55:59): They're just completely separate. I guess there are mappings between subsets of the frameworks, as evidenced by ZLUDA, hipify, and https://docs.scale-lang.com. I suppose one could say that ZLUDA is a sort of a runtime proxy, although the multi-versioning bit is still missing.
matthewcroughan (15:33:37): Interestingly, in the case of comfyui, I didn't need to add any rocm or cuda specific stuff
matthewcroughan (15:33:40): that's all in the deps
matthewcroughan (15:34:37): So for it, all I do is swap rocmSupport/cudaSupport in the nixpkgs instance, which is great
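Swapping rocmSupport/cudaSupport in the nixpkgs instance, as described above, might look something like this (a sketch only; the file path `./comfyui.nix`, the hard-coded system, and the exact config attributes are assumptions):

```nix
# Sketch: two variants of one package, built from differently
# configured nixpkgs instances. The package itself contains no
# CUDA/ROCm logic; its dependencies react to the instance's config.
let
  pkgsCuda = import nixpkgs {
    system = "x86_64-linux";
    config = { allowUnfree = true; cudaSupport = true; };
  };
  pkgsRocm = import nixpkgs {
    system = "x86_64-linux";
    config = { rocmSupport = true; };
  };
in {
  comfyui-nvidia = pkgsCuda.callPackage ./comfyui.nix { };
  comfyui-amd   = pkgsRocm.callPackage ./comfyui.nix { };
}
```

The design point is that the variant logic lives entirely in how nixpkgs is instantiated, not in the package expression, so no per-backend code needs to be written.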
17 Dec 2024
connor (burnt/out) (UTC-8) (07:34:41): just left some comments, looks good! Since you're the first person I'm aware of other than myself to use my cuda-packages repo, I'd love any thoughts you had on the user experience... especially given I've not documented anything yet.
connor (burnt/out) (UTC-8) (07:37:58): My understanding was that changes which introduce new functionality or information to ubiquitous components in Nixpkgs should/need to go through the RFC process, because people can/do expect stability around those interfaces, and the review process helps find and fix issues with designs before they're implemented. If there are actual guidelines for which changes require an RFC (I've not searched hard for them), I'd like to see them so I'm at least aware of them lol
connor (burnt/out) (UTC-8) (07:39:57): For awareness, I tagged this issue as CUDA related so it should be on our project board: https://github.com/NixOS/nixpkgs/issues/365262
18 Dec 2024
@dmiskovic:matrix.org (19:38:04) joined the room.
19 Dec 2024
hexa (17:22:18): https://www.cnx-software.com/2024/12/18/249-nvidia-jetson-orin-nano-super-developer-kit-targets-generative-ai-applications-at-the-edge/
matthewcroughan (22:20:04): Saw this, they have a 16G version available too
hexa (22:20:13): much more interesting, agreed
matthewcroughan (22:20:28): Is this a "unified memory architecture" too?
hexa (22:20:31): still hinges on the firmware and mainline support
hexa (22:20:33): no idea
matthewcroughan (22:20:40): 16G for the OS and GPU and CPU
matthewcroughan (22:20:47): Would have to get my hands on it to really know
matthewcroughan (22:21:05): the ML stuff I run needs at least 16G of memory, and if this is a GPU that only has access to 4G of that memory, then it's kinda useless :P
matthewcroughan (22:21:13): I doubt it!
matthewcroughan (22:21:18): But that price point makes me wonder
matthewcroughan (22:21:44): I can't wait to roam around events with a stable diffusion hat running one of these things off battery
SomeoneSerge (back on matrix) (22:57:55): I thought they all are


