!eWOErHSaiddIbsUNsJ:nixos.org

NixOS CUDA

294 Members
CUDA packages maintenance and support in nixpkgs | https://github.com/orgs/NixOS/projects/27/ | https://nixos.org/manual/nixpkgs/unstable/#cuda
60 Servers



16 Dec 2024
matthewcroughan @fosdem: I'm happy as long as I don't have to do weird things to achieve it  14:48:58
matthewcroughan @fosdem: and for me, this is not weird  14:49:02
matthewcroughan @fosdem: previously what my flake was doing was far weirder  14:49:07
matthewcroughan @fosdem: https://github.com/nixified-ai/flake/blob/master/projects/invokeai/default.nix#L66-L96  14:49:32
matthewcroughan @fosdem: previously it was defining functions that were able to create variants of packages without setting rocmSupport or cudaSupport  14:49:51
matthewcroughan @fosdem: Just terrible  14:50:00
matthewcroughan @fosdem: Besides, the modules the flake will export won't interact with the comfyui-nvidia or comfyui-amd attrs; this is just for people who want to try it with nix run  14:50:42
matthewcroughan @fosdem: In a system using the nixosModules, the overlay will be applied, which strictly ignores the packages attr of the flake  14:51:05
matthewcroughan @fosdem: the packages attr of the flake is just there for people wanting to use things in a non-NixOS context, really  14:53:53
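(For context, a minimal sketch of the pattern described above: a flake exposing comfyui-nvidia and comfyui-amd as plain packages for nix run, each built from its own nixpkgs instance with the backend flag set, while the overlay stays backend-agnostic. This is illustrative only, not the actual nixified-ai code; the comfyui attribute and overlays.default name are assumed.)

    {
      # Hypothetical flake outputs; sketches the packages-attr idea only.
      outputs = { self, nixpkgs, ... }:
        let
          system = "x86_64-linux";
          # Instantiate nixpkgs with a given backend config plus the flake's overlay.
          mkPkgs = extra: import nixpkgs {
            inherit system;
            config = { allowUnfree = true; } // extra;
            overlays = [ self.overlays.default ];  # assumed overlay providing `comfyui`
          };
          pkgsCuda = mkPkgs { cudaSupport = true; };
          pkgsRocm = mkPkgs { rocmSupport = true; };
        in {
          packages.${system} = {
            # `nix run .#comfyui-nvidia` / `nix run .#comfyui-amd`
            comfyui-nvidia = pkgsCuda.comfyui;
            comfyui-amd = pkgsRocm.comfyui;
          };
        };
    }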
SomeoneSerge (back on matrix): They're just completely separate.
I guess there are mappings between subsets of the frameworks, as evidenced by ZLUDA, hipify, and https://docs.scale-lang.com.
I suppose one could say that ZLUDA is a sort of a runtime proxy, although the multi-versioning bit is still missing.  14:55:59
matthewcroughan @fosdem: Interestingly, in the case of comfyui I didn't need to add any rocm or cuda specific stuff  15:33:37
matthewcroughan @fosdem: that's all in the deps  15:33:40
matthewcroughan @fosdem: So for it, all I do is swap rocmSupport/cudaSupport in the nixpkgs instance, which is great  15:34:37
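(An assumed minimal example, not taken from the flake, of what "swapping the flag in the nixpkgs instance" looks like: the backend choice lives entirely in the nixpkgs config, and dependencies such as torch read cudaSupport/rocmSupport from that config, so the package expression itself carries no backend-specific code.)

    let
      # Two nixpkgs instances that differ only in the backend flag; everything
      # downstream (torch, etc.) picks the matching variant from config.
      pkgsCuda = import <nixpkgs> { config = { allowUnfree = true; cudaSupport = true; }; };
      pkgsRocm = import <nixpkgs> { config.rocmSupport = true; };
    in {
      torch-cuda = pkgsCuda.python3Packages.torch;  # builds with CUDA enabled
      torch-rocm = pkgsRocm.python3Packages.torch;  # builds with ROCm enabled
    }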
17 Dec 2024
connor (burnt/out) (UTC-8): just left some comments, looks good!
Since you're the first person I'm aware of other than myself to use my cuda-packages repo, I'd love any thoughts you had on the user experience... especially given I've not documented anything yet.  07:34:41
connor (burnt/out) (UTC-8): My understanding was that changes which introduce new functionality or information to ubiquitous components in Nixpkgs should/need to go through the RFC process, because people can/do expect stability around those interfaces, and so the review process helps find and fix issues with designs before they're implemented.
If there are actual guidelines for changes that would require an RFC (I've not searched hard for them), I'd like to see them so I'm at least aware of them lol  07:37:58
connor (burnt/out) (UTC-8): For awareness, I tagged this issue as CUDA related so it should be on our project board: https://github.com/NixOS/nixpkgs/issues/365262  07:39:57
18 Dec 2024
@dmiskovic:matrix.org joined the room.  19:38:04
19 Dec 2024
hexa (UTC+1): https://www.cnx-software.com/2024/12/18/249-nvidia-jetson-orin-nano-super-developer-kit-targets-generative-ai-applications-at-the-edge/  17:22:18
matthewcroughan @fosdem: Saw this, they have a 16G version available too  22:20:04
hexa (UTC+1): much more interesting, agreed  22:20:13
matthewcroughan @fosdem: Is this a "unified memory architecture" too?  22:20:28
hexa (UTC+1): still hinges on the firmware and mainline support  22:20:31
hexa (UTC+1): no idea  22:20:33
matthewcroughan @fosdem: 16G for the OS and GPU and CPU  22:20:40
matthewcroughan @fosdem: Would have to get my hands on it to really know  22:20:47
matthewcroughan @fosdem: the ML stuff I run needs at least 16G of memory, and if this is a GPU that only has access to 4G of that memory, then it's kinda useless :P  22:21:05
matthewcroughan @fosdem: I doubt it!  22:21:13
matthewcroughan @fosdem: But that price point makes me wonder  22:21:18


