| 16 Dec 2024 |
matthewcroughan | previously what my flake was doing was far weirder | 14:49:07 |
matthewcroughan | https://github.com/nixified-ai/flake/blob/master/projects/invokeai/default.nix#L66-L96 | 14:49:32 |
matthewcroughan | previously it was defining functions that were able to create variants of packages without setting rocmSupport or cudaSupport | 14:49:51 |
matthewcroughan | Just terrible | 14:50:00 |
matthewcroughan | Besides, the modules the flake will export won't interact with the comfyui-nvidia or comfyui-amd attrs; those are just for people who want to try it with nix run | 14:50:42 |
matthewcroughan | In a system using the nixosModules, the overlay will be applied, which strictly ignores the packages attr of the flake | 14:51:05 |
matthewcroughan | the packages attr of the flake is really just there for people wanting to use things in a non-NixOS context | 14:53:53 |
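The split being described — a packages attr for `nix run` users, and an overlay that NixOS systems consume instead — could be sketched roughly like this (a hedged illustration, not the flake's actual code; the `pkgsFor` helper and attribute paths are assumptions):

```nix
{
  outputs = { self, nixpkgs, ... }:
    let
      # Hypothetical helper: instantiate nixpkgs with a GPU backend
      # enabled globally, so every dependency picks it up.
      pkgsFor = gpuConfig: import nixpkgs {
        system = "x86_64-linux";
        config = gpuConfig;
      };
    in
    {
      # For people trying it with `nix run` outside NixOS.
      packages.x86_64-linux = {
        comfyui-nvidia = (pkgsFor { cudaSupport = true; }).comfyui;
        comfyui-amd = (pkgsFor { rocmSupport = true; }).comfyui;
      };

      # NixOS systems apply this overlay via the nixosModules output;
      # that path never touches the packages attr above.
      overlays.default = final: prev: {
        comfyui = prev.callPackage ./comfyui { };
      };
    };
}
```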
SomeoneSerge (back on matrix) | They're just completely separate. I guess there are mappings between subsets of the frameworks, as evidenced by ZLUDA, hipify, and https://docs.scale-lang.com. I suppose one could say that ZLUDA is a sort of a runtime proxy, although the multi-versioning bit is still missing. | 14:55:59 |
matthewcroughan | Interestingly in the case of comfyui, I didn't need to add any rocm or cuda specific stuff | 15:33:35 |
matthewcroughan | that's all in the deps | 15:33:40 |
matthewcroughan | So for it, all I do is swap rocmSupport/cudaSupport in the nixpkgs instance, which is great | 15:34:35 |
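That "swap in the nixpkgs instance" is just nixpkgs' global config flags; something like the following minimal sketch (the `./comfyui.nix` path and use of `callPackage` are illustrative assumptions):

```nix
let
  # Two nixpkgs instances differing only in the global GPU flag.
  # cudaSupport/rocmSupport propagate down to torch and friends,
  # so the package expression itself carries no backend-specific code.
  pkgsCuda = import <nixpkgs> { config.cudaSupport = true; };
  pkgsRocm = import <nixpkgs> { config.rocmSupport = true; };
in
{
  comfyui-nvidia = pkgsCuda.callPackage ./comfyui.nix { };
  comfyui-amd = pkgsRocm.callPackage ./comfyui.nix { };
}
```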
| 17 Dec 2024 |
connor (burnt/out) (UTC-8) | just left some comments, looks good! Since you're the first person I'm aware of other than myself to use my cuda-packages repo, I'd love any thoughts you had on the user experience... especially given I've not documented anything yet. | 07:34:41 |
connor (burnt/out) (UTC-8) | My understanding was that changes which introduce new functionality or information to ubiquitous components in Nixpkgs should/need to go through the RFC process, because people can/do expect stability around those interfaces, and the review process helps find and fix design issues before they're implemented. If there are actual guidelines for which changes require an RFC (I've not searched hard for them), I'd like to see them so I'm at least aware of them lol | 07:37:58 |
connor (burnt/out) (UTC-8) | For awareness, I tagged this issue as CUDA related so it should be on our project board: https://github.com/NixOS/nixpkgs/issues/365262 | 07:39:57 |
| 18 Dec 2024 |
| @dmiskovic:matrix.org joined the room. | 19:38:04 |
| 19 Dec 2024 |
hexa | https://www.cnx-software.com/2024/12/18/249-nvidia-jetson-orin-nano-super-developer-kit-targets-generative-ai-applications-at-the-edge/ | 17:22:18 |
matthewcroughan | Saw this, they have a 16G version available too | 22:20:04 |
hexa | much more interesting, agreed | 22:20:13 |
matthewcroughan | Is this a "unified memory architecture" too? | 22:20:28 |
hexa | still hinges on the firmware and mainline support | 22:20:31 |
hexa | no idea | 22:20:33 |
matthewcroughan | 16G for the OS and GPU and CPU | 22:20:40 |
matthewcroughan | Would have to get my hands on it to really know | 22:20:47 |
matthewcroughan | the ML stuff I run needs at least 16G of memory, and if this is a GPU that only has access to 4G of that memory, then it's kinda useless :P | 22:21:05 |
matthewcroughan | I doubt it! | 22:21:13 |
matthewcroughan | But that price point makes me wonder | 22:21:18 |
matthewcroughan | I can't wait to roam around events with a stable diffusion hat running one of these things off battery | 22:21:44 |
SomeoneSerge (back on matrix) | I thought they all are | 22:57:55 |