NixOS CUDA | 324 Members | 64 Servers
CUDA packages maintenance and support in nixpkgs | https://github.com/orgs/NixOS/projects/27/ | https://nixos.org/manual/nixpkgs/unstable/#cuda
| Message | Time |
|---|---|
| 11 Feb 2023 | |
| In reply to @FRidh:matrix.org: Are you familiar with Guix's "grafts"? I just heard about them; they seem to aim at solving a similar issue | 13:27:13 |
| Yeah, I think it would have just worked if I were using docker or podman. Really, podman was pretty close; it's just that containerd is using cgroupsv2 and the podman config.toml has no-cgroups set to true. | 13:30:13 |
| But alas. I am working with the kubernetes config in nixpkgs, which is using containerd. | 13:30:53 | |
| Yes, I think this was based on https://github.com/NixOS/rfcs/pull/3 | 15:33:45 | |
| or some earlier proposal from nbp | 15:34:38 | |
| Would anyone have the bandwidth to review https://github.com/NixOS/nixpkgs/pull/215552 and https://github.com/NixOS/nixpkgs/pull/215549 (which depends on the previous one)? It doesn't look to me like the @NixOS/cuda-maintainers tag is working unfortunately and I wasn't sure who to tag. | 19:18:02 | |
| connor (he/him): Looks like the ping worked. Would also help if you indicated that you tested it, as the template asks you to do... | 19:19:14 | |
| 13 Feb 2023 | |
| In reply to @FRidh:matrix.org: The price of a particular kind of composability, more like it. If we gave up "being able to deploy multiple versions of the same library" in favour of invalidating the conflicting combinations of libraries, we could have a "pure" system for quickly generating arbitrary FHS trees. We could also rely on … | 15:46:08 |
| 15 Feb 2023 | |
| It looks like cuDNN 8.8.0 is available as redistributables (https://developer.download.nvidia.com/compute/redist/cudnn/v8.8.0/local_installers/11.8/), but they're no longer providing -archive.tar.xz files. Since I'm trying to update nixpkgs to the new version, should I look into unpacking an RPM or DEB file? | 21:07:38 |
| 16 Feb 2023 | |
| In reply to @connorbaker:matrix.org: posted this at the conda-forge feedstock repo. They typically have some contact with NVIDIA: https://github.com/conda-forge/cudnn-feedstock/issues/54#issuecomment-1432630957 | 07:18:10 |
| 17 Feb 2023 | |
| In reply to @connorbaker:matrix.org: ugh, sounds like a bug | 16:46:55 |
| I see the tarballs here: https://developer.download.nvidia.com/compute/cudnn/redist/cudnn/linux-x86_64/ ... unless I'm misunderstanding | 16:49:47 |
| the cuda11 one is meant for all of cuda 11.x | 16:51:05 | |
| Did they... move the downloads? | 16:52:45 | |
| Yes, this was done for consistency. https://docs.nvidia.com/cuda/cuda-installation-guide-linux/#tarball-and-zip-archive-deliverables | 17:03:43 |
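For reference, fetching a cuDNN tarball from the redist layout linked above might look like the sketch below. The exact version/build suffix and the hash are illustrative placeholders, not verified values:

```nix
# Sketch only: fetching a cuDNN tarball from the new redist layout.
# The filename suffix and hash below are placeholders -- fetch the real
# hash with e.g. `nix store prefetch-file <url>` before using this.
{ fetchurl }:

fetchurl {
  url = "https://developer.download.nvidia.com/compute/cudnn/redist/cudnn/linux-x86_64/cudnn-linux-x86_64-8.8.0.121_cuda11-archive.tar.xz";
  hash = "sha256-AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=";
}
```

As noted above, the `cuda11` suffix covers all of CUDA 11.x, so one tarball serves the whole minor-version series.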
| 20 Feb 2023 | |
| How do you all handle PRs which have dependencies on other PRs? For context, this is related to a PR I made here (https://github.com/NixOS/nixpkgs/pull/215229) which has spiraled into a grab-bag of changes. As an example, if you have a PR … The closest thing I think I've seen before is PRs against … | 15:14:57 |
| Additionally, when there are no maintainers and a package is failing on master (because of an unrelated issue), who would you recommend tagging for review? For example, caffe: https://github.com/NixOS/nixpkgs/pull/217330 | 17:21:18 |
| In reply to @connorbaker:matrix.org: I would tag the people who have contributed to the file in my PR for review. If my PR gets approvals, I would tag currently active committers (people who have just merged some PRs) for merging. | 17:38:27 |
| In reply to @connorbaker:matrix.org: That's always tough. I think your approach here, all in one PR but with clear commits, is fine. The thing is, there are just not that many contributors to CUDA in Nixpkgs, and it's quite a big PR, so it takes some effort to review. | 18:47:27 |
| https://github.com/NixOS/nixpkgs/pull/217322 is closer to what I envision doing once I've further split apart that large PR. Does that seem okay? Is there an automated tool I should be using to do something similar to this? | 19:03:27 | |
| Thank you all for the work you do maintaining the CUDA-accelerated packages. Building jaxlib and tensorflowWithCuda repeatedly is awful. | 21:45:11 |
| 21 Feb 2023 | |
| https://github.com/NixOS/nixpkgs/pull/217497 | 16:55:31 | |
| Has anyone used or set up ccache for any of the CUDA derivations? I know they take a while to build, and I'm curious what's been done to try to reduce build times. | 21:07:03 |
| Hi! Thank you for investing your time and work in this right now! | 21:20:45 | |
| Interesting. I just looked up the ccache page on the NixOS wiki; it suggests one can just drop in something called ccacheStdenv | 21:21:46 |
| Do you know if packages built that way would work as substitutes for normal stdenv ones? | 21:22:59 |
| Unfortunately I think it would be a different derivation :l | 21:35:02 | |
| so, no magic :( | 21:57:53 |
| From a conversation I had, it seems like it's intended more for use in a dev shell than for the final derivation | 23:00:04 |
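For anyone who still wants to try the ccacheStdenv drop-in discussed above, a minimal overlay sketch (the cache directory and the `myCudaApp` attribute are illustrative placeholders; the override pattern follows the NixOS wiki's ccache page):

```nix
# Sketch: an overlay that rebuilds one package with ccacheStdenv.
# `/var/cache/ccache` and `myCudaApp` are illustrative, not real names.
final: prev: {
  ccacheWrapper = prev.ccacheWrapper.override {
    extraConfig = ''
      export CCACHE_DIR=/var/cache/ccache  # must be writable by the nix build users
    '';
  };
  # Swapping the stdenv changes the derivation hash, so -- as noted in the
  # discussion above -- the result will not be substituted from cache.nixos.org.
  myCudaApp = prev.myCudaApp.override { stdenv = final.ccacheStdenv; };
}
```

The cache only pays off across repeated local rebuilds of similar sources, which is why it fits a dev shell better than a published derivation.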
| 22 Feb 2023 | |
| so, I have an application that wants tflite_runtime | 00:17:18 | |