!eWOErHSaiddIbsUNsJ:nixos.org

NixOS CUDA

300 Members | 59 Servers

CUDA packages maintenance and support in nixpkgs | https://github.com/orgs/NixOS/projects/27/ | https://nixos.org/manual/nixpkgs/unstable/#cuda



3 Mar 2026
@caniko:matrix.org caniko: any chance to build gimp and handbrake? 19:53:02
5 Mar 2026
@kaya:catnip.ee kaya 𖤐: Not sure if it's been mentioned here before but, for anyone affected by flash-attn builds OOM-ing: I noticed an upstream patch that tries to counter it https://github.com/Dao-AILab/flash-attention/pull/2079
Might be possible to apply it to the nix package 🤔
13:26:33
@sporeray:matrix.org Robbie Buxton: Omg, the bane of my existence 16:39:35
@sporeray:matrix.org Robbie Buxton: That has OOMed on an ungodly amount of RAM 16:40:08
@sporeray:matrix.org Robbie Buxton: Nice to see they are trying to fix it 16:40:26
6 Mar 2026
@connorbaker:matrix.org connor (burnt/out) (UTC-8): I found zram gave an amazing compression ratio (I think the data being allocated by NVCC was all zeros), so even though it allocated upwards of 0.25 TB of RAM I didn't need to reduce the number of jobs 05:13:31
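[Editor's note] The zram trick described above can be enabled declaratively on a NixOS builder. A minimal sketch; the `algorithm` and `memoryPercent` values are illustrative choices, not taken from this thread:

```nix
{
  # Compressed swap held in RAM. Near-zero-filled NVCC allocations compress
  # extremely well, so large parallel CUDA builds can survive without
  # lowering the job count.
  zramSwap = {
    enable = true;
    algorithm = "zstd";   # good ratio/speed trade-off (assumption)
    memoryPercent = 150;  # uncompressed zram size as % of RAM; tune per builder
  };
}
```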
@glepage:matrix.org Gaétan Lepage: I enabled this on our builders. 10:18:40
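[Editor's note] The alternative connor avoided, reducing the number of jobs, is also a one-liner on a NixOS builder. A sketch with illustrative values:

```nix
{
  # Cap build parallelism so several concurrent NVCC invocations
  # don't exhaust the machine's memory.
  nix.settings = {
    max-jobs = 2;  # derivations built concurrently (illustrative value)
    cores = 8;     # NIX_BUILD_CORES handed to each builder (illustrative value)
  };
}
```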
@connorbaker:matrix.org connor (burnt/out) (UTC-8): Yay, talking at the state of the union is over; sorry, I asked for packages we test specifically and then mentioned none of them 18:59:31
@h4k:matrix.org mike: hi all 19:02:58
@h4k:matrix.org mike: any guide for using nix on ubuntu for cuda torch? 19:03:27
@h4k:matrix.org mike: i am basically running out of memory compiling it all on my machine 19:11:16
@h4k:matrix.org mike: ok, i got it working; will document once i've run all the tests. 19:19:50
@h4k:matrix.org mike:

ATTEMPTS.md - Chronicles all 8 attempts with re-evaluation using current knowledge:

  1. Nixpkgs torch (no CUDA)
  2. Build from source (OOM)
  3. Nix Python + pip (glibc conflicts)
  4. System Python + pip (works but not reproducible)
  5. fetchurl wheels (incomplete)
  6. Copy venv (incomplete)
  7. buildPythonPackage test (learning)
  8. Hybrid solution (SUCCESS)

EXPLANATION.md - Explains WHY the solution works:

  • The glibc problem and how we solved it
  • Why Nix Python + pip wheels is the right approach
  • How makeLibraryPath simplifies library management
  • The trade-off between purity and practicality
19:20:23
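[Editor's note] The `makeLibraryPath` point above can be illustrated with a small sketch: `lib.makeLibraryPath` turns a list of packages into a colon-separated `…/lib` search path, which is the usual way to let pip-installed CUDA wheels find Nix-provided libraries at runtime. The package list here is illustrative, not mike's actual one:

```nix
{ pkgs ? import <nixpkgs> { } }:

pkgs.mkShell {
  packages = [ pkgs.python3 ];

  # makeLibraryPath produces "<store path>/lib:<store path>/lib:...",
  # so pip-installed wheels can dlopen Nix's libstdc++/zlib at runtime.
  LD_LIBRARY_PATH = pkgs.lib.makeLibraryPath [
    pkgs.stdenv.cc.cc.lib  # provides libstdc++.so.6
    pkgs.zlib
  ];
}
```

This is exactly the purity/practicality trade-off the EXPLANATION.md bullet mentions: the wheels stay impure, but the libraries they link against are pinned by Nix.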
@glepage:matrix.org Gaétan Lepage: Well, if you are on Ubuntu and using a nix shell for Python development, just use uv (either through uv2nix or directly) 19:44:51
@glepage:matrix.org Gaétan Lepage: Here is an example of a flake.nix which relies on uv for the Python stuff: https://github.com/GaetanLepage/acoustix/blob/master/flake.nix 19:45:36
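[Editor's note] A minimal sketch of the approach Gaétan describes — Nix pins the tools, uv manages the Python environment. This is not the linked flake, just an illustration:

```nix
{ pkgs ? import <nixpkgs> { } }:

pkgs.mkShell {
  # Nix provides uv and the interpreter; uv then installs prebuilt CUDA
  # torch wheels into a project venv instead of compiling from source,
  # sidestepping the OOM problem entirely.
  packages = [
    pkgs.uv
    pkgs.python312
  ];
}
```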
@h4k:matrix.org mike: https://github.com/SPUTNIKAI/sovereign-lila-e8/pull/4 this is what i have running, let me check your code 19:56:45
@ctheune:matrix.flyingcircus.io Theuni changed their display name from Christian Theune to Theuni. 19:59:09
7 Mar 2026
@skainswo:matrix.org Samuel Ainsworth:

Hi folks, I've been working on compiling XLA in nix with CUDA support, but I'm running into this issue of the current nixpkgs glibc containing symbols (incl. cospif, rsqrtf, sinpi, cospi, rsqrt) that conflict with CUDA defined symbols:

glibc 2.42 (via __MATHCALL → __MATHDECL_1_IMPL):
extern float sinpif(float __x) noexcept(true); // __THROW → noexcept(true) in C++

CUDA (crt/math_functions.h):
extern __host__ __device__ float sinpif(float x); // no noexcept

has anyone else encountered this? if so how did you handle it?

05:46:03
@skainswo:matrix.org Samuel Ainsworth: apparently CUDA does not support these glibc versions (https://forums.developer.nvidia.com/t/error-exception-specification-is-incompatible-for-cospi-sinpi-cospif-sinpif-with-glibc-2-41/323591/2), but nixpkgs master is already on glibc 2.42. how do we reconcile this? 05:48:10
@connorbaker:matrix.org connor (burnt/out) (UTC-8): Ugh, I thought I imagined this, ughhhhhhhhhhhhh 06:52:02
@connorbaker:matrix.org connor (burnt/out) (UTC-8): Patch NVIDIA's stuff? 06:54:24
@connorbaker:matrix.org connor (burnt/out) (UTC-8): Wait, no, this feels too familiar 06:54:34
@connorbaker:matrix.org connor (burnt/out) (UTC-8): https://github.com/NixOS/nixpkgs/blob/3bb5f20c47dcfcab9acb3be810f42ca1261b49e2/pkgs/development/cuda-modules/packages/cuda_nvcc.nix#L167 06:55:00
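[Editor's note] The linked cuda_nvcc.nix shows nixpkgs already patching NVIDIA headers for a closely related conflict. A hypothetical overlay sketch of the same idea applied to the sinpif/cospif declarations — the substituted text, the exact header path, and the choice of cuda_nvcc as the package carrying `crt/math_functions.h` are all assumptions, not the actual nixpkgs code:

```nix
# HYPOTHETICAL sketch: drop CUDA's declaration of sinpif so the compiler
# only ever sees glibc 2.42's noexcept-qualified one. The declaration text
# below is taken from the chat quote and may not match the header verbatim.
final: prev: {
  cudaPackages = prev.cudaPackages.overrideScope (cudaFinal: cudaPrev: {
    cuda_nvcc = cudaPrev.cuda_nvcc.overrideAttrs (old: {
      postPatch = (old.postPatch or "") + ''
        substituteInPlace include/crt/math_functions.h \
          --replace-fail \
            "extern __host__ __device__ float sinpif(float x);" \
            "/* sinpif declaration removed: conflicts with glibc 2.41+ */"
      '';
    });
  });
}
```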
@glepage:matrix.org Gaétan Lepage: Yep, I'm very proud of this. I will not take any additional questions. 10:47:37
@glepage:matrix.org Gaétan Lepage:

This PR was harder to finish than I expected. It's now ready and fixes a bunch of cudaSupport package builds.
https://github.com/NixOS/nixpkgs/pull/495151

(waiting for reviews)

10:49:39
@skainswo:matrix.org Samuel Ainsworth: ooh, thanks! so far i've been trying to use clang as the host compiler since that's what XLA says they support, and iirc i got errors with gcc in the cpu-only build. so maybe i'm getting errors because of mixing in clang? is clang as host compiler a supported combo? 12:59:03
@glepage:matrix.org Gaétan Lepage: GCC is definitely the default in nixpkgs for Linux. I'd try to stick to that as much as possible. 13:06:32
@skainswo:matrix.org Samuel Ainsworth: Ok, roger that 13:08:35
@skainswo:matrix.org Samuel Ainsworth: ok, iiuc XLA uses a "cuda_clang" configuration such that clang compiles CUDA code directly, not nvcc 18:04:35


