| 28 Feb 2026 |
Gaétan Lepage | RE: vllm update to 0.16.0
Although vllm 0.16.0 initially depended on torch==2.10.0, this has since changed.
Indeed, upstream force-pushed the tag to revert this change and go back to requiring torch==2.9.1.
I learnt about this by chance while discussing it with a coworker. | 10:28:07 |
| 2 Mar 2026 |
| Kevin Mittman (jetlagged/UTC+8) changed their display name from Kevin Mittman (UTC+8) to Kevin Mittman (jetlagged/UTC+8). | 01:12:50 |
hexa (UTC+1) | @SomeoneSerge (back on matrix) https://hydra.nixos.org/build/323045198 👋 | 15:28:08 |
| Maren Horner joined the room. | 20:37:42 |
Gaétan Lepage | https://github.com/nixos-cuda/infra/commit/eeb5eb95d8eb0abba5fa50d14500c0d20a2e5d12
🥲 | 23:46:37 |
Lun | 😭 | 23:47:25 |
hexa (UTC+1) | Yeah, I'm afraid unless we GC harder this is going to be a tough sell. | 23:51:53 |
| 3 Mar 2026 |
SomeoneSerge (matrix works sometimes) | =\ | 16:23:53 |
caniko | any chance to build gimp and handbrake? | 19:53:02 |
| 5 Mar 2026 |
kaya 𖤐 | Not sure if it's been mentioned here before, but for anyone affected by flash-attn builds OOM-ing: I noticed an upstream patch that tries to counter it https://github.com/Dao-AILab/flash-attention/pull/2079
Might be possible to apply it to the nix package 🤔 | 13:26:33 |
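If someone does want to try that upstream PR locally before it lands, a rough overlay sketch might look like the following. This is only an illustration: the `flash-attn` attribute path and the hash are placeholders, not taken from nixpkgs.

```nix
# Hypothetical overlay sketch: apply the upstream flash-attention PR as a
# patch. Attribute name and hash are placeholders to be verified.
final: prev: {
  python3Packages = prev.python3Packages.overrideScope (pyFinal: pyPrev: {
    flash-attn = pyPrev.flash-attn.overrideAttrs (old: {
      patches = (old.patches or [ ]) ++ [
        (prev.fetchpatch {
          url = "https://github.com/Dao-AILab/flash-attention/pull/2079.patch";
          hash = prev.lib.fakeHash; # replace with the real hash on first build
        })
      ];
    });
  });
}
```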
Robbie Buxton | Omg the bane of my existence | 16:39:35 |
Robbie Buxton | That has oomed on an ungodly amount of RAM | 16:40:08 |
Robbie Buxton | Nice to see they are trying to fix it | 16:40:26 |
| 6 Mar 2026 |
connor (burnt/out) (UTC-8) | I found zram gave an amazing compression ratio (I think the data being allocated by NVCC was all zeros) so even though it allocated upwards of .25TB of RAM I didn’t need to reduce the number of jobs | 05:13:31 |
Gaétan Lepage | I enabled this on our builders. | 10:18:40 |
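For reference, enabling zram swap on a NixOS builder is a small config change. A minimal sketch, assuming a NixOS host (the `memoryPercent` value here is illustrative, not what the builders actually use):

```nix
# Hypothetical NixOS snippet: zram swap so that highly compressible
# allocations (e.g. nvcc's mostly-zero pages) compress in RAM instead
# of triggering the OOM killer.
{
  zramSwap = {
    enable = true;
    # Size the zram device generously; a high percentage is reasonable
    # when the swapped data compresses this well.
    memoryPercent = 150;
  };
}
```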
connor (burnt/out) (UTC-8) | Yay talking at state of the union is over, sorry I asked for packages we test specifically and then mentioned none of them | 18:59:31 |
mike | hi all | 19:02:58 |
mike | any guide for using nix on ubuntu for cuda torch? | 19:03:27 |
mike | i am basically having out of memory compiling it all on my machine | 19:11:16 |
mike | ok i got it working, will document once i've run all tests. | 19:19:50 |
mike | ATTEMPTS.md - Chronicles all 8 attempts with re-evaluation using current knowledge:
1. Nixpkgs torch (no CUDA)
2. Build from source (OOM)
3. Nix Python + pip (glibc conflicts)
4. System Python + pip (works but not reproducible)
5. fetchurl wheels (incomplete)
6. Copy venv (incomplete)
7. buildPythonPackage test (learning)
8. Hybrid solution (SUCCESS)
EXPLANATION.md - Explains WHY the solution works:
- The glibc problem and how we solved it
- Why Nix Python + pip wheels is the right approach
- How makeLibraryPath simplifies library management
- The trade-off between purity and practicality
| 19:20:23 |
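The hybrid approach described above (Nix-provided Python, pip wheels, and `makeLibraryPath` for native libraries) can be sketched roughly like this. This is a guess at the shape of mike's setup, not his actual code; the package list is illustrative:

```nix
# Hypothetical shell.nix sketch: Nix Python + pip wheels in a venv,
# with makeLibraryPath exposing the native libraries the wheels expect.
{ pkgs ? import <nixpkgs> { } }:

pkgs.mkShell {
  packages = [ pkgs.python3 pkgs.python3Packages.pip ];

  # makeLibraryPath joins each package's /lib directory with ':',
  # so manylinux wheels can find libstdc++ and friends at runtime.
  LD_LIBRARY_PATH = pkgs.lib.makeLibraryPath [
    pkgs.stdenv.cc.cc.lib # libstdc++
    pkgs.zlib
  ];
}
```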
Gaétan Lepage | Well, if you are on Ubuntu, and using a nix shell for python development, just use uv (either through uv2nix or directly) | 19:44:51 |
Gaétan Lepage | Here is an example of a flake.nix which relies on uv for the python stuff: https://github.com/GaetanLepage/acoustix/blob/master/flake.nix | 19:45:36 |
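The linked flake is the authoritative example; as a quick idea of the shape, a minimal dev shell that delegates Python management to uv might look like this (system and structure here are illustrative, not copied from that repo):

```nix
# Hypothetical flake.nix sketch: Nix provides uv, uv manages the
# Python environment (run `uv sync` inside the shell).
{
  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";

  outputs = { self, nixpkgs }:
    let
      pkgs = nixpkgs.legacyPackages.x86_64-linux;
    in {
      devShells.x86_64-linux.default = pkgs.mkShell {
        packages = [ pkgs.uv ];
      };
    };
}
```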
mike | https://github.com/SPUTNIKAI/sovereign-lila-e8/pull/4 this is what i have running, let me check your code | 19:56:45 |
| Theuni changed their display name from Christian Theune to Theuni. | 19:59:09 |
| 7 Mar 2026 |
Samuel Ainsworth | Hi folks, I've been working on compiling XLA in nix with CUDA support, but I'm running into an issue where the current nixpkgs glibc declares symbols (incl. cospif, rsqrtf, sinpi, cospi, rsqrt) that conflict with CUDA-defined symbols:
glibc 2.42 (via __MATHCALL → __MATHDECL_1_IMPL): extern float sinpif(float __x) noexcept(true); // __THROW → noexcept(true) in C++
CUDA (crt/math_functions.h): extern __host__ __device__ float sinpif(float x); // no noexcept
has anyone else encountered this? If so, how did you handle it?
| 05:46:03 |
Samuel Ainsworth | apparently CUDA does not support these glibc versions (https://forums.developer.nvidia.com/t/error-exception-specification-is-incompatible-for-cospi-sinpi-cospif-sinpif-with-glibc-2-41/323591/2) but nixpkgs master is already on glibc 2.42. how do we reconcile this? | 05:48:10 |