!eWOErHSaiddIbsUNsJ:nixos.org

NixOS CUDA

282 Members
CUDA packages maintenance and support in nixpkgs | https://github.com/orgs/NixOS/projects/27/ | https://nixos.org/manual/nixpkgs/unstable/#cuda

57 Servers



10 Dec 2025
@arilotter:matrix.orgAri Lotterhm looks like vllm fails to build because outlines has a test that imports libcuda.so on collection..22:03:19
@ss:someonex.netSomeoneSerge (back on matrix)https://github.com/NixOS/nixpkgs/pull/465751#issuecomment-360411365223:30:45
@apyh:matrix.orgapyhhahaha you're already ahead of me!23:31:20
@ss:someonex.netSomeoneSerge (back on matrix)Gaetan has been on it23:32:00
@apyh:matrix.orgapyhyeah, seems like there's a clear path to patch / upstream a fix to llama cpp python to make it lazy23:33:22
@apyh:matrix.orgapyhbtw i posted logs about that torch nvrtc thing23:33:36
11 Dec 2025
@glepage:matrix.orgGaétan Lepage Yes, but it's not that straightforward, as they initialize some top-level constants at module import time with the content of the loaded library...
I still think that it would be the best solution to this issue.
09:19:45
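The lazy-loading fix being discussed can be sketched in Python: defer the `ctypes` load of `libcuda.so` from import time to first use, so plain imports and pytest collection succeed on machines without a CUDA driver. All names here are illustrative, not the actual outlines/llama-cpp-python code.

```python
# Illustrative sketch only: load libcuda.so on first use, not at import time.
import ctypes
import functools

@functools.lru_cache(maxsize=None)
def _load_cuda():
    # Raises OSError when no CUDA driver is present; crucially, a plain
    # `import` of this module never triggers the load.
    return ctypes.CDLL("libcuda.so")

def cuda_available() -> bool:
    try:
        _load_cuda()
    except OSError:
        return False
    return True

# For the hard part Gaétan mentions (top-level constants filled from the
# loaded library), PEP 562's module-level __getattr__ keeps them lazy too:
def __getattr__(name):
    if name == "CUDA_VERSION":
        version = ctypes.c_int()
        _load_cuda().cuDriverGetVersion(ctypes.byref(version))
        return version.value
    raise AttributeError(name)
```

The `__getattr__` hook only fires for attributes not already defined, so existing eager constants have to be deleted from the module body for this to take effect.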
@youthlic:mozilla.orgyouthlic changed their profile picture.14:36:35
13 Dec 2025
@kaya:catnip.eekaya 𖤐 I recently found out about a feature just for this: pkgsCuda
Could be the right thing for you.
23:06:15
14 Dec 2025
@tillerino:matrix.org@tillerino:matrix.org left the room.10:49:43
@suua:matrix.orgsuua joined the room.13:32:40
@ss:someonex.netSomeoneSerge (back on matrix)Not an official team stance, but personally I hope we deprecate it soon enough15:32:40
@ss:someonex.netSomeoneSerge (back on matrix)Yes!15:33:21
15 Dec 2025
@pdealbera:matrix.orgpdealbera joined the room.02:45:00
@pdealbera:matrix.orgpdealbera

Hi! I am encountering this issue when using the CUDA cache for NixOS. It's probably an issue on my end, but I wanted to know if anybody encountered the same thing, because it was working just hours ago:

warning: error: unable to download 'https://cache.nixos-cuda.org/mhf691zwwjrqi8b6an14pblyqbzwn1v2.narinfo': Could not connect to server (7) Failed to connect to cache.nixos-cuda.org port 443 after 6 ms: Could not connect to server; retrying in 258 ms

02:47:07
@hexa:lossy.networkhexa (UTC+1)looks up from here02:47:37
@hexa:lossy.networkhexa (UTC+1)might be a transient path issue02:47:46
@pdealbera:matrix.orgpdealberaDid you reproduce the same issue?02:49:57
@hexa:lossy.networkhexa (UTC+1)
❯ curl https://cache.nixos-cuda.org/mhf691zwwjrqi8b6an14pblyqbzwn1v2.narinfo
missed hash⏎
02:55:27
@pdealbera:matrix.orgpdealbera

Thanks! Not the same thing, I can't reach the host:

❯ curl https://cache.nixos-cuda.org/mhf691zwwjrqi8b6an14pblyqbzwn1v2.narinfo
curl: (7) Failed to connect to cache.nixos-cuda.org port 443 after 675 ms: Could not connect to server
02:59:52
@pdealbera:matrix.orgpdealberaBut that means it's probably a thing on my end.03:00:06
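The two failure modes here differ (hexa reaches the host; pdealbera gets curl error 7), and splitting DNS resolution from the TCP connect tells them apart. A small, generic triage helper, hostnames purely as examples:

```python
import socket

def triage(host: str, port: int = 443, timeout: float = 5.0) -> str:
    """Classify a connection failure: DNS lookup, TCP connect, or OK."""
    try:
        socket.getaddrinfo(host, port)
    except socket.gaierror:
        return "dns-failure"
    try:
        conn = socket.create_connection((host, port), timeout=timeout)
    except OSError:
        return "connect-failure"
    conn.close()
    return "ok"

# e.g. triage("cache.nixos-cuda.org") distinguishes "name doesn't resolve"
# from "name resolves but port 443 is unreachable" (what curl reports as 7).
```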
@hexa:lossy.networkhexa (UTC+1)the server is hosted in helsinki at hetzner fwiw03:01:23
@connorbaker:matrix.orgconnor (burnt/out) (UTC-8)Slightly off topic but for those of you who use Hydra or nix-eval-jobs with lots of eval time fetchers or substitution, you may be interested in some WIP I’ve been doing to improve that use case https://gist.github.com/ConnorBaker/9e31d3b08ff6d4ac841928412131fe1509:42:32
@connorbaker:matrix.orgconnor (burnt/out) (UTC-8)Numbers from doing a shallow eval (not forcing recursion) of Haskell.nix’s hydraJobs which has a number of flake inputs (and I think also does IFD?)
09:46:39
@connorbaker:matrix.orgconnor (burnt/out) (UTC-8)I’m also trying to look into using Intel VTune to get a better idea of Nix bottlenecks/areas for improvement
VTune is currently packaged in Nixpkgs through the Intel-oneapi stuff but I couldn’t get it working without using the latest version. I’ll probably try upstreaming the changes at some point unless someone beats me to it.
09:48:44
@yorik.sar:matrix.orgyorik.sarDid you by any chance run a comparison for a more common use case, e.g. evaluating a sizeable NixOS config? Just to see what those locks do to a less parallel workload.10:53:30
@yorik.sar:matrix.orgyorik.sarI’m surprised to see the parser there - how much code were you evaluating?10:54:09
@yorik.sar:matrix.orgyorik.sar I think I already saw some lock implementation in Nix code, probably better to reuse that one. Also, Nix code seems to prefer RAII (smth like { auto _thelock = lock.get(); … }) rather than passing continuation to a function (withLock(…)). 10:56:46
@yorik.sar:matrix.orgyorik.sar

I'd like to do further work to deduplicate queries for .narinfo and the like, since Nix already generates quite the network storm by firing them off in serial.
I wonder if Nix uses HTTP/2 there. I think with stream multiplexing, all requests could essentially fit in one pack of packets.

10:59:07
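The deduplication idea (never fire the same .narinfo query twice while one is in flight) can be sketched as a shared-future cache; the names here are hypothetical, not Nix's implementation:

```python
import threading
from concurrent.futures import Future

class DedupCache:
    """Concurrent lookups for the same key share one underlying fetch."""
    def __init__(self, fetch):
        self._fetch = fetch      # the real (slow) lookup, e.g. an HTTP GET
        self._lock = threading.Lock()
        self._futures = {}       # key -> Future shared by all callers

    def get(self, key):
        with self._lock:
            fut = self._futures.get(key)
            owner = fut is None
            if owner:
                fut = self._futures[key] = Future()
        if owner:
            try:
                fut.set_result(self._fetch(key))
            except Exception as e:   # propagate fetch failures to waiters
                fut.set_exception(e)
        return fut.result()
```

Combined with HTTP/2 stream multiplexing on the transport side, many narinfo lookups could then share both the result cache and a single connection.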


