| 19 Oct 2025 |
connor (burnt/out) (UTC-8) | And if anyone wants to run the demo, I've packaged it with a flake: https://github.com/ConnorBaker/ContinuousSR
You still need to download the model from their Google Drive (https://github.com/ConnorBaker/ContinuousSR?tab=readme-ov-file#pretrained-model)
$ nix develop --command -- python demo.py --input butterflyx4.png --model ContinuousSR.pth --scale 4,4 --output output.png
/nix/store/i8vsz78lc405s5rifmz3p1lpvzhh1x74-python3-3.13.7-env/lib/python3.13/site-packages/timm/models/layers/__init__.py:48: FutureWarning: Importing from timm.models.layers is deprecated, please import via timm.layers
warnings.warn(f"Importing from {__name__} is deprecated, please import via timm.layers", FutureWarning)
/nix/store/i8vsz78lc405s5rifmz3p1lpvzhh1x74-python3-3.13.7-env/lib/python3.13/site-packages/torch/functional.py:554: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at /build/pytorch/aten/src/ATen/native/TensorShape.cpp:4322.)
return _VF.meshgrid(tensors, **kwargs) # type: ignore[attr-defined]
/home/connorbaker/ContinuousSR/models/gaussian.py:245: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.detach().clone() or sourceTensor.detach().clone().requires_grad_(True), rather than torch.tensor(sourceTensor).
get_xyz = torch.tensor(get_coord(lr_h * 2, lr_w * 2)).reshape(lr_h * 2, lr_w * 2, 2).cuda()
finished!
| 19:07:16 |
| 20 Oct 2025 |
BerriJ | This looks super cool! Reminds me of the new 100x digital zoom feature of the Pixel phones. So nice to see something open for that :) | 06:51:27 |
connor (burnt/out) (UTC-8) | That uses something closer to multi-frame super-resolution (which is specifically what I’m interested in — I’d like to rewrite ContinuousSR to support that use case). Here’s a reproducer for an earlier version of the software Google used around the Pixel 3 era: https://github.com/Jamy-L/Handheld-Multi-Frame-Super-Resolution | 14:24:13 |
connor (burnt/out) (UTC-8) | This was also a great way to find out that on master the wrong version of cuDNN is selected when building for Jetson (we get the x86 binary) — that’s mostly why the flake is using my PR with CUDA 13 and the packaging refactor | 14:29:02 |
| 21 Oct 2025 |
connor (burnt/out) (UTC-8) | Started work on CUDA-legacy for everyone who needs support for older versions of CUDA https://github.com/nixos-cuda/cuda-legacy/pull/1 | 00:33:26 |
connor (burnt/out) (UTC-8) | There might be interesting stuff in the second commit if you're unfamiliar with flake-parts' partitions functionality | 06:22:13 |
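For anyone who hasn't seen partitions before, here is a minimal sketch of what they look like in a flake-parts module; the "dev" partition name and the ./dev paths are made up for illustration and not taken from the PR:

{ inputs, ... }:
{
  imports = [ inputs.flake-parts.flakeModules.partitions ];

  # Inputs that only the dev partition needs live in a sub-flake, so consumers
  # of the main flake never have to fetch or evaluate them.
  partitions.dev = {
    extraInputsFlake = ./dev;
    module = ./dev/flake-module.nix;
  };

  # These top-level outputs are deferred to the dev partition and only
  # evaluated when somebody actually asks for them.
  partitionedAttrs = {
    checks = "dev";
    devShells = "dev";
  };
}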
connor (burnt/out) (UTC-8) | SomeoneSerge (back on matrix)
what if 👉👈
you merged my CUDA 13 PR 🥺 | 06:23:32 |
SomeoneSerge (back on matrix) | In reply to @connorbaker:matrix.org SomeoneSerge (back on matrix)
what if 👉👈
you merged my CUDA 13 PR 🥺
'most there | 22:00:59 |
| 22 Oct 2025 |
Niclas Overby Ⓝ | Are there any good resources for getting CUDA projects, built with CUDA packages from Nixpkgs, running with libcuda.so provided by a non-NixOS host?
Can it be done with LD_LIBRARY_PATH or LD_PRELOAD? | 08:08:09 |
hacker1024 | nixGL has worked for me in the past | 08:56:54 |
hacker1024 | Technically that finds a kernel-compatible libcuda.so in Nixpkgs | 08:57:24 |
Robbie Buxton | In reply to @niclas:overby.me Are there any good resources for getting CUDA projects, built with CUDA packages from Nixpkgs, running with libcuda.so provided by a non-NixOS host? Can it be done with LD_LIBRARY_PATH or LD_PRELOAD?
If you create a /run/opengl-driver/lib folder (it might be called something slightly different) and symlink all the CUDA kernel mode drivers into there, it should work out of the box | 14:40:53 |
Robbie Buxton | You need to add this folder to the rpaths of those drivers too, otherwise they can't find each other | 14:41:38 |
Robbie Buxton | I.e. libcuda.so tries to load something else with the Nix linker | 14:42:05 |
connor (burnt/out) (UTC-8) | Both nixGL and nixglhost should work. I've also had success doing what Robbie outlined.
I've also been able to export LD_LIBRARY_PATH and that's worked as well | 15:02:28 |
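A rough sketch of the approach described above, for a non-NixOS host. The /usr/lib/x86_64-linux-gnu path is an assumption (the driver library location differs per distribution), and the patchelf step is only needed if the driver libraries can't locate each other:

# Nix-built CUDA binaries already have /run/opengl-driver/lib on their runpath.
sudo mkdir -p /run/opengl-driver/lib
# Copy (or symlink) the host's NVIDIA driver libraries into it.
sudo cp -a /usr/lib/x86_64-linux-gnu/libcuda.so* /run/opengl-driver/lib/
sudo cp -a /usr/lib/x86_64-linux-gnu/libnvidia*.so* /run/opengl-driver/lib/
# If the libraries fail to find each other, add the directory to their rpaths
# (patchelf --add-rpath needs a reasonably recent patchelf).
sudo patchelf --add-rpath /run/opengl-driver/lib /run/opengl-driver/lib/libcuda.so.*

The LD_LIBRARY_PATH route is simpler when it works: point it at the host's driver directory before running the Nix-built binary, e.g.

export LD_LIBRARY_PATH=/usr/lib/x86_64-linux-gnu${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}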
| 23 Oct 2025 |
connor (burnt/out) (UTC-8) | Got the majority of redists in https://github.com/nixos-cuda/cuda-legacy/pull/1; still need to verify stuff builds and add more redists to the older package sets
Everything being packaged as-is is nice: I don't have to care that nsight_systems uses an old version of Qt with known vulnerabilities | 01:28:19 |
hacker1024 | If anyone happens to be using datacenter drivers, be aware that the GSP firmware is not loading. This might lead to unexpected performance problems.
https://github.com/NixOS/nixpkgs/issues/454772 | 04:58:50 |
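One way to check whether GSP firmware is actually in use on a given machine (assuming a reasonably recent driver; the field reads N/A when GSP is not loaded):

nvidia-smi -q | grep -i "GSP Firmware"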
| prince213 joined the room. | 13:13:15 |
| 24 Oct 2025 |
Daniel Fahey | Anyone with a phat rig (16+ cores, 100+ GB RAM) able to test building this fix? https://github.com/NixOS/nixpkgs/pull/455364 | 20:57:01 |
Gaétan Lepage | Yes | 22:26:21 |
Gaétan Lepage | python313Packages.vllm built successfully with cudaSupport! | 23:13:09 |
Gaétan Lepage | I started an extensive nixpkgs-review with cudaSupport = true but it will take a while to complete. | 23:15:33 |
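For anyone wanting to run a similar review themselves, a hedged sketch; this exact invocation is a guess, not necessarily what Gaétan used:

# Build everything PR 455364 touches with CUDA support enabled across nixpkgs.
nixpkgs-review pr 455364 --extra-nixpkgs-config '{ cudaSupport = true; }'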
| 25 Oct 2025 |
| @washort:greyface.org left the room. | 02:21:45 |
Daniel Fahey | Thanks! I think it's ready to merge then; I've marked the PR as ready for review | 12:43:44 |
Gaétan Lepage | I was away from my computer, but I managed to run nixpkgs-review successfully. Good job, Daniel Fahey!
I see that happysalade merged the PR. | 18:17:00 |
Daniel Fahey | No problem, thanks for running nixpkgs-review and good to know TorchRL and KServe are OK | 18:24:42 |
Daniel Fahey | btw I'm having a quick look into https://hydra.nixos-cuda.org/build/1784 | 18:29:42 |