| 17 Oct 2025 |
Gaétan Lepage | Thanks a lot for your work connor (he/him) (UTC-7)!!!
The PyTorch bump needs the library, not the Python bindings. | 08:02:56 |
| 18 Oct 2025 |
| @ihar.hrachyshka:matrix.org left the room. | 00:13:10 |
connor (burnt/out) (UTC-8) | Various things which need to be fixed outside of the CUDA 13 PR:
- patchelf breaking Jetson binaries because of invalid binary modifications (there’s an open issue for this on Nixpkgs IIRC)
- packages with stubs should provide a setup hook which replaces the stub in the run path with wherever the library will actually be found at runtime, without duplicating run-path entries (this is part of what started me making the arrayUtilities setup hooks; see the sketch below)
- the cuda compat library’s precedence in run-path entries is, or can be, lower than /run/opengl-driver/lib, so it won’t be used; the hook needs updating so the compat path sorts ahead of the driver path (the other driving force behind arrayUtilities; same shape as the sketch below)
- support for Clang as a host compiler for backendStdenv (example below)
- investigate the cc-wrapper scripts to see if there’s anything relevant for NVCC (like a random seed, or anything else that enables deterministic compilation and linking)
There are others but I can’t remember them :/
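To make the run-path items concrete, here’s a rough sketch of the kind of hook I mean (the function name is made up and this isn’t the actual arrayUtilities code; patchelf’s --print-rpath/--set-rpath are real):
replaceCudaStubRunpath() {
  # Hypothetical hook: rewrite CUDA stub run-path entries to the runtime
  # driver location, dropping duplicate entries along the way.
  local elf=$1 old new entry
  old=$(patchelf --print-rpath "$elf")
  new=""
  for entry in ${old//:/ }; do
    # Stub libraries live in a lib/stubs directory; at runtime the real
    # library is provided by the driver under /run/opengl-driver/lib.
    [[ $entry == */lib/stubs ]] && entry=/run/opengl-driver/lib
    case ":$new:" in
    *":$entry:"*) ;; # already present: skip, don’t duplicate
    *) new=${new:+$new:}$entry ;;
    esac
  done
  patchelf --set-rpath "$new" "$elf"
}
The cuda compat fix is the same shape: filter the existing run path and re-emit it with the compat directory ahead of /run/opengl-driver/lib so the loader prefers it.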
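For the Clang item, the nvcc side is just the -ccbin flag (file names here are only illustrative):
nvcc -ccbin clang++ -c kernel.cu -o kernel.o
The work is in getting backendStdenv’s compiler wrappers to set that up consistently.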
| 06:38:17 |
| devusb joined the room. | 17:55:08 |
| 19 Oct 2025 |
| kaya 𖤐 changed their display name from kaya to kaya 𖤐. | 17:17:43 |
connor (burnt/out) (UTC-8) | https://github.com/peylnog/ContinuousSR is incredible | 18:48:03 |
connor (burnt/out) (UTC-8) | And if anyone wants to run the demo, I've packaged it with a flake: https://github.com/ConnorBaker/ContinuousSR
You'll still need to download the model from their Google Drive (https://github.com/ConnorBaker/ContinuousSR?tab=readme-ov-file#pretrained-model)
$ nix develop --command -- python demo.py --input butterflyx4.png --model ContinuousSR.pth --scale 4,4 --output output.png
/nix/store/i8vsz78lc405s5rifmz3p1lpvzhh1x74-python3-3.13.7-env/lib/python3.13/site-packages/timm/models/layers/__init__.py:48: FutureWarning: Importing from timm.models.layers is deprecated, please import via timm.layers
warnings.warn(f"Importing from {__name__} is deprecated, please import via timm.layers", FutureWarning)
/nix/store/i8vsz78lc405s5rifmz3p1lpvzhh1x74-python3-3.13.7-env/lib/python3.13/site-packages/torch/functional.py:554: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at /build/pytorch/aten/src/ATen/native/TensorShape.cpp:4322.)
return _VF.meshgrid(tensors, **kwargs) # type: ignore[attr-defined]
/home/connorbaker/ContinuousSR/models/gaussian.py:245: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.detach().clone() or sourceTensor.detach().clone().requires_grad_(True), rather than torch.tensor(sourceTensor).
get_xyz = torch.tensor(get_coord(lr_h * 2, lr_w * 2)).reshape(lr_h * 2, lr_w * 2, 2).cuda()
finished!
| 19:07:16 |
| 20 Oct 2025 |
BerriJ | This looks super cool! Reminds me of the new 100x digital zoom feature of the Pixel phones. So nice to see something open for that :) | 06:51:27 |
connor (burnt/out) (UTC-8) | That uses something closer to multi-frame super-resolution (which is specifically what I’m interested in — I’d like to rewrite ContinuousSR to support that use case). Here’s a reproducer for an earlier version of the software Google used around the Pixel 3 era: https://github.com/Jamy-L/Handheld-Multi-Frame-Super-Resolution | 14:24:13 |
connor (burnt/out) (UTC-8) | This was also a great way to find out that on master the wrong version of cuDNN is selected when building for Jetson (we get the x86 binary) — that’s mostly why the flake is using my PR with CUDA 13 and the packaging refactor | 14:29:02 |
| 21 Oct 2025 |
connor (burnt/out) (UTC-8) | Started work on cuda-legacy for everyone who needs support for older versions of CUDA: https://github.com/nixos-cuda/cuda-legacy/pull/1 | 00:33:26 |
connor (burnt/out) (UTC-8) | There might be interesting stuff in the second commit if you’re unfamiliar with flake-parts’ partitions functionality
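Roughly: partitions let you hang development-only inputs off a separate flake so plain consumers never evaluate or fetch them. A minimal sketch, assuming flake-parts’ documented partitions module (the "dev" name and ./dev paths are illustrative, not the actual cuda-legacy layout):
{
  inputs.flake-parts.url = "github:hercules-ci/flake-parts";
  outputs = inputs:
    inputs.flake-parts.lib.mkFlake { inherit inputs; } {
      systems = [ "x86_64-linux" ];
      imports = [ inputs.flake-parts.flakeModules.partitions ];
      # Only these outputs come from the "dev" partition, so its extra
      # inputs are fetched only when someone actually builds them.
      partitionedAttrs.devShells = "dev";
      partitionedAttrs.checks = "dev";
      partitions.dev.extraInputsFlake = ./dev;
      partitions.dev.module = { imports = [ ./dev/flake-module.nix ]; };
    };
}
| 06:22:13 |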
connor (burnt/out) (UTC-8) | SomeoneSerge (back on matrix)
what if 👉👈
you merged my CUDA 13 PR 🥺 | 06:23:32 |
SomeoneSerge (back on matrix) | In reply to @connorbaker:matrix.org SomeoneSerge (back on matrix)
what if 👉👈
you merged my CUDA 13 PR 🥺
'most there | 22:00:59 |