| 3 Mar 2023 |
SomeoneSerge (back on matrix) | Another potential #FalsePositive: we rebuild scikit-image because it depends on numba, which changes with cudaSupport | 15:54:07 |
| 4 Mar 2023 |
SomeoneSerge (back on matrix) | Turned off the hercules builds. I finally skimmed the part of the manual where it says it supports onSchedule jobs. I'm hoping I can ditch github actions and bash scripts, and make a smarter schedule where I'd build e.g. 8.6 more frequently than the big default set of capabilities 🤔 | 00:50:21 |
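(The onSchedule jobs mentioned above are declared in the flake's `herculesCI` attribute. A minimal sketch based on the Hercules CI manual's schema; the job name `buildSm86` and the output selection are made up for illustration, not taken from the actual repo:)

```nix
{
  herculesCI = { ... }: {
    onSchedule.buildSm86 = {
      # Run nightly at 02:00 UTC; `when` supports minute, hour, dayOfWeek.
      when = {
        minute = 0;
        hour = [ 2 ];
      };
      # Hypothetical output set: only the sm_86 jobs, built more often
      # than the big default capability set.
      outputs = { ... }: {
        # e.g. restrict to a capability-specific package set here
      };
    };
  };
}
```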
| 5 Mar 2023 |
SomeoneSerge (back on matrix) | Yay, I'm beginning to understand how to use hercules "properly" | 14:18:12 |
SomeoneSerge (back on matrix) | Kevin Mittman: can't seem to find nvJPEG2k in the redist packages | 22:46:11 |
Kevin Mittman (EOY sleep) | IIRC the payload exists but the directory is not browse-able (yet). I'll check on it tomorrow | 22:48:57 |
SomeoneSerge (back on matrix) | I see. It's not in the manifests either | 22:50:05 |
SomeoneSerge (back on matrix) | Thx | 22:50:17 |
Kevin Mittman (EOY sleep) | If you are looking in cuda/redist it's not there because it's a separate product. Not to be confused with nvjpeg. | 22:58:12 |
SomeoneSerge (back on matrix) | I was looking for nvjpeg2k specifically | 22:58:52 |
Kevin Mittman (EOY sleep) | Similarly for libcusparse vs cuSPARSELt | 22:58:56 |
| 6 Mar 2023 |
connor (he/him) | Is there a board/project or other organizational tool used to track CUDA-related items specifically? | 18:01:39 |
SomeoneSerge (back on matrix) | Nope, and imo we've been needing one | 18:20:30 |
SomeoneSerge (back on matrix) | Would be nice if the github "team" had its own "github project" (that trello-like thing) | 18:20:58 |
SomeoneSerge (back on matrix) | connor (he/him): taking the magma fix into account, do you find you still need 8.0? | 19:06:49 |
connor (he/him) | I’ll need 8.0 in general when I’m training stuff on an A100, but outside that no I don’t | 19:12:03 |
hexa | https://github.com/NixOS/nixpkgs/pull/219104 | 19:57:34 |
hexa | would anyone wager a guess why our torch source build would be 10 times slower than torch-bin? | 19:58:02 |
SomeoneSerge (back on matrix) | Not sure I understand: 10× slower than building pytorch without nix, or 10× slower than unpacking a prebuilt wheel? Oh, I read it | 20:11:52 |
SomeoneSerge (back on matrix) | connor (he/him): one more thing I find we miss very much is automated benchmarks, comparing us against nvidia containers and pypi wheels | 20:14:15 |
SomeoneSerge (back on matrix) | In reply to @hexa:lossy.network https://github.com/NixOS/nixpkgs/pull/219104 Outrageous | 20:14:25 |
SomeoneSerge (back on matrix) | hexa: numba numpy compatibility PR got merged | 20:20:37 |
hexa | good | 20:20:55 |
hexa | now if they do a release we can drop the patch | 20:21:05 |
hexa | aha | 21:05:08 |
hexa | so torch-bin w/o cuda 2s | 21:05:16 |
hexa | torch-bin w cuda 34s | 21:05:26 |
hexa | makes sense | 21:05:29 |
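(Assuming the 2s/34s figures above are import timings for torch-bin with and without CUDA, a minimal helper for producing such numbers; `time_import` is a hypothetical name, not something from the PR:)

```python
import importlib
import time


def time_import(name: str) -> float:
    """Return wall-clock seconds spent importing module `name`.

    Only meaningful on the first call for a given module: later
    calls hit sys.modules and return almost immediately.
    """
    t0 = time.monotonic()
    importlib.import_module(name)
    return time.monotonic() - t0
```

Usage would be e.g. `time_import("torch")` in a fresh interpreter for each variant being compared.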