| 14 Nov 2024 |
SomeoneSerge (back on matrix) | i haven't even had time to review zeuner's tensorflow prs... | 16:45:41 |
connor (burnt/out) (UTC-8) | In reply to @aliarokapis:matrix.org hi all! is https://docs.nvidia.com/vpi/2.0/index.html packaged anywhere? as far as I can tell, it's only available via debian installers. I'd take a look at https://github.com/anduril/jetpack-nixos since they have some tooling set up to repackage stuff like that (they also already have VPI iirc, but maybe not a new version) | 17:40:58 |
| 15 Nov 2024 |
Alexandros Liarokapis | I wish I could run nixos modules inside a light container or something on a non-nixos machine. Waiting for systemd-nspawn to make this easier. | 09:46:58 |
@adam:robins.wtf | What’s a light container? | 12:46:54 |
Alexandros Liarokapis | Nothing, I mean just a container, compared to a full-blown vm. | 12:49:52 |
hexa | cool, we'll also break numba | 18:48:28 |
@adam:robins.wtf | Alexandros Liarokapis: i do this with incus. i'd probably call it a fat container, but it's a full nixos env inside a container. the downside/upside is it's separately managed from the host | 22:00:55 |
| 16 Nov 2024 |
Alexandros Liarokapis | This is very interesting. I recall a redhat talk about modifying LXD to achieve this and that is when I started looking into it | 08:05:21 |
Alexandros Liarokapis | Got any resources I can look into? | 08:05:30 |
Alexandros Liarokapis | Actually I think the wiki page has enough info to get me started | 08:06:31 |
Alexandros Liarokapis | .. or not, it is mainly nixos based. | 08:15:50 |
Alexandros Liarokapis | i guess I may as well try it | 08:16:07 |
hexa |
error: tensorflow-gpu-2.13.0 not supported for interpreter python3.12
| 20:45:57 |
hexa | the sound of nixos 24.05 hits hard | 20:46:03 |
| 17 Nov 2024 |
Gaétan Lepage | Yes... Let's hope zeuner finds the time to finish the TF bump... | 10:38:39 |
| 18 Nov 2024 |
hexa | wyoming-faster-whisper[4505]: File "/nix/store/dfp38l0dy3n97wvrgz5i62mwvsmshd3n-python3.12-faster-whisper-unstable-2024-07-26/lib/python3.12/site-packages/faster_whisper/transcribe.py", line 145, in __init__
wyoming-faster-whisper[4505]: self.model = ctranslate2.models.Whisper(
wyoming-faster-whisper[4505]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^
wyoming-faster-whisper[4505]: RuntimeError: CUDA failed with error unknown error
systemd[1]: wyoming-faster-whisper-medium-en.service: Main process exited, code=exited, status=1/FAILURE
| 02:09:21 |
hexa | also loving "unknown error" errors | 02:09:26 |
hexa | wyoming-faster-whisper[4745]: File "/nix/store/dfp38l0dy3n97wvrgz5i62mwvsmshd3n-python3.12-faster-whisper-unstable-2024-07-26/lib/python3.12/site-packages/faster_whisper/transcribe.py", line 145, in __init__
wyoming-faster-whisper[4745]: self.model = ctranslate2.models.Whisper(
wyoming-faster-whisper[4745]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^
wyoming-faster-whisper[4745]: RuntimeError: CUDA failed with error no CUDA-capable device is detected
| 02:10:44 |
hexa | baby steps | 02:10:46 |
hexa | I can confirm the card is still seated correctly 😄 | 02:10:58 |
hexa | hardening at work | 02:18:46 |
connor (burnt/out) (UTC-8) | Ugh I don’t like computers | 05:10:46 |
connor (burnt/out) (UTC-8) | Anyway in the interest of splitting my attention ever more thinly I decided to start trying to work on some approach toward evaluation of derivations and building them
The idea being to have
- a service which is given a flake ref and an attribute path and efficiently produces a list of attribute paths to derivations existing under the given attribute path and stores the eval time somewhere
- a service which is given a flake ref and an attribute path to a derivation and produces the JSON representation of the closure of derivations required to realize the derivation, again storing eval time somewhere
- a service which functions as a job scheduler, using historical data about costs (space, time, memory, CPU usage, etc.) and information about locality (existing store paths on different builders) to realize a derivation, which is updated upon realization of a derivation
| 05:18:41 |
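(For illustration, here is a minimal Python sketch of two of the ideas above: walking a nested attrset-like structure to enumerate attribute paths to derivations, and a scheduler that picks a builder using a historical time estimate plus a locality credit for store paths the builder already has. Everything here is hypothetical — the `"type": "derivation"` marker, the per-builder time table, and the 5-seconds-per-missing-path penalty are all made-up stand-ins, not anything from an actual implementation.)

```python
from dataclasses import dataclass


def derivation_paths(attrs, prefix=()):
    """Walk a nested attrset-like dict, yielding dotted attribute paths
    to values that look like derivations. The marker used here (a dict
    containing "type": "derivation") is a hypothetical convention."""
    for name, value in attrs.items():
        path = prefix + (name,)
        if isinstance(value, dict):
            if value.get("type") == "derivation":
                yield ".".join(path)
            else:
                yield from derivation_paths(value, path)


@dataclass
class Builder:
    name: str
    store_paths: set  # store paths already realized on this builder


def pick_builder(input_paths, builders, expected_seconds):
    """Choose the builder minimizing an assumed cost model: a historical
    build-time estimate per builder, plus a fixed penalty (5 s, assumed)
    for every input closure path the builder would have to substitute."""
    def cost(b):
        missing = len(set(input_paths) - b.store_paths)
        return expected_seconds.get(b.name, 60.0) + 5.0 * missing
    return min(builders, key=cost)
```

The real versions of these services would of course sit on top of actual evaluation (e.g. shelling out to `nix eval`) and real telemetry; this only sketches the shape of the data flowing between them.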