| 18 May 2024 |
connor (burnt/out) (UTC-8) | However, I would strongly recommend writing a few scripts to provision an Azure instance instead. For example, Standard_HB120rs_v3 (https://learn.microsoft.com/en-us/azure/virtual-machines/hbv3-series) is available as a spot instance in US-East for just $0.36 an hour. Keep in mind that it has a 10Gb NIC in addition to two 1TB NVME drives. It's also server-grade hardware, so no need to chase down segfaults caused by the motherboard melting your nice chips :) | 12:54:53 |
connor (burnt/out) (UTC-8) | I mean seriously, just in troubleshooting stability issues yesterday I got frustrated and got new RAM for all my machines. That was about $1000 -- that would have bought me ~2,777h of the HBv3 as a spot instance. | 12:58:17 |
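A quick back-of-the-envelope check of the ~2,777h figure above, assuming the quoted $0.36/hour spot price and the ~$1000 RAM purchase:

```python
# How many HBv3 spot hours the ~$1000 RAM purchase would have bought,
# at the quoted $0.36/hour spot price.
ram_cost_usd = 1000.00
spot_price_usd_per_hour = 0.36

spot_hours = ram_cost_usd / spot_price_usd_per_hour
print(round(spot_hours))  # ~2778, i.e. the "~2,777h" quoted above
```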
SomeoneSerge (matrix works sometimes) | >>> magma_compute_hours = (19.5 / 60) * 24 * 2 # 24 hyper-threading cores
>>> 2777 / magma_compute_hours
178.0128205128205
AFAIU after about 180 magma builds Azure will have cost more than your RAM, and I think we build several magmas a day 🤔 | 18:47:59 |
| 19 May 2024 |
connor (burnt/out) (UTC-8) | Correction, since the i9-13900k has 32 hardware threads in total (some cores are hyper-threaded and others are not)
>>> magma_compute_hours = (19.5 / 60) * 32 # 32 "cores"
>>> 2777 / magma_compute_hours
267.01923076923
| 01:36:25 |
connor (burnt/out) (UTC-8) | However, that assumes it takes magma the same amount of time to build on an i9-13900k as it does on the HBv3 (it does not) | 01:36:50 |
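The caveat above can be made explicit. This is a hedged sketch, not a measured number: `hbv3_speedup` is a hypothetical factor for how much faster the HBv3 finishes one magma build than the i9-13900k.

```python
# Break-even build count from the chat, generalized for the (unmeasured)
# possibility that the HBv3 builds magma faster than the i9-13900k.
local_build_hours = (19.5 / 60) * 32  # 19.5 min on each of 32 "cores"
spot_budget_hours = 2777              # ~$1000 of HBv3 spot time at $0.36/h

def break_even_builds(hbv3_speedup: float) -> float:
    """Builds until the spot bill matches the ~$1000 RAM purchase.

    hbv3_speedup is hypothetical: 1.0 reproduces the figure above,
    2.0 assumes the HBv3 builds magma twice as fast.
    """
    hbv3_build_hours = local_build_hours / hbv3_speedup
    return spot_budget_hours / hbv3_build_hours

print(round(break_even_builds(1.0)))  # ~267, matching the chat figure
print(round(break_even_builds(2.0)))  # twice as many builds if 2x faster
```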
aidalgol | nvidia-smi is reporting 0% GPU usage even when I am running a game and I can hear my card's fans speed up. Is it reporting correctly for anyone else? | 09:47:55 |
aidalgol | It sounds exactly like this: https://forums.developer.nvidia.com/t/nvidia-smi-reporting-0-gpu-utilization/261878 | 09:48:51 |
connor (burnt/out) (UTC-8) | I can try to do a thing on my GPU in a bit and see what happens | 13:33:30 |
connor (burnt/out) (UTC-8) | ahahaha okay well...
python3.11-nix-cuda-test> Running phase: pythonRuntimeDepsCheckHook
python3.11-nix-cuda-test> Executing pythonRuntimeDepsCheck
python3.11-nix-cuda-test> Checking runtime dependencies for nix_cuda_test-0.1.0-py3-none-any.whl
python3.11-nix-cuda-test> - torchvision>=0.15.0 not satisfied by version 0.18.0a0
| 13:46:37 |
connor (burnt/out) (UTC-8) | so now I guess that needs to be fixed | 13:46:46 |
connor (burnt/out) (UTC-8) | I don't have experience with Python's packaging so I'm not sure how this is implemented: https://github.com/NixOS/nixpkgs/blob/4e6ae832dcc55a3d8c0b05504548524f297f7ed5/pkgs/development/interpreters/python/hooks/python-runtime-deps-check-hook.py#L81-L85 | 13:51:44 |
Gaétan Lepage | Ok ! Thanks for the details ! | 13:54:29 |
Gaétan Lepage | You have other uses for storage than nix builds, right ? | 13:55:08 |
connor (burnt/out) (UTC-8) | Ah yeah definitely!
I'm really into multi-frame super resolution so I've been trying to start aggregating photography I've done to turn it into a dataset | 13:56:32 |
connor (burnt/out) (UTC-8) | I've also got a Light L16 I want to use to create a dataset, and a Lytro Illum because I thought it could be neat to see what I can do with a plenoptic camera | 13:57:07 |
connor (burnt/out) (UTC-8) | UGH https://github.com/pytorch/vision/blob/v0.18.0/version.txt | 13:58:36 |
Gaétan Lepage | Oh I see !
At first, I looked at old MB/CPU combos on ebay (Epyc) but they are
- DDR4
- not "that" cheap
- slower than more modern chips
Lately I was more looking at the Threadripper 7960x | 13:58:38 |
Gaétan Lepage | But it's quite expensive, and the MBs too | 13:58:56 |
connor (burnt/out) (UTC-8) | They left it as 0.18.0a0 in version.txt | 13:58:57 |
connor (burnt/out) (UTC-8) | Oof yeah any of the workstation-grade chips are very expensive | 13:59:25 |
connor (burnt/out) (UTC-8) | I didn't realize how dumb Nix's remote build protocol is in terms of scheduling (it doesn't take advantage of data locality, doesn't keep records of build times of previous versions of packages with that name to decide how to allocate, etc.), so I thought scaling out would be better than scaling up | 14:00:14 |
connor (burnt/out) (UTC-8) | nixbuild.net is doing amazing stuff with respect to scaling out though -- they've re-implemented the Nix remote build protocol, so while their endpoint presents itself as a single monolithic machine, on the backend they're able to scale instances up and down as needed | 14:01:50 |
connor (burnt/out) (UTC-8) | hexa (UTC+1): sorry for the @ -- any ideas if the above failure (last four messages) is by design? I'm not familiar with packaging but I saw you contributed the hook doing the version check. I'd just like to know whether I should tell upstream or patch in-tree. | 14:04:53 |
hexa (UTC+1) | the upstream package pins that version | 14:05:44 |
hexa (UTC+1) | and we provide something that doesn't match that constraint | 14:06:00 |
Gaétan Lepage | Ok ! So what tier would you think is the most interesting for a builder: consumer, HEDT or pro ? | 14:17:57 |
connor (burnt/out) (UTC-8) | Ah it's because pre-releases aren't allowed by default right | 14:18:05 |
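The failure discussed above can be reproduced with the `packaging` library (an illustration of the PEP 440 rule, not the hook's actual code): specifier matching excludes pre-releases like torchvision's `0.18.0a0` unless they are explicitly allowed.

```python
# Why ">=0.15.0" rejects "0.18.0a0": PEP 440 specifier matching
# skips pre-release versions unless they are explicitly allowed.
from packaging.specifiers import SpecifierSet
from packaging.version import Version

spec = SpecifierSet(">=0.15.0")       # the torchvision constraint from the log
v = Version("0.18.0a0")               # the value left in version.txt

print(spec.contains(v))                    # False: pre-release excluded
print(spec.contains(v, prereleases=True))  # True once pre-releases are allowed
```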