!kFJOpVCFYFzxqjpJxm:nixos.org

Nix HPC

85 Members
Nix for High Performance Computing clusters
20 Servers



5 Jan 2024
@ss:someonex.net SomeoneSerge (hash-versioned python modules when) *

Btw

❯ strace /run/current-system/sw/bin/nvidia-container-cli "--user" "configure" "--no-cgroups" "--device=all" "--compute" "--utility" "--ldconfig=@/run/current-system/sw/bin/ldconfig" "/nix/store/rzycmg66zpap6gjb5ylmvd8ymlfb7fag-apptainer-1.2.5/var/lib/apptainer/mnt/session/final"
...
capget({version=_LINUX_CAPABILITY_VERSION_3, pid=0}, NULL) = 0
capget({version=_LINUX_CAPABILITY_VERSION_3, pid=0}, {effective=0, permitted=0, inheritable=1<<CAP_WAKE_ALARM}) = 0
capget({version=_LINUX_CAPABILITY_VERSION_3, pid=0}, NULL) = 0
capset({version=_LINUX_CAPABILITY_VERSION_3, pid=0}, {effective=0, permitted=1<<CAP_CHOWN|1<<CAP_DAC_OVERRIDE|1<<CAP_DAC_READ_SEARCH|1<<CAP_FOWNER|1<<CAP_KILL|1<<CAP_SETGID|1<<CAP_SETUID|1<<CAP_SETPCAP|1<<CAP_NET_ADMIN|1<<CAP_SYS_CHROOT|1<<CAP_SYS_PTRACE|1<<CAP_SYS_ADMIN|1<<CAP_MKNOD, inheritable=1<<CAP_WAKE_ALARM}) = -1 EPERM (Operation not permitted)
write(2, "nvidia-container-cli: ", 22nvidia-container-cli: )  = 22
write(2, "permission error: capability cha"..., 67permission error: capability change failed: operation not permitted) = 67
write(2, "\n", 1
)                       = 1
exit_group(1)                           = ?
+++ exited with 1 +++
...
nvidia-container-cli: permission error: capability change failed: operation not permitted

Was this even supposed to work without root?

21:04:36
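The strace above shows capset() failing with EPERM: capset() can only raise capabilities that are already in the caller's permitted set. A minimal sketch for checking this from a shell, assuming a Linux /proc (capsh, if installed, can decode the masks):

```shell
# Inspect the current process's capability sets. For a plain user shell
# without setuid or file capabilities, CapPrm is all zeros, so a capset()
# call that tries to raise CAP_SYS_ADMIN etc. must fail with EPERM --
# which is exactly what nvidia-container-cli runs into above.
grep '^Cap' /proc/self/status
# Optionally decode a mask (assumption: libcap's capsh is on PATH):
# capsh --decode=0000003fffffffff
```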
@ss:someonex.net SomeoneSerge (hash-versioned python modules when) *

Propagating --debug to nvidia-container-cli: https://gist.github.com/SomeoneSerge/a4317ccec07e33324c588eb6f7c6f04a#file-gistfile0-txt-L310

Which is the same as if you manually run https://matrix.to/#/%23hpc%3Anixos.org/%24aCLdJvRqyXSNc0_LfuTb7tFxL3hBhMfXUzc13whct0U?via=someonex.net&via=matrix.org&via=kde.org&via=dodsorf.as

Like, did it even require CAP_SYS_ADMIN before?

21:59:19
7 Jan 2024
@ss:someonex.net SomeoneSerge (hash-versioned python modules when) Filed the issues upstream finally (apptainer and libnvidia-docker). Thought I'd never get around to doing that, I feel exhausted smh 01:04:08
8 Jan 2024
@ss:someonex.net SomeoneSerge (hash-versioned python modules when) changed their display name from SomeoneSerge (UTC+2) to SomeoneSerge (hash-versioned python modules when). 04:50:14
9 Jan 2024
@dguibert:matrix.org David Guibert joined the room. 14:58:17
10 Jan 2024
@shamrocklee:matrix.org ShamrockLee (Yueh-Shun Li) I'm terribly busy the following weeks, and probably don't have time until the end of January. 16:21:32
@shamrocklee:matrix.org ShamrockLee (Yueh-Shun Li) * I'll be terribly busy the following weeks, and probably won't have time until the end of January. 16:21:58
11 Jan 2024
@ss:someonex.net SomeoneSerge (hash-versioned python modules when) Merged the apptainer --nv patch. Still no idea what on earth could've broken docker run --gpus all. Going to look into the mpi situation again, as far as I'm concerned it's totally broken but maybe I just don't get it 01:04:11
17 Jan 2024
@ss:someonex.net SomeoneSerge (hash-versioned python modules when) A very typical line from Nixpkgs' SLURM build logs: -g -O2 ... -ggdb3 -Wall -g -O1 17:25:26
@ss:someonex.net SomeoneSerge (hash-versioned python modules when) What's there not to love about autotools 17:25:41
@connorbaker:matrix.org connor (he/him) (UTC-5) Thanks, I hate it 21:03:22
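The duplicated flags are noisy but not ambiguous: for GCC, the rightmost -O and -g variant wins, so the line above effectively compiles at -O1 with plain -g. A quick shell sketch of resolving such a flag soup by hand:

```shell
# The flag soup from the SLURM build log. GCC applies -O and -g options
# left to right, later ones overriding earlier ones, so the last match
# of each family is the one that takes effect.
flags='-g -O2 -ggdb3 -Wall -g -O1'
printf '%s\n' $flags | grep '^-O' | tail -n1   # effective optimization level: -O1
printf '%s\n' $flags | grep '^-g' | tail -n1   # effective debug-info flag: -g
```

($flags is deliberately unquoted so the shell splits it into one flag per word.)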
18 Jan 2024
@ss:someonex.net SomeoneSerge (hash-versioned python modules when)
❯ ag eeeeeeeeeeeeeeeeeeeeeeeeeeeeeeee result-lib/ --search-binary
result-lib/lib/security/pam_slurm_adopt.la
41:libdir='/nix/store/eeeeeeeeeeeeeeeeeeeeeeeeeeeeeeee-slurm-23.11.1.1/lib/security'

result-lib/lib/perl5/5.38.2/x86_64-linux-thread-multi/perllocal.pod
7:C<installed into: /nix/store/eeeeeeeeeeeeeeeeeeeeeeeeeeeeeeee-slurm-23.11.1.1/lib/perl5/site_perl/5.38.2>
29:C<installed into: /nix/store/eeeeeeeeeeeeeeeeeeeeeeeeeeeeeeee-slurm-23.11.1.1/lib/perl5/site_perl/5.38.2>

Binary file result-lib/lib/libslurm.so.40.0.0 matches.

result-lib/lib/security/pam_slurm.la
41:libdir='/nix/store/eeeeeeeeeeeeeeeeeeeeeeeeeeeeeeee-slurm-23.11.1.1/lib/security'

Binary file result-lib/lib/slurm/libslurmfull.so matches.

Binary file result-lib/lib/slurm/mpi_pmi2.so matches.

Binary file result-lib/lib/slurm/libslurm_pmi.so matches
❯ strings result-lib/lib/slurm/mpi_pmi2.so | rg eeeeeeeeeeeeeeeeeeeeeeeeeeeeeeee
/nix/store/eeeeeeeeeeeeeeeeeeeeeeeeeeeeeeee-slurm-23.11.1.1/bin/srun

arghhghhhghghghghghggh why

15:32:31
@ss:someonex.net SomeoneSerge (hash-versioned python modules when) Context: error: cycle detected in build of '/nix/store/391cjl6zqqsaz33disfcn3nzv87bygc1-slurm-23.11.1.1.drv' in the references of output 'bin' from output 'lib' 15:34:41
@ss:someonex.net SomeoneSerge (hash-versioned python modules when) Just as mpich and openmpi aren't amenable to splitting their outputs (you can't link just the library; the executables stay in the runtime closure for no good reason), neither is slurm, apparently 15:35:29
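The ag/strings output above is why the cycle appears: Nix discovers references by scanning each output's files for the store hashes of other outputs, and the srun path embedded in mpi_pmi2.so makes lib depend on bin while bin already depends on lib. A miniature of that scan, using throwaway files and the placeholder hash from the log rather than a real store path:

```shell
# Miniature of Nix's reference scan: every file in an output is searched
# for the store hashes of the other outputs; any hit becomes a closure edge.
# demo/ and the hash are placeholders for illustration, not a real store path.
hash=eeeeeeeeeeeeeeeeeeeeeeeeeeeeeeee
mkdir -p demo/lib/slurm
printf '/nix/store/%s-slurm-23.11.1.1/bin/srun\n' "$hash" > demo/lib/slurm/mpi_pmi2.txt
grep -rl "$hash" demo/   # each match is a lib -> bin reference, hence the cycle
```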
19 Jan 2024
@markuskowa:matrix.orgmarkuskowa SomeoneSerge (hash-versioned python modules when): I have managed to split the dev outputs of the mpi implementations. I will open a PR soon. 09:34:16
@ss:someonex.netSomeoneSerge (hash-versioned python modules when) WOW! What did you do to the config.h? 10:15:59
@ss:someonex.netSomeoneSerge (hash-versioned python modules when)I managed to make slurm build libpmi2.so and to split it out into a separate output last night10:16:18
22 Jan 2024
@ss:someonex.net SomeoneSerge (hash-versioned python modules when)

markuskowa

Linking slurm's libpmi2 seems to kind of work at Aalto:

❯ ssh triton srun -N3 --mpi=pmi2 singularity exec cpi.sif cpi
srun: job 27525153 queued and waiting for resources
srun: job 27525153 has been allocated resources
...
--------------------------------------------------------------------------
WARNING: There was an error initializing an OpenFabrics device.

  Local host:   csl13
  Local device: mlx5_0
--------------------------------------------------------------------------
--------------------------------------------------------------------------
WARNING: There was an error initializing an OpenFabrics device.

  Local host:   csl2
  Local device: mlx5_0
--------------------------------------------------------------------------
--------------------------------------------------------------------------
WARNING: There was an error initializing an OpenFabrics device.

  Local host:   csl11
  Local device: mlx5_0
--------------------------------------------------------------------------
Process 1 of 3 is on csl11.int.triton.aalto.fi
Process 2 of 3 is on csl13.int.triton.aalto.fi
Process 0 of 3 is on csl2.int.triton.aalto.fi
pi is approximately 3.1415926544231318, Error is 0.0000000008333387
wall clock time = 0.043120
23:46:54
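The OpenFabrics warnings come from Open MPI probing mlx5_0 via its legacy openib BTL. If the fabric is already handled by UCX (or plain TCP is acceptable for the test), the component can be excluded through an MCA environment variable; this is a sketch assuming the Open MPI inside cpi.sif reads OMPI_MCA_* from the environment:

```shell
# Exclude Open MPI's legacy openib BTL to silence the OpenFabrics warnings;
# the '^' prefix in an MCA selection means "all components except these".
export OMPI_MCA_btl='^openib'
# Hypothetical rerun of the job above with the variable in place:
# ssh triton srun -N3 --mpi=pmi2 singularity exec cpi.sif cpi
```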
@ss:someonex.net SomeoneSerge (hash-versioned python modules when) NO SEGFAULTS 23:46:58
@ss:someonex.net SomeoneSerge (hash-versioned python modules when) (still a ton of memory leaks reported by asan though) 23:50:53
@ss:someonex.net SomeoneSerge (hash-versioned python modules when) (idk, maybe leaks are a feature of mpi and I should ignore this) 23:51:23
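One-shot allocations that MPI runtimes hold until exit are indeed commonly suppressed rather than fixed. LeakSanitizer supports a suppressions file for exactly this; a sketch, where the library names are illustrative guesses rather than taken from the log:

```shell
# LSan suppressions: match leak reports by module or function name.
# libopen-pal / libmpi are guesses at the Open MPI libraries involved.
cat > lsan.supp <<'EOF'
leak:libopen-pal
leak:libmpi
EOF
# Hypothetical run of an ASan-instrumented cpi with the suppressions applied:
# LSAN_OPTIONS=suppressions=lsan.supp ./cpi
```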
28 Jan 2024
@remcoschrijver:tchncs.de Remco Schrijver joined the room. 22:50:55
31 Jan 2024
@federicodschonborn:matrix.org Federico Damián Schonborn changed their profile picture. 03:36:47
@federicodschonborn:matrix.org Federico Damián Schonborn changed their profile picture. 06:22:22
18 Feb 2024
@nscnt:matrix.org nscnt joined the room. 07:31:58
5 Mar 2024
@nscnt:matrix.org nscnt left the room. 18:33:31
14 Mar 2024
@federicodschonborn:matrix.org Federico Damián Schonborn left the room. 02:04:21
@mjolnir:nixos.org NixOS Moderation Bot changed room power levels. 18:44:37
15 Mar 2024
@spacesbot:nixos.dev spacesbot - keeps a log of public NixOS channels joined the room. 04:05:00
@grahamc:nixos.org joined the room. 23:16:51


