!eWOErHSaiddIbsUNsJ:nixos.org

NixOS CUDA

290 Members
CUDA packages maintenance and support in nixpkgs | https://github.com/orgs/NixOS/projects/27/ | https://nixos.org/manual/nixpkgs/unstable/#cuda

57 Servers



17 Jun 2024
@ss:someonex.net SomeoneSerge (back on matrix)
In reply to @grw00:matrix.org

hey all, has anyone had success using cuda libraries inside a docker container built with nix? i don't mean running a cuda container on a nixos host but the opposite, running a nix container containing a cuda program on another host
i build a container with nix and pytorch etc and run it on runpod, but it doesn't see the nvidia drivers/device, i guess i am missing something. currently i have:

        dockerImages.default = pkgs.dockerTools.streamLayeredImage {
          name = "ghcr.io/my-image";
          tag = "latest";

          contents = [
            pkgs.bash
            pkgs.uutils-coreutils-noprefix
            pkgs.cacert
            pkgs.libnvidia-container

            pythonEnv
          ];

          config = {
            Cmd = [ "${pkgs.bash}/bin/bash" ];
            Env = [
              "CUDA_PATH=${pkgs.cudatoolkit}"
              "LD_LIBRARY_PATH=${pkgs.linuxPackages_5_4.nvidia_x11}/lib"
            ];
          };
        };
Hard-coding linuxPackages in the image is a bad idea. With CUDA you normally don't want drivers in the image, you want the host's drivers mounted in the container
12:33:17
@ss:someonex.net SomeoneSerge (back on matrix)
No need for libnvidia-container in the image either, I think
12:34:17
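
[For reference, a minimal sketch of what the quoted image could look like with the driver bits dropped, per the advice above: no nvidia_x11 on LD_LIBRARY_PATH and no libnvidia-container in contents. It assumes the host injects its own driver at run time (e.g. via nvidia-container-toolkit / CDI); the /run/opengl-driver path is only the NixOS-host convention, other hosts register the mounted libraries through ldconfig instead.]

    dockerImages.default = pkgs.dockerTools.streamLayeredImage {
      name = "ghcr.io/my-image";
      tag = "latest";

      # Only user-space dependencies; the NVIDIA driver comes from the host.
      contents = [
        pkgs.bash
        pkgs.uutils-coreutils-noprefix
        pkgs.cacert
        pythonEnv   # pytorch etc. already carries the CUDA user-space libraries it needs
      ];

      config = {
        Cmd = [ "${pkgs.bash}/bin/bash" ];
        # Only relevant when the host mounts its driver at the NixOS path;
        # with nvidia-container-toolkit on a stock distro this can be omitted.
        Env = [ "LD_LIBRARY_PATH=/run/opengl-driver/lib" ];
      };
    };
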
@grw00:matrix.org grw00
In reply to @ss:someonex.net
Hard-coding linuxPackages in the image is a bad idea. With CUDA you normally don't want drivers in the image, you want the host's drivers mounted in the container
ah kk, got it. i'm specifically trying to use this on runpod.io, i don't think they offer this as a possibility. it seems like the images they offer all have cuda installed in the image
12:35:10
@ss:someonex.net SomeoneSerge (back on matrix)
In reply to @grw00:matrix.org
not sure what CDI is, i understand i need /run/opengl-driver but i'm not sure how to achieve that in a docker container
CDI is the new thing where you can specify, in a JSON file, what to mount where in the containers
12:36:28
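
[A rough sketch of the host-side CDI wiring on a NixOS machine, as of mid-2024; the option names are an assumption and may differ between releases, so check the NixOS options search. The image itself stays driver-free.]

    {
      # Generates a CDI spec describing the host's NVIDIA driver files,
      # so container runtimes can mount them into containers on request.
      hardware.nvidia-container-toolkit.enable = true;

      virtualisation.docker.enable = true;
    }
    # The container then requests the GPU at start-up, e.g.:
    #   docker run --device nvidia.com/gpu=all ghcr.io/my-image:latest \
    #     python -c "import torch; print(torch.cuda.is_available())"
    # (Docker may need its CDI support enabled; Podman handles CDI out of the box.)
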
@ss:someonex.net SomeoneSerge (back on matrix)
In reply to @grw00:matrix.org
ah kk, got it. i'm specifically trying to use this on runpod.io, i don't think they offer this as a possibility. it seems like the images they offer all have cuda installed in image
They have to have a driver on the host; it's separate from the CUDA toolkit
12:37:40


