
Nix Data Science

218 Members
54 Servers



2 Jul 2024
Bruno Rodrigues (@brodriguesco:matrix.org): I removed the unpackPhase now and simply use dpkg -x in the installPhase, but I get the same issue 09:27:10
jbedo (@jb:vk3.wtf): Might be the suid bit 10:03:10
jbedo (@jb:vk3.wtf): The store doesn't allow suid, so packaging won't be straightforward 10:11:28
jbedo (@jb:vk3.wtf): I might be a bit old-fashioned, but suid for a text editor seems ludicrous 10:12:14
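The dpkg -x situation discussed above can be checked mechanically: walk the unpacked tree and flag any file carrying the setuid bit that the Nix store would reject. A minimal sketch (the function name and usage below are illustrative, not from the conversation):

```python
import os
import stat

def find_suid(root):
    """Walk `root` and return paths of regular files with the setuid bit set."""
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            mode = os.lstat(path).st_mode
            # Only regular files matter; symlinks are skipped via lstat.
            if stat.S_ISREG(mode) and mode & stat.S_ISUID:
                hits.append(path)
    return hits
```

Running this over the directory produced by `dpkg -x` would show which binaries need their permissions normalized (or dropped) before they can live in the store.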
Bruno Rodrigues (@brodriguesco:matrix.org): Is it not possible to change the permissions during the unpackPhase, for example? 10:25:07
Bruno Rodrigues (@brodriguesco:matrix.org): Maybe I can look into this: https://github.com/NixOS/nixpkgs/blob/88d829e52cfbeee71d81704ce28f5b439f6dea16/nixos/modules/security/chromium-suid-sandbox.nix#L14 10:28:13
Bruno Rodrigues (@brodriguesco:matrix.org): I've reached the limits of my knowledge. It will likely have to be built from source then, but I'm too happy on Spacemacs to spend the time trying to do that :D 11:45:05
kupac (@kupac:matrix.org): It's in an experimental phase, so we don't have to rush the packaging imo. We can file a bug report upstream about the suid and wait for it to run its course. 13:40:38
kupac (@kupac:matrix.org): * It's in an experimental phase, so we don't have to rush the packaging imo. You can file a bug report upstream about the suid and wait for it to run its course. 13:40:54
@janik0:matrix.org left the room. 13:54:17
Bruno Rodrigues (@brodriguesco:matrix.org): You're right, I wanted to try to package it more as a learning exercise. Well, it was worth it, because I've learned about suid! 14:25:03
Bruno Rodrigues (@brodriguesco:matrix.org): [Redacted or Malformed Event] 14:28:26
Bruno Rodrigues (@brodriguesco:matrix.org):
In reply to kupac (@kupac:matrix.org):
It's in an experimental phase, so we don't have to rush the packaging imo. You can file a bug report upstream about the suid and wait for it to run its course.
but would you say it's an issue? They're likely doing that for sandboxing
14:41:32
Alexo (@alexoo:matrix.org) joined the room. 16:16:22
3 Jul 2024
@anjannath:matrix.org joined the room. 18:01:08
4 Jul 2024
monadam (@monadam:matrix.org) joined the room. 00:25:49
6 Jul 2024
@anjannath:matrix.org left the room. 04:50:52
jeroenvb3 (@jeroenvb3:matrix.org) joined the room. 23:43:32
jeroenvb3 (@jeroenvb3:matrix.org): Hi there, could someone point me to up-to-date documentation for getting Python to work with CUDA? I have tried some things, but I don't know what the best/official way to start is. Thanks. 23:54:01
7 Jul 2024
SomeoneSerge (utc+3) (@ss:someonex.net): import <nixpkgs> { config.cudaSupport = true; } 00:59:29
@1h0:matrix.org left the room. 08:53:13
jeroenvb3 (@jeroenvb3:matrix.org):

Could you please give me a bit more information? I added it to a shell.nix, which seemed promising:

# Run with `nix-shell cuda-shell.nix`
# { pkgs ? import <nixpkgs> {} }:
with import <nixpkgs> {
  config = {
    allowUnfree = true;
    cudaSupport = true;
  };
};

pkgs.mkShell {
   name = "cuda-env-shell";
   buildInputs = with pkgs; [
     git gitRepo gnupg autoconf curl
     procps gnumake util-linux m4 gperf unzip
     cudatoolkit linuxPackages.nvidia_x11
     libGLU libGL
     xorg.libXi xorg.libXmu freeglut
     xorg.libXext xorg.libX11 xorg.libXv xorg.libXrandr zlib 
     ncurses5 stdenv.cc binutils
     python39
     python39Packages.numpy
     python39Packages.numba
     libstdcxx5
     cudaPackages_11.cudatoolkit
   ];
   shellHook = ''
      export CUDA_PATH=${pkgs.cudatoolkit}
      # export LD_LIBRARY_PATH=${pkgs.linuxPackages.nvidia_x11}/lib:${pkgs.ncurses5}/lib
      export EXTRA_LDFLAGS="-L/lib -L${pkgs.linuxPackages.nvidia_x11}/lib"
      export EXTRA_CCFLAGS="-I/usr/include"
      export LD_LIBRARY_PATH=${pkgs.cudaPackages_11.cudatoolkit}/lib:$LD_LIBRARY_PATH
      export NUMBAPRO_NVVM=${pkgs.cudaPackages_11.cudatoolkit}/nvvm/lib64/libnvvm.so
      export NUMBAPRO_LIBDEVICE=${pkgs.cudaPackages_11.cudatoolkit}/nvvm/libdevice
   '';          
}

so when I go into that shell, I create a Python virtualenv, enter it, and install numba. Then I try to run:

from numba import cuda
print(cuda.detect())

It gives me:

Numba Version: 0.59.1
Traceback (most recent call last):
  File "/tmp/cuda/test.py", line 39, in <module>
    main()
  File "/tmp/cuda/test.py", line 19, in main
    print_version_info()
  File "/tmp/cuda/test.py", line 7, in print_version_info
    print("CUDA Version:", cuda.runtime.get_version())
  File "/nix/store/7m7c6crkdbzmzcrbwa4l4jqgnwj8m92b-python3.9-numba-0.59.1/lib/python3.9/site-packages/numba/cuda/cudadrv/runtime.py", line 111, in get_version
    self.cudaRuntimeGetVersion(ctypes.byref(rtver))
  File "/nix/store/7m7c6crkdbzmzcrbwa4l4jqgnwj8m92b-python3.9-numba-0.59.1/lib/python3.9/site-packages/numba/cuda/cudadrv/runtime.py", line 81, in safe_cuda_api_call
    self._check_error(fname, retcode)
  File "/nix/store/7m7c6crkdbzmzcrbwa4l4jqgnwj8m92b-python3.9-numba-0.59.1/lib/python3.9/site-packages/numba/cuda/cudadrv/runtime.py", line 89, in _check_error
    raise CudaRuntimeAPIError(retcode, msg)
numba.cuda.cudadrv.runtime.CudaRuntimeAPIError: [34] Call to cudaRuntimeGetVersion results in CUDA_ERROR_STUB_LIBRARY

I wouldn't mind reading more, but I can't find good sources on NixOS CUDA support to jump off from. Thanks again.

09:28:09
SomeoneSerge (utc+3) (@ss:someonex.net):
In reply to @jeroenvb3:matrix.org
Could you please give me a bit more information? I added it to a shell.nix which seemed promising: …
In this instance you have two nixpkgs instances: pkgs without CUDA support, and another implicit one in the top-level with expression. In buildInputs = with pkgs; [ ... ] you're still using packages from the no-CUDA instance
09:33:10
SomeoneSerge (utc+3) (@ss:someonex.net): Note that you don't need cudatoolkit in mkShells unless you're setting up a C++/CUDA dev environment 09:33:49
SomeoneSerge (utc+3) (@ss:someonex.net): You don't need linuxPackages.nvidia_x11 in mkShells, it'll only break things 09:34:31
SomeoneSerge (utc+3) (@ss:someonex.net):

numba.cuda.cudadrv.runtime.CudaRuntimeAPIError: [34] Call to cudaRuntimeGetVersion results in CUDA_ERROR_STUB_LIBRARY

Comes from a symlink to the stub library at ${cudatoolkit}/lib/libcuda.so, which you've listed in LD_LIBRARY_PATH and thereby given a higher priority than the path to the real driver already recorded in the libraries' and executables' headers

09:35:59
SomeoneSerge (utc+3) (@ss:someonex.net): Note: the symlink at ${cudatoolkit}/lib/libcuda.so was removed in a recent PR, and this particular error will go away once it reaches nixos-unstable. Nonetheless, you don't usually need cudatoolkit in LD_LIBRARY_PATH 09:37:11
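The shadowing described above follows from left-to-right search order: directories listed earlier in LD_LIBRARY_PATH win over later ones and over paths baked into the binaries. A deliberately simplified model of that lookup (ignoring RPATH/RUNPATH and the ld.so cache; the function name is hypothetical):

```python
import os

def resolve_library(name, ld_library_path):
    """Simplified model of the dynamic loader: scan LD_LIBRARY_PATH
    directories left to right and return the first file that matches."""
    for directory in ld_library_path.split(":"):
        if not directory:
            continue
        candidate = os.path.join(directory, name)
        if os.path.exists(candidate):
            return candidate
    return None
```

With the cudatoolkit lib directory (containing the stub libcuda.so) listed before anything that provides the real driver, the stub is what gets loaded, producing CUDA_ERROR_STUB_LIBRARY.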
dminca (@dminca:matrix.org) joined the room. 09:37:45
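The two-instances problem SomeoneSerge points out disappears once the CUDA-enabled evaluation is bound to a single name that every reference goes through. A rough sketch of such a shell.nix (package list trimmed for brevity; an untested sketch, not a verified expression):

```nix
# Bind the CUDA-enabled nixpkgs evaluation to one name, `pkgs`,
# so `with pkgs;` below draws from that same instance.
{ pkgs ? import <nixpkgs> {
    config = {
      allowUnfree = true;
      cudaSupport = true;
    };
  }
}:

pkgs.mkShell {
  name = "cuda-env-shell";
  buildInputs = with pkgs; [
    python39
    python39Packages.numpy
    python39Packages.numba
  ];
}
```

The key difference from the original is that there is no top-level `with import <nixpkgs> { ... };` floating alongside a separate, differently-configured `pkgs`.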
jeroenvb3 (@jeroenvb3:matrix.org):

Thank you very much. The previously mentioned file does indeed run successfully now. I did want to enable cudatoolkit, I'm pretty sure. This is what I am now trying to get running:

from numba import cuda
import numpy as np

@cuda.jit
def cudakernel0(array):
    for i in range(array.size):
        array[i] += 0.5

array = np.array([0, 1], np.float32)
print('Initial array:', array)

print('Kernel launch: cudakernel0[1, 1](array)')
cudakernel0[1, 1](array)

print('Updated array:',array)

Which gives this as the first error:

Initial array: [0. 1.]
Kernel launch: cudakernel0[1, 1](array)
/nix/store/7m7c6crkdbzmzcrbwa4l4jqgnwj8m92b-python3.9-numba-0.59.1/lib/python3.9/site-packages/numba/cuda/dispatcher.py:536: NumbaPerformanceWarning: Grid size 1 will likely result in GPU under-utilization due to low occupancy.
  warn(NumbaPerformanceWarning(msg))
Traceback (most recent call last):
  File "/nix/store/7m7c6crkdbzmzcrbwa4l4jqgnwj8m92b-python3.9-numba-0.59.1/lib/python3.9/site-packages/numba/cuda/cudadrv/nvvm.py", line 139, in __new__
    inst.driver = open_cudalib('nvvm')
  File "/nix/store/7m7c6crkdbzmzcrbwa4l4jqgnwj8m92b-python3.9-numba-0.59.1/lib/python3.9/site-packages/numba/cuda/cudadrv/libs.py", line 64, in open_cudalib
    return ctypes.CDLL(path)
  File "/nix/store/2j0l3b15gas78h9akrsfyx79q02i46hc-python3-3.9.19/lib/python3.9/ctypes/__init__.py", line 374, in __init__
    self._handle = _dlopen(self._name, mode)
OSError: libnvvm.so: cannot open shared object file: No such file or directory

However, I do have the related env vars set:

[nix-shell:/tmp/cuda]$ echo $NUMBAPRO_NVVM
/nix/store/0c8nf26hx9x9jxgj0s9bq10xg75nbfv0-cuda-merged-12.2/nvvm/lib64/libnvvm.so

[nix-shell:/tmp/cuda]$ echo $NUMBAPRO_LIBDEVICE
/nix/store/0c8nf26hx9x9jxgj0s9bq10xg75nbfv0-cuda-merged-12.2/nvvm/libdevice

A Stack Overflow answer says those are outdated and that CUDA_HOME needs to be set instead. I do set it to the same value as CUDA_PATH, but it doesn't seem to help. http://numba.pydata.org/numba-doc/latest/cuda/overview.html#setting-cuda-installation-path talks about ignoring minor-version paths, but I don't think that applies when I set it directly. I also don't have any non-minor-version paths in /nix/store/

This is now my shell.nix:

with import <nixpkgs> {
  config = {
    allowUnfree = true;
    cudaSupport = true;
  };
};

pkgs.mkShell {
   name = "cuda-env-shell";
   buildInputs = [
     git gitRepo gnupg autoconf curl
     procps gnumake util-linux m4 gperf unzip
     libGLU libGL
     xorg.libXi xorg.libXmu freeglut
     xorg.libXext xorg.libX11 xorg.libXv xorg.libXrandr zlib 
     ncurses5 stdenv.cc binutils
     python39
     python39Packages.numpy
     python39Packages.numba
     libstdcxx5
     cudaPackages.cudatoolkit
   ];
   shellHook = ''
      export CUDA_PATH=${pkgs.cudaPackages.cudatoolkit}
      export CUDA_HOME=${pkgs.cudatoolkit}
      export EXTRA_CCFLAGS="-I/usr/include"
      export NUMBAPRO_NVVM=${pkgs.cudatoolkit}/nvvm/lib64/libnvvm.so
      export NUMBAPRO_LIBDEVICE=${pkgs.cudatoolkit}/nvvm/libdevice
   '';          
}

I'm sorry if it's a lot to ask, but I would really like to learn about this and get it working. Do you still see anything wrong with my setup?

15:29:13
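When a variable like NUMBAPRO_NVVM is set but the library still "cannot be found", it helps to separate two failure modes: the file at the configured path is genuinely missing or unloadable, versus the consumer (here, a numba release that no longer reads the NUMBAPRO_* variables, as the Stack Overflow answer mentioned above suggests) simply ignoring the variable. A small diagnostic sketch, not taken from the conversation:

```python
import ctypes
import os

def check_nvvm(env_var="NUMBAPRO_NVVM"):
    """Report whether the path in `env_var` exists and can be dlopen'ed.
    Distinguishes 'file missing/broken' from 'variable not being read'."""
    path = os.environ.get(env_var)
    if not path:
        return f"{env_var} is not set"
    if not os.path.exists(path):
        return f"{env_var} points at a missing file: {path}"
    try:
        ctypes.CDLL(path)
    except OSError as exc:
        return f"{path} exists but failed to load: {exc}"
    return f"{path} loads fine; the consumer is likely not reading {env_var}"
```

If the last branch fires, the next thing to check is which configuration mechanism the installed numba version actually honors (e.g. CUDA_HOME), rather than the environment variable itself.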


