| 17 Mar 2023 |
mjlbach | Thanks! This is how I used to do things, but one issue is that it's locked to whatever the latest is in nixpkgs.
Btw, how do you avoid cache misses? Is there a list of what cuda-maintainers is providing CI for? | 16:05:13 |
mjlbach | Ah, I see why you had to do the NVIDIA driver pinning; that's a bit unfortunate | 16:26:21 |
mjlbach | Is there a reason you didn't opt for setting the LD_LIBRARY_PATH directly? | 16:34:41 |
mjlbach | {
description = "A very basic flake for pytorch support";
inputs = {
nixpkgs.url = "github:nixos/nixpkgs/nixos-unstable";
};
nixConfig = {
# Add the CUDA maintainer's cache
extra-substituters = [
"https://nix-community.cachix.org"
"https://cuda-maintainers.cachix.org"
];
extra-trusted-public-keys = [
"nix-community.cachix.org-1:mB9FSh9qf2dCimDSUo8Zy7bkq5CX+/rkCWyvRCYg3Fs="
"cuda-maintainers.cachix.org-1:0dq3bujKpuEPMCX6U4WylrUDZ9JyUG0VpVZa7CNfq5E="
];
};
outputs = { self, nixpkgs }:
let
system = "x86_64-linux";
pkgs = import nixpkgs {
inherit system;
config = {
allowUnfree = true;
cudaEnabled = true;
cudaCapabilities = [ "8.6" ];
cudaForwardCompat = true;
};
};
my-python-packages = p: with p; [
(pytorch.override { cudaSupport = true; })
];
in
{
devShell.${system} = pkgs.mkShell {
packages = with pkgs; [
(python310.withPackages my-python-packages)
];
shellHook = ''
export CUDA_PATH=${pkgs.cudatoolkit}
export LD_LIBRARY_PATH=${pkgs.linuxPackages.nvidia_x11}/lib
export EXTRA_LDFLAGS="-L/lib -L${pkgs.linuxPackages.nvidia_x11}/lib"
export EXTRA_CCFLAGS="-I/usr/include"
'';
};
};
}
| 16:34:45 |
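[Editor's note: a quick way to sanity-check a dev shell like the one above, assuming the flake is in the current directory and the host has an NVIDIA driver installed:]

```shell
# Enter the dev shell defined by flake.nix in the current directory
nix develop
# Inside the shell: check that torch was built with CUDA and can see the GPU
python -c "import torch; print(torch.version.cuda, torch.cuda.is_available())"
```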
mjlbach | What would be the best way to collate information? A repo with template flakes people can pull from? A new wiki page? | 16:54:57 |
| 18 Mar 2023 |
Kevin Mittman (UTC-7) | EOL is complicated | 00:20:35 |
Kevin Mittman (UTC-7) | I was running Blender on a GTX 650 (Kepler). 470 driver still gets updates but max cuda toolkit was 10.2 | 00:23:24 |
connor (he/him) | In reply to @atrius:matrix.org Is there a reason you didn't opt for setting the LD_LIBRARY_PATH directly? I actually wasn't able to get everything working without using NixGL: LD_LIBRARY_PATH wasn't enough, and I also had to use LD_PRELOAD to make sure some things were loaded before the CUDA install I had on my system. I also tried symlinking my CUDA lib into /run/opengl (or whatever it was) but I still ran into issues. NixGL fixed all of that for me (I usually run Fedora betas, so who knows what was going wrong).
I don't mind the driver pinning; without it, the flake is impure and not reproducible. | 00:27:16 |
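[Editor's note: the NixGL approach mentioned above can be sketched as follows; this assumes the nix-community/nixGL flake and a working `nix develop` shell, and is not the exact invocation used in this conversation:]

```shell
# Wrap the interpreter with nixGL so it picks up the host's GPU driver
# libraries instead of (or in addition to) what Nix provides
nix run github:nix-community/nixGL -- python -c "import torch; print(torch.cuda.is_available())"
```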
mjlbach | I run fedora too, but it worked with that above flake for me | 00:29:00 |
connor (he/him) | Is pytorch an alias? I thought in Nix it was just torch. Also, does it work without specifying cudaSupport in the override and instead specifying it in config (instead of using cudaEnabled)? | 00:32:51 |
SomeoneSerge (matrix works sometimes) | It used to be just pytorch. It was renamed, and the old name is now an alias, I think | 00:33:46 |
SomeoneSerge (matrix works sometimes) |
does it work without specifying cudaSupport
You mean (import <nixpkgs> { config.cudaSupport = true; }).python3Packages.torch? Sure
| 00:34:39 |
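[Editor's note: the two approaches being compared here can be sketched side by side; a minimal illustration, not the exact expressions from this conversation:]

```nix
let
  # 1. Global config: every package that honours config.cudaSupport
  #    (torch, opencv, etc.) builds with CUDA enabled
  pkgsCuda = import <nixpkgs> {
    config.allowUnfree = true;
    config.cudaSupport = true;
  };
  torchGlobal = pkgsCuda.python3Packages.torch;

  # 2. Per-package override: only torch is rebuilt with CUDA;
  #    the rest of the package set stays CPU-only
  pkgs = import <nixpkgs> { config.allowUnfree = true; };
  torchLocal = pkgs.python3Packages.torch.override { cudaSupport = true; };
in
{ inherit torchGlobal torchLocal; }
```

The mix-up in this thread (`cudaEnabled` vs the correct `cudaSupport` config key) means option 1 silently did nothing, which is why only the explicit override (option 2) worked at first.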
mjlbach | FYI it doesn't work for me without overriding pytorch specifically | 00:40:04 |
SomeoneSerge (matrix works sometimes) | This doesn't sound right o_0 | 00:40:25 |
mjlbach | warning: Git tree '/home/michael/Repositories/nix-tests/nix-shell' is dirty
(nix:nix-shell-env) [michael@fedora nix-shell]$ python
Python 3.10.10 (main, Feb 7 2023, 12:19:31) [GCC 12.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> torch.cuda.is_available()
False
>>>
| 00:41:05 |
mjlbach | Feel free to try the above flake | 00:42:58 |
SomeoneSerge (matrix works sometimes) | In reply to @atrius:matrix.org I run fedora too, but it worked with that above flake for me Ah, right, that's why you set LD_LIBRARY_PATH | 00:50:22 |
SomeoneSerge (matrix works sometimes) | In reply to @atrius:matrix.org (the flake above) It's cudaSupport in config, btw. But beside the point | 00:50:57 |
SomeoneSerge (matrix works sometimes) | In reply to @atrius:matrix.org (the torch.cuda.is_available() → False session above) Hmm, how about LD_DEBUG=libs python -c "import torch; torch.cuda.is_available()" | gh gist create -? | 00:52:20 |
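[Editor's note: expanding on the LD_DEBUG suggestion, a sketch of how to capture and filter the dynamic loader's trace; assumes torch is importable in the current shell:]

```shell
# LD_DEBUG=libs makes the glibc dynamic loader print its library search
# and resolution steps to stderr; redirect that to a file
LD_DEBUG=libs python -c "import torch; torch.cuda.is_available()" 2> ld_debug.log
# Check which libcuda.so (the driver-side library) was found, or where
# the lookups failed -- a missing/wrong libcuda.so is the usual cause
# of torch.cuda.is_available() returning False
grep "libcuda" ld_debug.log
```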
mjlbach | No debugging necessary, cudaSupport was the issue :) | 00:53:33 |
mjlbach | What are the default capabilities/cudaForwardCompat options being built and pushed to cachix? | 00:53:55 |
mjlbach | Would be good to document them | 00:53:59 |
SomeoneSerge (matrix works sometimes) | Waaait, but you do override cudaSupport in pytorch? | 00:54:00 |
SomeoneSerge (matrix works sometimes) | That should've been sufficient | 00:54:07 |
mjlbach | {
description = "A very basic flake for pytorch support";
inputs = {
nixpkgs.url = "github:nixos/nixpkgs/nixos-unstable";
};
nixConfig = {
# Add the CUDA maintainer's cache
extra-substituters = [
"https://nix-community.cachix.org"
"https://cuda-maintainers.cachix.org"
];
extra-trusted-public-keys = [
"nix-community.cachix.org-1:mB9FSh9qf2dCimDSUo8Zy7bkq5CX+/rkCWyvRCYg3Fs="
"cuda-maintainers.cachix.org-1:0dq3bujKpuEPMCX6U4WylrUDZ9JyUG0VpVZa7CNfq5E="
];
};
outputs = { self, nixpkgs }:
let
system = "x86_64-linux";
pkgs = import nixpkgs {
inherit system;
config = {
allowUnfree = true;
cudaSupport = true;
# cudaCapabilities = [ "8.6" ];
cudaForwardCompat = true;
};
};
my-python-packages = p: with p; [
torch
];
in
{
devShell.${system} = pkgs.mkShell {
packages = with pkgs; [
(python310.withPackages my-python-packages)
];
shellHook = ''
export CUDA_PATH=${pkgs.cudatoolkit}
export LD_LIBRARY_PATH=${pkgs.linuxPackages.nvidia_x11}/lib
export EXTRA_LDFLAGS="-L/lib -L${pkgs.linuxPackages.nvidia_x11}/lib"
export EXTRA_CCFLAGS="-I/usr/include"
'';
};
};
}
| 00:54:11 |
mjlbach | That works | 00:54:17 |
mjlbach | It worked when I overrode cuda support in pytorch | 00:54:30 |
SomeoneSerge (matrix works sometimes) | In reply to @atrius:matrix.org What are the default capabilities/cudaForwardCompat options being built and pushed to cachix? They're kind of a new thing, and it was my fault that the default capabilities' cache got out of date... | 00:54:35 |
SomeoneSerge (matrix works sometimes) | But yes, you're right | 00:54:42 |