12 Mar 2022 |
| chris joined the room. | 08:32:01 |
13 Mar 2022 |
| Samuel Ainsworth joined the room. | 05:46:34 |
| josw joined the room. | 08:08:53 |
| mcwitt joined the room. | 22:16:33 |
17 Mar 2022 |
SomeoneSerge (UTC+U[-12,12]) | Samuel Ainsworth: hey there | 20:19:15 |
SomeoneSerge (UTC+U[-12,12]) | So, I presume this would be the appropriate channel to coordinate CUDA things? | 20:19:29 |
20 Mar 2022 |
| David Guibert joined the room. | 15:12:02 |
21 Mar 2022 |
Samuel Ainsworth | Someone S: Let's create our own channel just for the sake of having our own space. I don't want to spam data science folks with things that may not be directly relevant for them | 22:29:30 |
Samuel Ainsworth | How do we create a matrix room on the nixos.org domain? | 22:30:47 |
jbedo | ask in #matrix-suggestions:nixos.org | 22:31:19 |
22 Mar 2022 |
| stites joined the room. | 02:05:37 |
SomeoneSerge (UTC+U[-12,12]) | #cuda:nixos.org Samuel Ainsworth | 13:29:31 |
29 Mar 2022 |
| FRidh joined the room. | 08:32:32 |
3 Apr 2022 |
| lunik1 changed their profile picture. | 23:13:09 |
8 Apr 2022 |
| Carl Thomé joined the room. | 15:12:59 |
Carl Thomé | Hello hello! Anybody working with deep learning on audio data here? 🤗 | 15:14:48 |
Samuel Ainsworth | In reply to @carlthome:matrix.org Hello hello! Anybody working with deep learning on audio data here? 🤗 I'm not sure about audio data specifically, but I know that a few people are using NixOS for DL research | 23:15:06 |
9 Apr 2022 |
FRidh | In reply to @carlthome:matrix.org Hello hello! Anybody working with deep learning on audio data here? 🤗 Working with audio here and some ML but not DL. On the to do list :) | 06:52:56 |
16 Apr 2022 |
| Hilmar (he/him) joined the room. | 23:12:40 |
17 Apr 2022 |
| Yuu Yin joined the room. | 04:23:53 |
Yuu Yin | anyone who has a flake for pytorch + cuda development? or it could be a shell.nix too | 06:30:27 |
Yuu Yin | {
  description = "Pytorch and stuff";
  # Specifies other flakes that this flake depends on.
  inputs = {
    devshell.url = "github:numtide/devshell";
    utils.url = "github:numtide/flake-utils";
    nixpkgs.url = "github:nixos/nixpkgs/nixos-21.11";
  };
  # Function that produces an attribute set.
  # Its function arguments are the flakes specified in inputs.
  # The self argument denotes this flake.
  outputs = inputs@{ self, nixpkgs, utils, ... }:
    (utils.lib.eachSystem [ "x86_64-linux" ] (system:
      let
        pkgs = import nixpkgs {
          inherit system;
          config = {
            # For CUDA.
            allowUnfree = true;
            # Enables CUDA support in packages that support it.
            cudaSupport = true;
          };
        };
      in rec {
        # Built by `nix build .#<name>`
        # Ignore this, it was just for testing.
        packages = utils.lib.flattenTree {
          hello = pkgs.hello;
        };
        # Built by `nix build .`
        defaultPackage = packages.hello;
        # defaultPackage = pkgs.callPackage ./default.nix { };
        # Entered by `nix develop`
        devShell = with pkgs; mkShell {
          buildInputs = [
            python39 # numba-0.54.1 not supported for interpreter python3.10
          ] ++ (with python39.pkgs; [
            inflect
            librosa
            pip
            pytorch-bin
            unidecode
          ]) ++ (with cudaPackages; [
            cudatoolkit
          ]);
          shellHook = ''
            export CUDA_PATH=${pkgs.cudatoolkit}
          '';
        };
      }));
} | 14:52:09 |
| Gaétan Lepage joined the room. | 22:00:17 |
| Gaétan Lepage left the room. | 22:02:06 |
18 Apr 2022 |
SomeoneSerge (UTC+U[-12,12]) | In reply to @yuu:matrix.org anyone who has a flake for pytorch + cuda development? or it could be a shell.nix too Do you mean pytorch extensions? | 14:03:22 |
SomeoneSerge (UTC+U[-12,12]) | In reply to @yuu:matrix.org (the flake.nix above) Is there something pytorchWithCuda currently fails to do that you manage to accomplish with pytorch-bin? | 14:04:00 |
SomeoneSerge (UTC+U[-12,12]) | ...and: hello there:) | 14:04:06 |
Yuu Yin | Someone S: hi there ^-^ I just wanted a minimal flake.nix with PyTorch and CUDA enabled. I did it with the flake above, plus a direnv + nix-direnv .envrc containing:
use flake
| 18:52:01 |
SomeoneSerge (UTC+U[-12,12]) | 👍️
I'll try to send some of the shells that I use a little later. But meanwhile, a few links you might find useful, if you haven't seen them yet:
- zimbatm's nixpkgs-unfree: https://discourse.nixos.org/t/announcing-nixpkgs-unfree/17505 The idea is to use inputs.nixpkgs.url = github:numtide/nixpkgs-unfree/$branch as an input, which exposes nixpkgs with unfree already enabled and tracks NixOS/nixpkgs automatically. I'm using something like this for the flake registry in my configuration.nix, so that nixpkgs#python3Packages.pytorch resolves into the pre-cached unfree and CUDA-enabled pytorch for me, instead of throwing an error about NIXPKGS_ALLOW_UNFREE=1
- It's somewhat experimental, but there's now a cachix cache with prebuilt CUDA-enabled sci-comp packages; you can find more at https://nixos.wiki/wiki/CUDA. The cache is populated automatically for the last release, unstable, and master, much like nixpkgs-unfree does it. There's a slight delay between a branch update and the cache, but probably not more than a day
- I prefer using the source-based pytorch instead of the wheel-based pytorch-bin. The way I use CUDA-enabled packages is essentially just importing nixpkgs with { config = { allowUnfree = true; cudaSupport = true; }; } and using pytorch, jax, blender, etc. directly. And I'd rather consume nixpkgs-unfree as a flake input than import it myself in a project.
| 19:13:04 |
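[Editorial aside: the import pattern quoted in the last bullet, written out as a standalone shell.nix. This is a minimal sketch, not from the chat; the nixpkgs pin and the package selection are illustrative assumptions.]

```nix
# Import nixpkgs with unfree and CUDA support enabled, then use the
# source-based (not wheel-based) packages directly, as described above.
let
  pkgs = import <nixpkgs> {
    config = {
      allowUnfree = true;   # CUDA is unfree
      cudaSupport = true;   # build CUDA-enabled variants where supported
    };
  };
in pkgs.mkShell {
  buildInputs = with pkgs.python3Packages; [
    pytorch  # source-based, CUDA-enabled via cudaSupport
  ];
}
```

To follow the nixpkgs-unfree suggestion instead, the `<nixpkgs>` import would be replaced by a flake input pointing at github:numtide/nixpkgs-unfree/$branch, which takes care of allowUnfree on its own.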
Yuu Yin | Someone S: that's definitely useful info! I'm going to implement your tips. Thank you so much! By "And I'd rather consume nixpkgs-unfree as a flake input than import it myself in a project", you mean nixpkgs-unfree as an input to the project repository's flake.nix, right? | 21:19:16 |