Nix Data Science (!fXpAvneDgyJuYMZSwO:nixos.org)

194 Members · 52 Servers

21 Mar 2022
Samuel Ainsworth (@skainswo:matrix.org): Someone S: Let's create our own channel, just for the sake of having our own space. I don't want to spam the data science folks with things that may not be directly relevant to them. [22:29:30]
Samuel Ainsworth (@skainswo:matrix.org): How do we create a Matrix room on the nixos.org domain? [22:30:47]
jbedo (@jb:vk3.wtf): ask in #matrix-suggestions:nixos.org [22:31:19]
22 Mar 2022
stites (@stites:matrix.org) joined the room. [02:05:37]
Someone S (@ss:someonex.net): #cuda:nixos.org Samuel Ainsworth [13:29:31]
29 Mar 2022
FRidh (@FRidh:matrix.org) joined the room. [08:32:32]
3 Apr 2022
lunik1 (@lunik1:lunik.one) changed their profile picture. [23:13:09]
8 Apr 2022
carlthome (@carlthome:matrix.org) joined the room. [15:12:59]
carlthome (@carlthome:matrix.org): Hello hello! Anybody working with deep learning on audio data here? 🤗 [15:14:48]
Samuel Ainsworth (@skainswo:matrix.org):
> In reply to carlthome: Hello hello! Anybody working with deep learning on audio data here? 🤗
I'm not sure about audio data specifically, but I know that a few people are using NixOS for DL research. [23:15:06]
9 Apr 2022
FRidh (@FRidh:matrix.org):
> In reply to carlthome: Hello hello! Anybody working with deep learning on audio data here? 🤗
Working with audio here, and some ML but not DL. On the to-do list :) [06:52:56]
16 Apr 2022
Hilmar (he/him) (@lihram:jnh.ems.host) joined the room. [23:12:40]
17 Apr 2022
yuu (@yuu:matrix.org) joined the room. [04:23:53]
yuu (@yuu:matrix.org): Does anyone have a flake for PyTorch + CUDA development? A shell.nix would work too. [06:30:27]
yuu (@yuu:matrix.org):
{
  description = "Pytorch and stuff";

  # Specifies other flakes that this flake depends on.
  inputs = {
    devshell.url = "github:numtide/devshell";
    utils.url = "github:numtide/flake-utils";
    nixpkgs.url = "github:nixos/nixpkgs/nixos-21.11";
  };

  # Function that produces an attribute set.
  # Its function arguments are the flakes specified in inputs.
  # The self argument denotes this flake.
  outputs = inputs@{ self, nixpkgs, utils, ... }:
    (utils.lib.eachSystem [ "x86_64-linux" ] (system:
      let
        pkgs = (import nixpkgs {
          inherit system;
          config = {
            # For CUDA.
            allowUnfree = true;
            # Enables CUDA support in packages that support it.
            cudaSupport = true;
          };
        });
      in rec {
        # Executed by `nix build .#<name>`
        # Ignore this, it was just for testing.
        packages = utils.lib.flattenTree {
          hello = pkgs.hello;
        };

        # Executed by `nix build .`
        defaultPackage = packages.hello;
        # defaultPackage = pkgs.callPackage ./default.nix { };

        # Executed by `nix develop`
        devShell = with pkgs; mkShell {
          buildInputs = [
            python39 # numba-0.54.1 not supported for interpreter python3.10
          ] ++ (with python39.pkgs; [
            inflect
            librosa
            pip
            pytorch-bin
            unidecode
          ]) ++ (with cudaPackages; [
            cudatoolkit
          ]);

          shellHook = ''
            export CUDA_PATH=${pkgs.cudatoolkit}
          '';
        };
      }
    ));
}
[14:52:09]
Gaétan Lepage (@glepage:matrix.org) joined the room. [22:00:17]
Gaétan Lepage (@lepageg:ensimag.fr) left the room. [22:02:06]
18 Apr 2022
Someone S (@ss:someonex.net):
> In reply to yuu: anyone who has a flake for pytorch + cuda development? or it could be a shell.nix too
Do you mean pytorch extensions? [14:03:22]
Someone S (@ss:someonex.net):
> In reply to yuu:
> (the flake.nix posted above)
Is there something pytorchWithCuda currently fails to do that you manage to accomplish with pytorch-bin? [14:04:00]
Someone S (@ss:someonex.net): ...and: hello there :) [14:04:06]
yuu (@yuu:matrix.org):
Someone S: hi there ^-^ I just wanted a minimal flake.nix with PyTorch and CUDA enabled. I did it with this flake:

(the flake.nix posted above)

and a direnv + nix-direnv .envrc:

use flake

[18:52:01]
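The direnv setup described above boils down to two commands (a sketch, assuming direnv and nix-direnv are already installed and hooked into your shell):

```shell
# Tell direnv to load the flake's devShell whenever you enter this directory
echo "use flake" > .envrc

# Approve the .envrc once; skip gracefully if direnv isn't on PATH
command -v direnv >/dev/null && direnv allow || true
```

After that, `cd`-ing into the project directory drops you into the same environment that `nix develop` would give you, without running it by hand.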
Someone S (@ss:someonex.net):

👍️

I'll try to send some of the shells that I use a little later.
But meanwhile, a few links you might find useful, if you haven't seen them yet:

  • zimbatm's nixpkgs-unfree: https://discourse.nixos.org/t/announcing-nixpkgs-unfree/17505
    The idea is to use inputs.nixpkgs.url = github:numtide/nixpkgs-unfree/$branch as an input, which exposes nixpkgs with unfree already enabled and tracks NixOS/nixpkgs automatically. I'm using something like this for the flake registry in my configuration.nix, so that nixpkgs#python3Packages.pytorch resolves to the pre-cached unfree and cuda-enabled pytorch for me, instead of throwing an error about NIXPKGS_ALLOW_UNFREE=1.
  • It's somewhat experimental, but there's now a cachix cache with prebuilt cuda-enabled sci-comp packages; you can find more at https://nixos.wiki/wiki/CUDA. The cache is populated automatically for the last release, unstable, and master, much like nixpkgs-unfree does it. There's a slight delay between a branch update and the cache, but probably not more than a day.
  • I prefer using the source-based pytorch instead of the wheel-based pytorch-bin. The way I use cuda-enabled packages is essentially just importing nixpkgs with { config = { allowUnfree = true; cudaSupport = true; }; } and using pytorch, jax, blender, etc. directly. And I'd rather consume nixpkgs-unfree as a flake input than import it myself in a project.
[19:13:04]
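The configuration-based approach from the last bullet can be sketched as a minimal flake (illustrative only; the nixpkgs branch and the choice of packages are assumptions, not something posted in the chat):

```nix
{
  description = "Sketch: source-based PyTorch with CUDA via nixpkgs config";

  inputs = {
    nixpkgs.url = "github:nixos/nixpkgs/nixos-21.11";
    utils.url = "github:numtide/flake-utils";
  };

  outputs = { self, nixpkgs, utils, ... }:
    utils.lib.eachSystem [ "x86_64-linux" ] (system:
      let
        pkgs = import nixpkgs {
          inherit system;
          config = {
            allowUnfree = true;  # CUDA is an unfree dependency
            cudaSupport = true;  # build CUDA variants of packages that support it
          };
        };
      in {
        # `nix develop` enters a shell with the source-built,
        # cuda-enabled pytorch instead of the wheel-based pytorch-bin.
        devShell = pkgs.mkShell {
          buildInputs = with pkgs.python3.pkgs; [ pytorch ];
        };
      });
}
```

Swapping the nixpkgs input for nixpkgs-unfree would drop the need for allowUnfree here and line the build up with its binary cache.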
yuu (@yuu:matrix.org): Someone S: that's definitely useful info! I'm going to implement your tips. Thank you so much! By "And I'd rather consume nixpkgs-unfree as a flake input than import it myself in a project", you mean nixpkgs-unfree as an input to the project repository's flake.nix, right? [21:19:16]
Someone S (@ss:someonex.net): Yes, exactly! [21:37:36]
20 Apr 2022
rh (@ahsmha:matrix.org) joined the room. [23:33:06]
21 Apr 2022
wybpip (@wybpip:matrix.org) joined the room. [00:49:27]
wybpip (@wybpip:matrix.org) left the room. [00:49:28]
24 Apr 2022
ebeem-sama (@ebeem:matrix.org) joined the room. [19:50:58]
26 Apr 2022
Ki (@johnwanyekz:matrix.org) joined the room. [10:35:08]
29 Apr 2022
HedgeMage (@hedgemage:freehold.earth) joined the room. [17:17:27]



Room Version: 6