disko (!oNSIfazDqEcwhcOjSL:matrix.org)

disko - declarative disk partitioning - https://github.com/nix-community/disko
404 Members · 110 Servers



6 Feb 2025
[22:10:23] waltmck: yep that was it
[22:10:30] waltmck: ok, easy problem resolution lol
7 Feb 2025
[14:53:24] ian joined the room.
[15:12:15] brian: in NixOS VM tests, is there a way to force the disk UUID to be the same/consistent?
8 Feb 2025
[01:22:28] terrorjack joined the room.
[01:49:21] lassulus: you can set the UUIDs in the config now, maybe that is sufficient?
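A minimal sketch of what setting the UUID in the config might look like, assuming a recent disko revision where GPT partitions accept a `uuid` attribute (verify the exact option name against your disko version's documentation):

```nix
{
  disko.devices.disk.main = {
    device = "/dev/vda";
    type = "disk";
    content = {
      type = "gpt";
      partitions.root = {
        size = "100%";
        # Pin the partition GUID so it is identical on every VM test run.
        # NOTE: assumed option name; check your disko revision.
        uuid = "deadbeef-dead-beef-dead-beefdeadbeef";
        content = {
          type = "filesystem";
          format = "ext4";
          mountpoint = "/";
        };
      };
    };
  };
}
```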
[02:24:27] terrorjack set a profile picture.
[02:25:02] terrorjack removed their profile picture.
[23:52:02] Zamyatin joined the room.
9 Feb 2025
[04:00:12] projectinitiative joined the room.
[04:08:31] projectinitiative:

Hi,
I am trying to add a new feature for multi-drive bcachefs support, modeled after zfs and mdadm/mdraid.
When I attempt to build the check nix build .\#checks.x86_64-linux.bcachefs-multi-disk.driverInteractive, I run into the following error:

       > In /nix/store/v86q00ycq0a1vlwa6fhx3hims0mxj0a7-disko-mount line 140:
       >     readarray -t pool_devices < <(cat "$disko_devices_dir"/bcachefs_pool1)
       >                                        ^----------------^ SC2154 (warning): disko_devices_dir is referenced but not assigned

I know this gets set beforehand (and injected?) in lib/default.nix's _create method. I believe I have added the necessary type to that file as well. Curious whether someone has more insight into how this architecture works and which variables are provided to types.

Here is my lib/default.nix: https://github.com/ProjectInitiative/disko/blob/feat/bcachefs-as-member/lib/default.nix#L37
[05:09:32] projectinitiative: I think I can answer my own question: topLevel only includes this var for _create methods. I have it in a _mount; I will need to rework some of the logic.
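A hypothetical sketch of that split, modeled on how the zfs member type appears to coordinate with its pool (all attribute names here are illustrative, not confirmed disko internals): the member records its device during create, where "$disko_devices_dir" is in scope, and mount has to rediscover members by some on-disk identifier instead.

```nix
{ config, ... }: {
  # Hypothetical bcachefs member sketch; not actual disko internals.
  _create = ''
    # $disko_devices_dir is injected into create scripts, so the member
    # can record its device for the pool's create step to pick up.
    echo "${config.device}" >> "$disko_devices_dir"/bcachefs_${config.pool}
  '';
  _mount = ''
    # disko_devices_dir is NOT available in mount scripts; members must
    # be rediscovered another way, e.g. by filesystem label.
    mount -t bcachefs /dev/disk/by-label/${config.pool} /mnt
  '';
}
```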
[22:51:59] @tired:fairydust.space left the room.
10 Feb 2025
[05:51:00] snipped-paws joined the room.
[05:56:41] snipped-paws: [Redacted or malformed event]
[06:10:38] snipped-paws:

Hello, I am new-ish to NixOS and disko and am trying to get a setup working for my home server. I have this cut-down config that is just supposed to format the single mirror I plan to install NixOS on, but it does not seem to correctly create the ZFS partition (or the zpool and datasets). When I run sudo disko --mode disko disko-config.nix, it appears to work, but when I check the partitions in GNOME Disks in the live environment, the content of the partition is listed as "Unknown." The EFI boot partition is recognized as expected. Any help would be much appreciated!

{
  disko.devices = let
    nixosPartitions = {
      ESP = {
        size = "512M";
        type = "EF00";
        content.type = "filesystem";
        content.format = "vfat";
        content.mountpoint = "/boot";
      };
      zfs = {
        size = "100%";
        content.type = "zfs";
        content.pool = "nixos";
      };
    };
  in {
    disk = {
      # NixOS boot mirror (240GBx2)
      ssd0 = {
        device = "/dev/sda";
        type = "disk";
        content = {
          type = "gpt";
          partitions = nixosPartitions;
        };
      };
      ssd1 = {
        device = "/dev/sdb";
        type = "disk";
        content = {
          type = "gpt";
          partitions = nixosPartitions;
        };
      };
    };
  };

  zpool = {
    # NixOS boot mirror
    nixos = {
      type = "zpool";
      mode = "mirror";
      options = { ashift = "12"; };
      datasets = {
        "root" = {
          type = "zfs_fs";
          mountpoint = "/";
          options = {
            encryption = "aes-256-gcm";
            keyformat = "passphrase";
            keylocation = "prompt";
          };
        };
        "nix" = {
          type = "zfs_fs";
          mountpoint = "/nix";
          options."com.sun:auto-snapshot" = "false";
        };
        "home" = {
          type = "zfs_fs";
          mountpoint = "/home";
          options = {
            encryption = "aes-256-gcm";
            keyformat = "raw";
            keylocation = "file:///etc/zfskey";
          };
        };
      };
    };
  };
}
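One structural thing worth checking in the config above: the zpool attribute set sits outside disko.devices (the let ... in block closes before it), so disko never sees it, which would explain the missing pool and datasets. A minimal sketch of the intended nesting (disk bodies elided):

```nix
{
  disko.devices = {
    disk = {
      # ssd0 / ssd1 definitions as above
    };
    # zpool must be a sibling of "disk", inside disko.devices
    zpool = {
      nixos = {
        type = "zpool";
        mode = "mirror";
        # options and datasets as above
      };
    };
  };
}
```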
[07:16:56] lassulus: GNOME Disks can't show ZFS partitions
[07:17:01] lassulus: afaik
[11:02:03] snipped-paws:

I've managed to make a little more progress:

{
  disko.devices = let
    nixosPartitions = {
      ESP = {
        size = "512M";
        type = "EF00";
        content.type = "filesystem";
        content.format = "vfat";
        content.mountpoint = "/boot";
      };
      zfs = {
        size = "100%";
        content.type = "zfs";
        content.pool = "zpool-nixos";
      };
    };
  in {
    disk = {
      # NixOS boot mirror (240GBx2)
      ssd0 = {
        device = "/dev/sda";
        type = "disk";
        content = {
          type = "gpt";
          partitions = nixosPartitions;
        };
      };
      ssd1 = {
        device = "/dev/sdb";
        type = "disk";
        content = {
          type = "gpt";
          partitions = nixosPartitions;
        };
      };
    };

    zpool = {
      # NixOS boot mirror
      zpool-nixos = {
        type = "zpool";
        mode = "mirror";
        options = { ashift = "12"; };
        preCreateHook = ''
          # Create the temporary keyfile before any ZFS dataset is created
          head -c 32 /dev/urandom > /run/tmp-zfskey
          chmod 600 /run/tmp-zfskey
        '';
        datasets = {
          "root" =
            { # NOTE: Don't encrypt root dataset directly to allow flexibility later.
              type = "zfs_fs";
              mountpoint = "/";
            };
          "root/unlock" = {
            type = "zfs_fs";
            mountpoint = "/unlock";
            options = {
              encryption = "aes-256-gcm";
              keyformat = "passphrase";
              keylocation = "prompt";
            };
          };
          "root/nix" = {
            type = "zfs_fs";
            mountpoint = "/nix";
            options = {
              encryption = "aes-256-gcm";
              keyformat = "raw";
              keylocation = "file:///run/tmp-zfskey";
            };
          };
          "root/home" = {
            type = "zfs_fs";
            mountpoint = "/home";
            options = {
              encryption = "aes-256-gcm";
              keyformat = "raw";
              keylocation = "file:///run/tmp-zfskey";
            };
          };
        };
        postCreateHook = ''
          # Move the keyfile into the unlock dataset
          # cp /run/tmp-zfskey /mnt/unlock/zfskey
          # chmod 600 /mnt/unlock/zfskey

          echo "postCreateHook running" >> /tmp/disko-debug.log
          ls -lah /mnt/unlock >> /tmp/disko-debug.log 2>&1
          echo "before" >> /tmp/disko-debug.log 2>&1
          cp /run/tmp-zfskey /mnt/unlock/zfskey >> /tmp/disko-debug.log 2>&1
          echo "after" >> /tmp/disko-debug.log 2>&1
          chmod 600 /mnt/unlock/zfskey >> /tmp/disko-debug.log 2>&1
          ls -lah /mnt/unlock >> /tmp/disko-debug.log 2>&1

          # Update keylocation for other datasets (will be correct after reboot)
          zfs set keylocation=file:///unlock/zfskey zpool-nixos/root/nix
          zfs set keylocation=file:///unlock/zfskey zpool-nixos/root/home
        '';
      };
    };
  };
}

This mostly runs, but the postCreateHook has an issue with copying the key file. I am not sure I am going about this the right way, so any pointers would be much appreciated. I basically want one dataset encrypted with a passphrase, and all other encrypted datasets referencing a key file stored on that passphrase-protected dataset, so that I can unlock everything with a single password.

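The copy likely fails because postCreateHook runs during the create phase, before the datasets are mounted under /mnt, so /mnt/unlock does not exist yet. A minimal sketch of moving the copy into a mount-phase hook instead, assuming your disko version exposes postMountHook on the zpool type and mounts under /mnt in --mode disko (verify both against your disko revision):

```nix
{
  disko.devices.zpool.zpool-nixos = {
    type = "zpool";
    # mode, options, and datasets as above
    postMountHook = ''
      # Runs after the datasets are mounted, so /mnt/unlock exists.
      cp /run/tmp-zfskey /mnt/unlock/zfskey
      chmod 600 /mnt/unlock/zfskey
      # Point the other datasets at the key's post-boot location.
      zfs set keylocation=file:///unlock/zfskey zpool-nixos/root/nix
      zfs set keylocation=file:///unlock/zfskey zpool-nixos/root/home
    '';
  };
}
```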
[13:31:25] projectinitiative: Does anyone have an explanation of how disk types and their dependencies get tracked? Similar to mdadm/mdraid: the mdraid members run before mdadm itself, since they are a prerequisite. I am trying to do something similar but running into issues.
