
disko

356 Members, 89 Servers
disko - declarative disk partitioning - https://github.com/nix-community/disko



17 Nov 2024
@sigmasquadron:matrix.org (SigmaSquadron): LUKS-encrypted bcachefs or bcachefs-encrypted bcachefs? 17:08:48
@shift:c-base.org (shift): Bcache encrypted. Luka plus bcachefs doesn't make sense to me :) 17:09:15
@shift:c-base.org (shift): * Bcachefs encrypted. Luks plus bcachefs doesn't make sense to me :) 17:09:29
@shift:c-base.org (shift): Maybe I need to make @lassulus:lassul.us provide me an example at the next NixOS Berlin Meetup ;P 17:15:55
@lassulus:lassul.us (lassulus): hmm, maybe not implemented yet 17:16:39
@shift:c-base.org (shift):
In reply to @lassulus:lassul.us
hmm, maybe not implemented yet
I was told at the conference that people had it working already. Got promised example configs but never received them, hehe.
17:18:27
@shift:c-base.org (shift): So possibly. 17:18:35
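(For context on the distinction above: "bcachefs-encrypted" means using bcachefs's built-in encryption rather than stacking LUKS underneath it. Whether disko can drive this was still an open question in this thread, so the following is only a sketch of the plain bcachefs-tools workflow, with a placeholder device path.)

# not a disko config; placeholder device path
bcachefs format --encrypted /dev/nvme0n1p2   # prompts for a passphrase, enables native encryption
bcachefs unlock /dev/nvme0n1p2               # loads the key before the filesystem can be mounted
mount -t bcachefs /dev/nvme0n1p2 /mnt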
@kamillaova:matrix.org (Kamilla 'ova):
In reply to @magic_rb:matrix.redalder.org
did you figure this one out? I'm trying to do exactly what you're doing right now, I need to set up a bind mount /nix -> /persist/nix
https://yaso.su/Ur4CNRQ8
20:40:14
@kamillaova:matrix.org (Kamilla 'ova): * https://yaso.su/Ur4CNRQ8 https://yaso.su/bHoQgySf 20:40:45
@magic_rb:matrix.redalder.org (magic_rb): that's also a valid way, I did it slightly differently 20:40:56
18 Nov 2024
@kamillaova:matrix.org (Kamilla 'ova):
In reply to @magic_rb:matrix.redalder.org
that's also a valid way, I did it slightly differently
how?
07:16:42
@kamillaova:matrix.org (Kamilla 'ova):
In reply to @magic_rb:matrix.redalder.org
    nodev."/nix" = {
      fsType = "ext4";
      device = "/dev/disk/by-id/ata-SK_hynix_SC311_SATA_128GB_MJ88N52701150940J-part3";
      mountOptions = [
        "X-mount.subdir=nix"
      ];
      preMountHook = ''
        tmpdir=$(mktemp -d)

        mount /dev/disk/by-id/ata-SK_hynix_SC311_SATA_128GB_MJ88N52701150940J-part3 $tmpdir
        mkdir $tmpdir/nix
        umount $tmpdir
      '';
    };

jfc way too complex

ah I see
07:17:18
@kamillaova:matrix.org (Kamilla 'ova): hmmm 07:18:19
@kamillaova:matrix.org (Kamilla 'ova): [attached image: image.png] 07:18:20
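(The simpler route hinted at above, binding /persist/nix onto /nix, can also be written without X-mount.subdir or a preMountHook. A minimal sketch, assuming disko's nodev entries are passed straight through to NixOS fileSystems and that the persistent data already lives under /persist/nix; paths are illustrative.)

{
  disko.devices.nodev."/nix" = {
    # A bind mount has no filesystem of its own; the kernel just remaps the path.
    fsType = "none";
    device = "/persist/nix";
    mountOptions = [ "bind" ];
  };
}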
@matthewcroughan:defenestrate.it (matthewcroughan):

Even today I still get

++ mkdir -p /mnt/nix/store
++ xargs cp --recursive --target /mnt/nix/store
cp: cannot access '/nix/store/jdrm9vk8z5lixyaalnl61wy2gjw64l3h-kmod-blacklist-31+20240202-2ubuntu8': Cannot allocate memory

on the latest disko

16:37:43
@matthewcroughan:defenestrate.it (matthewcroughan): but why 16:37:44
@matthewcroughan:defenestrate.it (matthewcroughan): I have never been able to get to the bottom of it 16:44:48
@matthewcroughan:defenestrate.it (matthewcroughan): I think it only happens when the toplevel is large 16:50:45
@matthewcroughan:defenestrate.it (matthewcroughan): Ah yeah it's because imageSize is too small 16:53:19
@matthewcroughan:defenestrate.it (matthewcroughan): because there is no automatic calculation, because https://github.com/nix-community/disko/pull/465 wasn't good enough to get merged 16:54:45
@shift:c-base.org (shift): @matthewcroughan:defenestrate.it: sounds like you need to visit c-base on Tuesday to resolve it ;) 16:58:21
@matthewcroughan:defenestrate.it (matthewcroughan): I'm not in Berlin sadly 17:03:00
@matthewcroughan:defenestrate.it (matthewcroughan): Only Birkenhead 17:03:03
@matthewcroughan:defenestrate.it (matthewcroughan): Which is the new Berlin 17:03:07
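(The "Cannot allocate memory" from cp above shows up when the generated disk image is too small for the closure being copied into it. Since the linked PR for automatic sizing was not merged, the size has to be set by hand; a minimal sketch, assuming disko's per-disk imageSize attribute used by its image-building output. Device and layout are placeholders.)

{
  disko.devices.disk.main = {
    type = "disk";
    device = "/dev/vda";
    # Only consulted when building disk images with disko; raise it until the
    # system closure fits, since there is no automatic calculation.
    imageSize = "10G";
    content = {
      type = "gpt";
      partitions = { /* ... partitions as in a normal layout ... */ };
    };
  };
}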
20 Nov 2024
@inayet:matrix.org (Inayet) removed their profile picture. 00:59:16
@daddychan:matrix.org (daddychan) joined the room. 05:02:06
@daddychan:matrix.org (daddychan):

Hey everyone. Felt bad about opening a GitHub issue for this, so I thought I'd bring it here...

I'm trying to get a semi-complex system set up: an "impermanence" setup with ZFS as my root filesystem. I've got a 512GB NVMe drive for the OS and a 4TB drive for (NAS) bulk storage. I'm setting up a mirrored vdev for the OS (a 400GB partition on the OS drive and a 400GB partition on the storage drive). The rest of the storage drive is just a normal vdev, and I have a pool for each vdev. I was actually able to boot into my system yesterday, and things seemed to be working as expected! However, I forgot encryption. When I went back to add LUKS to the storage pool's partition, I was able to set the passphrase and unlock it, but ultimately was unable to mount the dataset under the encrypted drive. I see "Starting Import ZFS pool 'flashpool'", the start job hangs for a while but eventually fails, I drop into emergency mode, and get "Cannot open access to console, the root account is locked".

I think this may have something to do with the timing of when it's trying to import the pool, but I'm not sure; if anyone has experience with LUKS+ZFS on disko, let me know!

05:13:10
@daddychan:matrix.org (daddychan):

Here's my disko.nix:

let
  root_fs_partition = {
    size = "400G"; # GiB
    content = {
      type = "zfs";
      pool = "rootpool";
    };
  };
  boot_partition =
    { mountpoint }:
    {
      size = "1G";
      type = "EF00";
      content = {
        inherit mountpoint;
        type = "filesystem";
        format = "vfat";
        mountOptions = [ "umask=0077" ];
      };
    };
  swap_partition = {
    size = "64G";
    content = {
      type = "swap";
      resumeDevice = false;
    };
  };
  zfs_rootfs_options = {
    # Enables access control lists
    acltype = "posixacl";
    # Disables tracking the time a file is accessed (viewed/ls'ed)
    atime = "off";
    # Supposedly a bit faster than zstd at the cost of slightly less
    # compression
    compression = "lz4";
    # We'll mount datasets rather than the pool itself
    mountpoint = "none";
    # Sets extended attributes in inode instead of with hidden sidecar
    # folders
    xattr = "sa";
  };
  zfs_options = {
    # 12 is a good default value... this pertains to the physical sector
    # size of the storage device in use. It's hard to find information about
    # this for the SSD I'm using, and a test I found showed that tweaking
    # this on an SSD didn't provide any performance boost, so I'm leaving it
    # at 12.
    ashift = "12";
  };
in
{
  disko.devices = {
    disk = {
      homelab = {
        type = "disk";
        device = "/dev/disk/by-id/nvme-WD_BLACK_SN770_500GB_22127H802862";
        content = {
          type = "gpt";
          partitions = {
            ESP = boot_partition { mountpoint = "/boot"; };
            swap = swap_partition;
            zfs = root_fs_partition;
          };
        };
      };
      flash0 = {
        type = "disk";
        device = "/dev/disk/by-id/nvme-CT4000P3PSSD8_2411E89FE7EE";
        content = {
          type = "gpt";
          partitions = {
            ESP = boot_partition { mountpoint = "/boot2"; };
            zfs-mirror = root_fs_partition;
            flash = {
              size = "100%";
              content = {
                type = "luks";
                name = "flashluks";
                passwordFile = "/tmp/secret.phrase";
                settings = {
                  # Enables TRIM; does have some security concerns, but they seem minor to me
                  allowDiscards = true;
                  keyFile = "/tmp/secret.key";
                  keyFileOffset = 618;
                  keyFileSize = 2022;
                  keyFileTimeout = 30;
                  # fallbackToPassword cannot be used when boot.initrd.systemd
                  # is in use since it is implied by that option
                };
                content = {
                  type = "zfs";
                  pool = "flashpool";
                };
              };
            };
          };
        };
      };
      # spin0 = {
      #   type = "disk";
      #   device = "FIXME";
      #   content = {
      #     type = "gpt";
      #     partitions = {
      #       zfs = {
      #         size = "100%";
      #         content = {
      #           type = "zfs";
      #           pool = "spinpool";
      #         };
      #       };
      #     };
      #   };
      # };
    };
    # https://wiki.archlinux.org/title/Install_Arch_Linux_on_ZFS
    # https://jrs-s.net/2018/08/17/zfs-tuning-cheat-sheet/
    zpool = {
      rootpool = {
        type = "zpool";
        mode = {
          topology = {
            type = "topology";
            vdev = [
              {
                mode = "mirror";
                # This needs to be an absolute path value most likely
                # https://github.com/nix-community/disko/blob/380847d94ff0fedee8b50ee4baddb162c06678df/lib/types/zpool.nix#L140
                members = [
                  "/dev/disk/by-partlabel/disk-homelab-zfs"
                  "/dev/disk/by-partlabel/disk-flash0-zfs-mirror"
                ];
              }
            ];
          };
        };
        # -O options for zpool create
        rootFsOptions = zfs_rootfs_options;
        # -o options for zpool create
        options = zfs_options;
        datasets = {
          # All datasets under drop are erased on reboot
          "drop" = {
            type = "zfs_fs";
            options.mountpoint = "none";
          };
          "drop/root" = {
            type = "zfs_fs";
            mountpoint = "/";
            options."com.sun:auto-snapshot" = "false";
            postCreateHook = "zfs list -t snapshot -H -o name | grep -E '^rootpool/drop/root@blank$' || zfs snapshot rootpool/drop/root@blank";
          };
          "drop/nix" = {
            type = "zfs_fs";
            mountpoint = "/nix";
            options."com.sun:auto-snapshot" = "false";
          };
          # All datasets under keep are persisted on reboot
          "keep" = {
            type = "zfs_fs";
            options.mountpoint = "none";
          };
          "keep/keep" = {
            type = "zfs_fs";
            mountpoint = "/keep";
            options."com.sun:auto-snapshot" = "true";
          };
          "keep/home" = {
            type = "zfs_fs";
            mountpoint = "/home";
            # Used by services.zfs.autoSnapshot options.
            options."com.sun:auto-snapshot" = "true";
          };
        };
      };
      flashpool = {
        type = "zpool";
        # -O options for zpool create
        rootFsOptions = zfs_rootfs_options;
        # -o options for zpool create
        options = zfs_options;
        datasets = {
          "flashroot" = {
            type = "zfs_fs";
            options.mountpoint = "none";
          };
          "flashroot/flash" = {
            type = "zfs_fs";
            mountpoint = "/flash";
            options."com.sun:auto-snapshot" = "true";
          };
        };
      };
    };
  };
}
05:13:35
@daddychan:matrix.org (daddychan): Of note, I enabled boot.initrd.systemd in my config since it seemed like that was necessary to use keyFileTimeout, so that could also have something to do with this 05:14:57
@daddychan:matrix.org (daddychan): *

Hey everyone. Felt bad about opening a GitHub issue for this, so I thought I'd bring it here...

I'm trying to get a semi-complex system set up: an "impermanence" setup with ZFS as my root filesystem. I've got a 512GB NVMe drive for the OS and a 4TB drive for (NAS) bulk storage. I'm setting up a mirrored vdev for the OS (a 400GB partition on the OS drive and a 400GB partition on the storage drive). The rest of the storage drive is just a normal vdev, and I have a pool for each vdev. I was actually able to boot into my system yesterday, and things seemed to be working as expected! However, I forgot encryption. When I went back to add LUKS to the storage pool's partition, I was able to set the passphrase and unlock it, but ultimately was unable to mount the dataset under the encrypted drive. I see "Starting Import ZFS pool 'flashpool'", the start job hangs for a while but eventually fails, I drop into emergency mode, and get "Cannot open access to console, the root account is locked".

I think this may have something to do with the timing of when it's trying to import the pool, but I'm not sure. I also suspect using legacy mountpoints could potentially fix it? If anyone has experience with LUKS+ZFS on disko, let me know!

05:15:35
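(One commonly suggested way around pool-import timing problems like the one described above, and the legacy-mountpoint idea mentioned in the edit, is to give the datasets legacy mountpoints so they are mounted through the generated fileSystems entries, and to import a pool that is not needed for boot from stage 2 instead of the initrd. A minimal sketch, assuming the flashpool layout from the config above; not a verified fix for this exact failure.)

{
  # With the ZFS property mountpoint=legacy, the dataset is not auto-mounted by
  # zfs-mount; the fileSystems entry generated from `mountpoint` mounts it instead.
  disko.devices.zpool.flashpool.datasets."flashroot/flash" = {
    type = "zfs_fs";
    mountpoint = "/flash";
    options.mountpoint = "legacy";
    options."com.sun:auto-snapshot" = "true";
  };

  # If nothing on flashpool is needed in the initrd, let NixOS import it after
  # boot (once the LUKS container is already open) rather than racing it early.
  boot.zfs.extraPools = [ "flashpool" ];
}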


