!oNSIfazDqEcwhcOjSL:matrix.org

disko

353 Members
disko - declarative disk partitioning - https://github.com/nix-community/disko
89 Servers



14 Jan 2025
@dustee:matrix.org dustee * When I connect to sshfs with nix filesystems.<>.fsType = "sshfs", the mountpoint freezes and doesn't work properly, but a manual mount with the same options works as expected. Anyone experiencing anything similar? 16:40:00
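A minimal sketch of what such a declarative sshfs mount can look like in NixOS; the host, remote path, and options here are illustrative assumptions, not taken from the message above:

```nix
{
  # Hypothetical sshfs mount; "user@host:/srv/data" and the key path
  # are placeholders. sshfs needs an identity file readable by root.
  fileSystems."/mnt/remote" = {
    device = "user@host:/srv/data";
    fsType = "sshfs";
    options = [
      "IdentityFile=/root/.ssh/id_ed25519"
      "reconnect"               # try to recover dropped connections
      "ServerAliveInterval=15"  # detect dead links instead of hanging
      "_netdev"                 # wait for the network before mounting
      "allow_other"
    ];
  };
}
```

A frozen mountpoint is often the symptom that reconnect and ServerAliveInterval are meant to mitigate, though that is only a guess about the issue described above.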
15 Jan 2025
@usernamedelta89:matrix.org usernamedelta89 joined the room. 00:34:32
@py6m0e7i:matrix.org py6m0e7i joined the room. 10:01:37
16 Jan 2025
@nxtk:matrix.org nxtk joined the room. 18:15:00
@nxtk:matrix.org nxtk Hi, to encrypt drives during a nixos-anywhere deployment, one must provide a temporary passphrase key file and add it to the LUKS partition in disko as well.
However, setting the key file under boot.initrd.luks.devices."device".keyFile leads to an incorrect initrd crypttab entry, which prevents automatic TPM unlock down the line, since the key file is no longer available after installation. So it seems I need to remove the key-file entries after disko has run and the partitions are mounted, but before the installation starts. Do you have any ideas how to deal with this situation?
18:16:25
@lassulus:lassul.us lassulus there is passwordFile 18:40:33
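A sketch of how disko's LUKS passwordFile option can be used for this, assuming a key file placed at /tmp/disk.key before nixos-anywhere runs; the disk, partition, and mapper names are illustrative:

```nix
{
  # Hypothetical LUKS partition; disko reads the passphrase from
  # passwordFile at format time only, so no keyFile entry has to
  # survive into the installed system's initrd configuration.
  disko.devices.disk.main.content.partitions.luks = {
    size = "100%";
    content = {
      type = "luks";
      name = "crypted";
      passwordFile = "/tmp/disk.key";  # copied over before install
      content = {
        type = "filesystem";
        format = "ext4";
        mountpoint = "/";
      };
    };
  };
}
```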
@nxtk:matrix.org nxtk * epic, missed that 🙏 18:45:01
@zvzg:matrix.org Sávio https://github.com/nix-community/disko/pull/943 binfmt emulation 21:34:23
@zvzg:matrix.org Sávio I tried using QEMU, but it was too slow without KVM 21:35:10
17 Jan 2025
@zvzg:matrix.org Sávio * Is there a way to run commands in the VM after finishing the build but before exiting? I would like to modify some files in the image 00:47:54
@zvzg:matrix.org Sávio system.activationScripts seems to do the trick 02:00:58
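The option mentioned above can be used like this; the script name and body are purely illustrative:

```nix
{
  # Hypothetical activation script; it runs on every activation
  # (including inside the image build), so it should be idempotent.
  system.activationScripts.tweakImage.text = ''
    echo "customized during image build" > /etc/image-note
  '';
}
```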
@rts-kaede:thewired.ch かえでちゃん * is it possible to have the swap inside a LUKS LVM and have working hibernation with disko? 14:15:28
@lassulus:lassul.us lassulus yes 14:16:04
@lassulus:lassul.us lassulus I think you need to configure some kernel parameters for that, but I could be mistaken 14:16:27
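A sketch of swap as an LVM logical volume (the volume group could itself sit on a LUKS device), assuming disko's swap content supports resumeDevice; the volume-group name and size are illustrative:

```nix
{
  # Hypothetical swap LV inside a volume group named "pool".
  # Assumption: resumeDevice = true wires up the resume device
  # needed for hibernation, so no manual kernel parameter is required.
  disko.devices.lvm_vg.pool.lvs.swap = {
    size = "16G";
    content = {
      type = "swap";
      resumeDevice = true;
    };
  };
}
```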
18 Jan 2025
@lanice:matrix.org lanice joined the room. 23:03:28
@lanice:matrix.org lanice

Hi! First time using disko, started out with an example using nixos-anywhere with just one ext4 partition (besides boot), and it worked great!

My question: Is it possible to have just /nix mounted on my ext4 partition, and the rest of root (/, /var, etc.) mounted on a different SSD with ZFS?

23:05:59
@lassulus:lassul.us lassulus yeah, just point / at the other one and /nix at the one you want /nix to be on; you just need to set the mountpoints 23:23:15
@lanice:matrix.org lanice Thanks! 23:28:42
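A sketch of the split layout discussed above, with /nix on an ext4 disk and everything else on a ZFS pool on a second disk; all device, pool, and dataset names are illustrative, and partition tables and pool membership are elided for brevity:

```nix
{
  # Hypothetical two-disk layout: only the mountpoints matter here.
  disko.devices = {
    disk.fast.content.partitions.nix.content = {
      type = "filesystem";
      format = "ext4";
      mountpoint = "/nix";   # only the store lives on this disk
    };
    zpool.zroot = {
      type = "zpool";
      datasets.root = {
        type = "zfs_fs";
        mountpoint = "/";    # root (and /var etc.) on ZFS
      };
    };
  };
}
```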
@lanice:matrix.org lanice One other question: I've seen a few configs that used a post-create hook, something like postCreateHook = "zfs snapshot zroot/local/root@blank";, while other configs don't. What are the implications here? Is it just the initial snapshot that's missing, the "empty" one basically? Do I need that? 23:29:38
19 Jan 2025
@enzime:nixos.dev Enzime people make the empty snapshot for impermanence setups where you wipe your rootfs on every boot 02:38:54
@lanice:matrix.org lanice Ah, right, I've seen that before. That makes sense, thank you! Not doing that for now. 03:00:51
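For reference, the hook from the question typically sits on the root dataset, so an impermanence setup can roll back to the blank snapshot on every boot; the pool and dataset names mirror the quoted snippet but are otherwise illustrative:

```nix
{
  # Hypothetical: snapshot the freshly created, still-empty root
  # dataset right after disko creates it.
  disko.devices.zpool.zroot.datasets."local/root" = {
    type = "zfs_fs";
    mountpoint = "/";
    postCreateHook = "zfs snapshot zroot/local/root@blank";
  };
}
```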
@lanice:matrix.org lanice *

For my HDDs, I have a zpool "data" configured like this:

disko.devices.zpool.data = {
  type = "zpool";
  mode = {
    topology = {
      type = "topology";
      vdev = [ { members = ["hdd24tb_0" "hdd24tb_1" "hdd24tb_2"]; mode = "raidz1"; } ];
    };
  };
  rootFsOptions = {
    atime = "off";
    xattr = "sa";
    compression = "zstd";
  };
  options.ashift = "12";
  datasets = {
    "data" = { type = "zfs_fs"; mountpoint = "/data"; };
    "media" = { type = "zfs_fs"; mountpoint = "/data/media"; };
    "backup" = { type = "zfs_fs"; mountpoint = "/data/backup"; };
  };
};

After finishing the install with nixos-anywhere, when I do zfs list, the output for that pool is:

data         1.41M  43.5T   139K  /data
data/backup  128K   43.5T   128K  /data/backup
data/data    139K   43.5T   139K  /data
data/media   128K   43.5T   128K  /data/media

What confuses me is the double use of the mountpoint /data. It seems both data and data/data are mounted on /data. Why did that happen? I probably did something wrong in my disko config; I'm doing a lot of trial and error and looking at other configs.

03:25:26
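One hedged guess at the cause: ZFS gives a pool's root dataset the default mountpoint /<poolname>, so the pool root "data" lands on /data, and the explicit "data" child dataset then points at the same path. A sketch of one way to avoid the collision, untested:

```nix
{
  # Hypothetical fix: keep the pool root unmounted and let only the
  # child datasets own mountpoints. Assumption: rootFsOptions.mountpoint
  # sets the ZFS mountpoint property on the pool root dataset.
  disko.devices.zpool.data = {
    type = "zpool";
    rootFsOptions.mountpoint = "none";
    datasets = {
      "data"   = { type = "zfs_fs"; mountpoint = "/data"; };
      "media"  = { type = "zfs_fs"; mountpoint = "/data/media"; };
      "backup" = { type = "zfs_fs"; mountpoint = "/data/backup"; };
    };
  };
}
```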


