| 16 Jan 2025 |
| nxtk joined the room. | 18:15:00 |
nxtk | Hi, to encrypt drives during a nixos-anywhere deployment, one must provide a temporary passphrase key file and also add it to the LUKS partition in disko. However, setting the key file under boot.initrd.luks.devices."device".keyFile leads to an incorrect initrd-crypttab entry, which prevents automatic TPM unlock down the line, since the key file is no longer available after installation. So it seems I need to remove the key file entries after disko has run and the partitions are mounted, but before the installation starts. Any ideas how to deal with that situation? | 18:16:25 |
lassulus | there is passwordFile | 18:40:33 |
nxtk | epic, missed that 🙏 | 18:44:53 |
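For reference, a minimal sketch of passwordFile on a disko LUKS partition; the device path and key file location are assumptions, and the behavior note reflects my understanding that passwordFile is only consumed at format time:

```nix
{
  disko.devices.disk.main = {
    type = "disk";
    device = "/dev/nvme0n1"; # assumed device
    content = {
      type = "gpt";
      partitions.luks = {
        size = "100%";
        content = {
          type = "luks";
          name = "crypted";
          # Used only while disko formats/opens the partition; unlike
          # settings.keyFile, it is not written into initrd-crypttab,
          # so it should not break TPM auto-unlock after installation.
          passwordFile = "/tmp/secret.key";
          content = {
            type = "filesystem";
            format = "ext4";
            mountpoint = "/";
          };
        };
      };
    };
  };
}
```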
Sávio | https://github.com/nix-community/disko/pull/943 binfmt emulation | 21:34:23 |
Sávio | I tried using QEMU, but it was too slow without KVM | 21:35:10 |
| 17 Jan 2025 |
Sávio | Is there a way to run commands in the VM after finishing the build but before exiting? I would like to modify some files in the image | 00:47:42 |
Sávio | system.activationScripts seems to do the trick | 02:00:58 |
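A sketch of that approach; the script name and paths here are made up:

```nix
{
  # Activation scripts run as root on every system activation,
  # including the first boot of a freshly built image, so they can
  # be used to create or modify files inside the image.
  system.activationScripts.tweakImage.text = ''
    mkdir -p /var/lib/myapp
    echo "initialized" > /var/lib/myapp/marker
  '';
}
```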
かえでちゃん | is it possible to have the swap inside a luks lvm and have working hibernation with disko? | 14:14:54 |
lassulus | yes | 14:16:04 |
lassulus | I think you need to configure some kernel parameters for that, but I could be mistaken | 14:16:27 |
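One way this is commonly wired up in NixOS, assuming a volume group `pool` with a logical volume `swap` inside the opened LUKS container (names are assumptions):

```nix
{
  # Tell the kernel where to resume from after hibernation.
  boot.resumeDevice = "/dev/pool/swap";
  # Equivalent via kernel parameters:
  # boot.kernelParams = [ "resume=/dev/pool/swap" ];
  swapDevices = [ { device = "/dev/pool/swap"; } ];
}
```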
| 18 Jan 2025 |
| lanice joined the room. | 23:03:28 |
lanice | Hi! First time using disko, started out with an example using nixos-anywhere with just one ext4 partition (besides boot), and it worked great!
My question: Is it possible to have just /nix mounted on my ext4 partition, and the rest of root (/, /var, etc.) mounted on a different SSD with zfs?
| 23:05:59 |
lassulus | yeah, just point / at the other one and /nix at the one you want it on; you just need to set the mountpoint | 23:23:15 |
lanice | Thanks! | 23:28:42 |
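A rough sketch of such a split layout (boot/ESP partition omitted for brevity; device paths and pool name are assumptions):

```nix
{
  disko.devices = {
    disk.data = {
      type = "disk";
      device = "/dev/sda"; # assumed: the ext4 disk
      content = {
        type = "gpt";
        partitions.nix = {
          size = "100%";
          content = {
            type = "filesystem";
            format = "ext4";
            mountpoint = "/nix";
          };
        };
      };
    };
    disk.ssd = {
      type = "disk";
      device = "/dev/sdb"; # assumed: the zfs SSD
      content = {
        type = "gpt";
        partitions.zfs = {
          size = "100%";
          content = { type = "zfs"; pool = "zroot"; };
        };
      };
    };
    zpool.zroot = {
      type = "zpool";
      datasets.root = { type = "zfs_fs"; mountpoint = "/"; };
    };
  };
}
```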
lanice | One other question: I've seen a few configs that used a post-create hook, something like postCreateHook = "zfs snapshot zroot/local/root@blank";, while other configs don't. What are the implications here? Is it just the initial snapshot that's missing, the "empty" one basically? Do I need that? | 23:29:38 |
| 19 Jan 2025 |
Enzime | people make the empty snapshot for impermanence setups where you wipe your rootfs on every boot | 02:38:54 |
lanice | Ah, right, I've seen that before. That makes sense, thank you! Not doing that for now. | 03:00:51 |
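For context, a sketch of how the blank snapshot is typically combined with a rollback on every boot in impermanence setups; pool and dataset names are assumptions:

```nix
{ lib, ... }: {
  disko.devices.zpool.zroot.datasets.root = {
    type = "zfs_fs";
    mountpoint = "/";
    # Snapshot the freshly created, still-empty root dataset.
    postCreateHook = "zfs snapshot zroot/root@blank";
  };
  # Roll the root dataset back to the empty snapshot early in boot,
  # wiping everything that is not on a persistent dataset.
  boot.initrd.postDeviceCommands = lib.mkAfter ''
    zfs rollback -r zroot/root@blank
  '';
}
```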
lanice | For my HDDs, I have a zpool "data" configured like this:
disko.devices.zpool.data = {
  type = "zpool";
  mode = {
    topology = {
      type = "topology";
      vdev = [ { members = [ "hdd24tb_0" "hdd24tb_1" "hdd24tb_2" ]; mode = "raidz1"; } ];
    };
  };
  rootFsOptions = {
    atime = "off";
    xattr = "sa";
    compression = "zstd";
  };
  options.ashift = "12";
  datasets = {
    "data" = { type = "zfs_fs"; mountpoint = "/data"; };
    "media" = { type = "zfs_fs"; mountpoint = "/data/media"; };
    "backup" = { type = "zfs_fs"; mountpoint = "/data/backup"; };
  };
};
After finishing the install with nixos-anywhere, when I do zfs list, the output for that pool is:
data         1.41M  43.5T  139K  /data
data/backup   128K  43.5T  128K  /data/backup
data/data     139K  43.5T  139K  /data
data/media    128K  43.5T  128K  /data/media
What confuses me is the double use of mountpoint /data: it seems both data and data/data are mounted on /data. Why did that happen? I probably did something wrong in my disko config; I'm doing a lot of trial and error and looking at other configs. | 03:21:46 |
lanice | Could it be that I should not declare the first of those datasets at all, "data" with mountpoint "/data"? | 03:34:28 |
Enzime | it looks like the dataset shouldn't be mounted from what you posted, maybe it's something to do with ZFS topologies, I'm not very familiar with them (possibly a bug?) | 04:12:42 |
Enzime | the simplest fix would be to add canmount = "off"; and mountpoint = "none"; to rootFsOptions | 04:13:47 |
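Applied to the pool config above, that fix would look something like this (topology, options, and datasets unchanged):

```nix
disko.devices.zpool.data = {
  type = "zpool";
  rootFsOptions = {
    # Keep the pool's root dataset itself unmounted so only the
    # child datasets (data/data, data/media, data/backup) claim
    # mountpoints, avoiding the double mount on /data.
    canmount = "off";
    mountpoint = "none";
    atime = "off";
    xattr = "sa";
    compression = "zstd";
  };
  # ...rest of the config as before
};
```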