| 8 Jun 2025 |
rocky ((λ💝.💝)💐) (she/they; ask before DM please) | No, I’m using the command I saw in the README:
sudo nix --experimental-features "nix-command flakes" run github:nix-community/disko/latest -- --mode destroy,format,mount ./etc/nixos/disk-config.nix, which I run inside /mnt with my configurations | 00:26:16 |
rocky ((λ💝.💝)💐) (she/they; ask before DM please) | I’ll work on that pastebin for you. | 01:37:59 |
@musicmatze:beyermatthi.as | Hello again. If I use disko to set up my ZFS datasets, but do not mount them during boot (because they are encrypted and the machine must boot unattended), how can I get the path where ZFS would mount the dataset if I manually run zfs mount -a (for example)?
Right now all I can see (in nix repl) is outputs.nixosConfigurations.myHost.config.disko.devices.zpool.zroot.datasets."myDataSet"._name, which looks like something internal that I should not base my configuration on, should I? Because the dataset is not marked for automatic mounting, the attribute outputs.nixosConfigurations.myHost.config.disko.devices.zpool.zroot.datasets."myDataSet".mountpoint is null.
| 08:55:41 |
phaer | If you don't set a static mountpoint in your disko config, I don't think it's possible to get this at eval time in the general case. I think if you leave it empty the default should just be "/${dataset.name}"?
At runtime you could just do "zfs get mountpoint <dataset>" | 12:31:07 |
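A minimal sketch of the first suggestion, using the pool and dataset names from the question and a made-up path; if the mountpoint is set statically in the disko config, other parts of the configuration can read it at eval time instead of relying on the internal _name attribute (note that this also marks the dataset for mounting, which may not be wanted here):

{
  disko.devices.zpool.zroot.datasets."myDataSet" = {
    type = "zfs_fs";
    # The path is now known at eval time as
    # config.disko.devices.zpool.zroot.datasets."myDataSet".mountpoint
    mountpoint = "/srv/myDataSet";  # hypothetical path
  };
}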
@musicmatze:beyermatthi.as | From what I found during research, I should use options.mountpoint = "legacy"; and then declare my fileSystems entries for the filesystems that should be mounted automatically; for the others I must find a better way, like hard-coding stuff maybe | 13:07:10 |
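A minimal sketch of that approach, with hypothetical pool, dataset, and path names; the dataset gets options.mountpoint = "legacy" in the disko config so ZFS itself never auto-mounts it, and the datasets that should be mounted at boot get an ordinary fileSystems entry:

{
  # disko side: a dataset that only knows "legacy" mounting.
  disko.devices.zpool.zroot.datasets."safe/data" = {
    type = "zfs_fs";
    options.mountpoint = "legacy";
  };

  # NixOS side: explicit mount for the datasets that should be mounted
  # automatically; datasets without such an entry stay unmounted.
  fileSystems."/srv/data" = {
    device = "zroot/safe/data";
    fsType = "zfs";
  };
}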
@musicmatze:beyermatthi.as | So... Can I just write my own systemd mount unit for datasets I do not want to mount automatically? And if yes, how can I tell systemd that the dataset encryption password has to be retrieved from the user? I suppose there are ways...
Because with a unit, I can make other units depend on it (like starting navidrome only if the dataset with the music is mounted) | 13:13:34 |
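One possible sketch of that idea (not something confirmed in the thread), with hypothetical names throughout; the mount unit has no wantedBy, so nothing pulls it in at boot, and it is started on demand after the key has been loaded (for example with zfs load-key, which prompts for the passphrase when keylocation=prompt); navidrome is then ordered after the mount:

{
  # Mount unit for a legacy-mountpoint dataset; without wantedBy it is
  # only started on demand, e.g. `systemctl start srv-music.mount`
  # after the key has been loaded.
  systemd.mounts = [{
    what = "zroot/music";
    where = "/srv/music";
    type = "zfs";
    options = "defaults,nofail";
  }];

  # Navidrome only starts once the music dataset is mounted; systemd
  # derives the unit name "srv-music.mount" from the mount path.
  systemd.services.navidrome = {
    requires = [ "srv-music.mount" ];
    after = [ "srv-music.mount" ];
  };
}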
| 9 Jun 2025 |
schuelermine | Is there a way to configure a swap file using Disko? | 20:14:09 |
schuelermine | specifically, a swap file on an ext4 filesystem inside a LUKS partition | 20:14:21 |
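As far as I know disko handles swap partitions but not swap files on ext4; a hedged sketch of one common workaround is to let disko format and mount the ext4-on-LUKS filesystem as usual and have NixOS create the swap file itself via swapDevices (path and size here are made up):

{
  swapDevices = [{
    device = "/var/lib/swapfile";  # hypothetical path on the ext4-on-LUKS root
    size = 8 * 1024;               # megabytes; the file is created if it does not exist
  }];
}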
| 10 Jun 2025 |
caraiiwala | I'm new to ZFS and trying to convert my RAID setup to Disko. The deployment with nixos-anywhere was successful, but once rebooted, the system failed to boot. Here is my disko configuration:
{
  disko.devices = {
    disk = {
      root = {
        type = "disk";
        device = "/dev/disk/by-id/...";
        content.type = "gpt";
        content.partitions = {
          ESP = {
            type = "EF00";
            size = "64M";
            content = {
              type = "filesystem";
              format = "vfat";
              mountpoint = "/boot";
            };
          };
          zfs = {
            size = "100%";
            content.type = "zfs";
            content.pool = "root";
          };
        };
      };
      raid-1 = {
        type = "disk";
        device = "/dev/disk/by-id/...";
        content.type = "gpt";
        content.partitions.zfs = {
          size = "100%";
          content.type = "zfs";
          content.pool = "raid";
        };
      };
      raid-2 = {
        type = "disk";
        device = "/dev/disk/by-id/...";
        content.type = "gpt";
        content.partitions.zfs = {
          size = "100%";
          content.type = "zfs";
          content.pool = "raid";
        };
      };
      raid-3 = {
        type = "disk";
        device = "/dev/disk/by-id/...";
        content.type = "gpt";
        content.partitions.zfs = {
          size = "100%";
          content.type = "zfs";
          content.pool = "raid";
        };
      };
    };
    zpool = {
      root = {
        type = "zpool";
        rootFsOptions.mountpoint = "none";
        datasets = {
          root.type = "zfs_fs";
          root.mountpoint = "/";
          home.type = "zfs_fs";
          home.mountpoint = "/home";
        };
      };
      raid = {
        type = "zpool";
        rootFsOptions.mountpoint = "none";
        mode.topology.type = "topology";
        mode.topology.vdev = [
          {
            mode = "raidz1";
            members = ["raid-1" "raid-2" "raid-3"];
          }
        ];
        datasets.raid = {
          type = "zfs_fs";
          mountpoint = "/mnt/raid";
        };
      };
    };
  };
}
| 00:45:00 |
shelvacu | you need to either use the dataset option mountpoint=legacy and configure the mountpoints with the usual fstab (which I recommend for the root fs; "legacy" is a misnomer), or tell ZFS to run an import in the initrd | 01:20:28 |
shelvacu | https://search.nixos.org/options?channel=25.05&show=boot.zfs.extraPools&from=0&size=50&sort=relevance&type=packages&query=boot+zfs | 01:21:45 |
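A minimal sketch of the second option, using the pool name from the config above; boot.zfs.extraPools makes NixOS import pools at boot that no fileSystems entry already forces to be imported:

{
  # Import the data pool at boot even though nothing in fstab refers to it;
  # "raid" is the pool name from the posted config.
  boot.zfs.extraPools = [ "raid" ];
}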
caraiiwala | Thanks for responding. I actually tried booting it with an even simpler non-ZFS config and I'm having the same issue:
disko.devices = {
  disk = {
    root = {
      type = "disk";
      device = "/dev/disk/by-id/scsi-36b82a720cf60ce002fd94d2e2991b17e";
      content.type = "gpt";
      content.partitions = {
        ESP = {
          type = "EF00";
          size = "64M";
          content = {
            type = "filesystem";
            format = "vfat";
            mountpoint = "/boot";
            mountOptions = ["umask=077"];
          };
        };
        root = {
          size = "100%";
          content.type = "filesystem";
          content.format = "ext4";
          content.mountpoint = "/";
        };
      };
    };
  };
};
| 01:28:56 |
caraiiwala | So now I'm very confused | 01:29:13 |
caraiiwala | Well I've just tried an identical simple config on another system and it worked fine | 02:03:10 |
shelvacu | an easy check is to look at /etc/fstab in the final system and make sure it looks right | 02:03:16 |
shelvacu | In reply to @caraiiwala:beeper.com ("Well I've just tried an identical simple config on another system and it worked fine"): ah, then likely not a disko issue | 02:03:32 |
caraiiwala | So the main difference between these two systems is that the one this worked on has a hardware RAID controller that supports JBOD. The problematic one doesn't and so in an effort to try out ZFS I experimented with RAID0 passthrough. | 02:06:44 |
caraiiwala | Going to revert the RAID config and see if that fixes it | 02:07:28 |