!oNSIfazDqEcwhcOjSL:matrix.org

disko

360 Members
disko - declarative disk partitioning - https://github.com/nix-community/disko
93 Servers



8 Aug 2025
0x4A6F (@0x4a6f:nixos.dev) joined the room. (07:01:30)
9 Aug 2025
seapat (@seapat:matrix.org) set a profile picture. (14:23:07)
10 Aug 2025
@notahacker666:matrix.org joined the room. (07:44:34)
@notahacker666:matrix.org

Hi, I just noticed that with LUKS, disko isn't checking whether the two entered passwords match; it silently uses the first one, without notifying you that they differ. Maybe I've missed something, but is it a bug or a feature?

nix run github:nix-community/disko/latest -- --mode zap_create_mount disko.cnf

        luks = {
          size = "100%";
          content = {
            type = "luks";
            name = "crypted";
            settings = {
              allowDiscards = true;
            };
          };
        };
(07:53:35)
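Until the reported mismatch issue is fixed, one way to sidestep the unverified interactive prompt is to feed disko the passphrase from a file. A minimal sketch, assuming disko's `passwordFile` option and a hypothetical layout (`disk.main` with an ext4 root); the disk name and key path are illustrative only:

```nix
{
  disko.devices.disk.main.content.partitions.luks = {
    size = "100%";
    content = {
      type = "luks";
      name = "crypted";
      # disko reads the passphrase from this file during formatting,
      # so there is no interactive double prompt that could mismatch.
      passwordFile = "/tmp/disk.key";
      settings.allowDiscards = true;
      content = {
        type = "filesystem";
        format = "ext4";
        mountpoint = "/";
      };
    };
  };
}
```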
lassulus (@lassulus:lassul.us): hmm, sounds like a bug (08:16:21)
Eason20000 (@eason20000:matrix.org) joined the room. (10:15:14)
Eason20000 (@eason20000:matrix.org): Yeah, I opened an issue on GitHub (https://github.com/nix-community/disko/issues/1102), can anyone fix this? (10:15:59)
matthewcroughan (@matthewcroughan:defenestrate.it) changed their display name from matthewcroughan to matthewcroughan @ WHY2025 (DECT: 8793). (11:34:28)
Eason20000 (@eason20000:matrix.org): I found a way to solve this problem, but I do little programming. (16:55:33)
Eason20000 (@eason20000:matrix.org): I posted it in the issue I had created before. (16:56:01)
Eason20000 (@eason20000:matrix.org): Can anyone check and fix it? (16:56:28)
Eason20000 (@eason20000:matrix.org): Pull request created: https://github.com/nix-community/disko/pull/1103 (17:29:44)
Eason20000 (@eason20000:matrix.org): I don't know much about programming and I'm still on a LiveCD right now. Can anyone help me merge (if that's what it's called) this pull request? (17:33:16)
Tumble (@tumble1999:matrix.org): Can I mount SFTP and SMB in disko? (21:36:08)
@notahacker666:matrix.org: Could be critical in some scenarios; lassulus should merge it ASAP. (23:53:04)
11 Aug 2025
Spaenny (@spaenny:tchncs.de) changed their display name from Philipp to Spaenny. (14:47:01)
Tumble (@tumble1999:matrix.org): Or can fileSystems still be used? (14:48:08)
ginkogruen (@ginkogruen:matrix.org)
In reply to @tumble1999:matrix.org
Or can fileSystems still be used?
Yes, you can still use that. Disko is more about the initial partitioning than about everything relating to file systems, especially network shares.
16:59:58
Tumble (@tumble1999:matrix.org)
In reply to @ginkogruen:matrix.org
Yes, you can still use that. Disko is more about the initial partitioning than about everything relating to file systems, especially network shares.
How come it can do tmpfs, though?
17:39:44
ginkogruen (@ginkogruen:matrix.org)
In reply to @tumble1999:matrix.org
How come it can do tmpfs, though?
I don't see the conflict there. You asked if you can still use the fileSystems options, and you can; I use NFS shares through that myself.
19:03:36
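To illustrate the split discussed above: network shares go through the regular NixOS fileSystems option, while tmpfs can be declared in disko via a nodev entry. A minimal sketch; the server name, export path, and mount points are hypothetical:

```nix
{
  # Network shares: plain NixOS fileSystems, outside of disko.
  fileSystems."/mnt/media" = {
    device = "nas.example:/export/media"; # hypothetical NFS server
    fsType = "nfs";
    options = [ "noauto" "x-systemd.automount" ];
  };

  # tmpfs: disko handles this with a "nodev" entry, since there is
  # no physical device to partition.
  disko.devices.nodev."/tmp" = {
    fsType = "tmpfs";
    mountOptions = [ "size=2G" "mode=1777" ];
  };
}
```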
13 Aug 2025
rocky ((Ξ»πŸ’.πŸ’)πŸ’) (she/they; ask before DM please) (@pink_rocky:tchncs.de) changed their profile picture. (00:11:25)
matthewcroughan (@matthewcroughan:defenestrate.it) changed their display name from matthewcroughan @ WHY2025 (DECT: 8793) to matthewcroughan. (17:21:37)
14 Aug 2025
x10an14 (@x10an14:matrix.org)

I want to leverage disko to create datasets for me post-install/initial set-up. I found this suggestion on Discourse, and that works for those who have all their disko config in one file (or one file that imports all the others) per machine.
https://discourse.nixos.org/t/add-dataset-to-pool-after-creation-in-disko/62244/2

I don't have my disko config conveniently collated into a variable like that, so I'm wondering how I can achieve something like the below with disko (the below fails with "error: stack overflow; max-call-depth exceeded")?

{
  _file = ./auto-create.nix;
  flake.modules.nixos."nas-2024" =
    {
      config,
      pkgs,
      lib,
      ...
    }:
    let
      preStart =
        # bash
        ''${lib.getExe pkgs.disko} --mode format ${
          pkgs.writeText "disko.nix" (lib.generators.toPretty { } { inherit (config) disko; })
        }'';
    in
    {
      # Perform the "create new datasets/zvols" operation
      systemd.services."zfs-import-tank".preStart = preStart;
      # systemd.services."zfs-import-nvmepool".preStart = preStart; # Don't need this yet
    };
}

PS: I make heavy use of https://flake.parts/options/flake-parts-modules.html, which allows me to split configuration like the above easily across files.

17:17:19
lassulus (@lassulus:lassul.us): just get config.system.build.format instead of running the cli (17:22:13)
lassulus (@lassulus:lassul.us): or formatScript (17:24:07)
lassulus (@lassulus:lassul.us): so something like systemd.services."zfs-import-tank".preStart = config.system.build.formatScript; (17:29:07)
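A sketch of lassulus' suggestion as a module fragment. Hedged: whether formatScript evaluates to a script string or a derivation may depend on the disko version, so the interpolation below is an assumption; adjust with lib.getExe if it is a derivation:

```nix
{ config, lib, ... }:
{
  # Run disko's generated format script before importing the pool, so newly
  # declared datasets/zvols get created. formatScript is built from the
  # already-evaluated disko config, which avoids re-serializing config.disko
  # (the source of the stack-overflow error above).
  systemd.services."zfs-import-tank".preStart =
    # If formatScript is a derivation rather than a string, use:
    #   lib.getExe config.system.build.formatScript
    "${config.system.build.formatScript}";
}
```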
x10an14 (@x10an14:matrix.org)

Would any of you have an opinion on which of these two alternatives is better/safer/preferable?

{
  _file = ./auto-create.nix;
  flake.modules.nixos."nas-2024" =
    { config, lib, ... }:
    let
      # https://discourse.nixos.org/t/configure-zfs-filesystems-after-install/48633/2
      zfsDatasets =
        config.disko.devices.zpool
        |> lib.attrsToList
        |> lib.foldl' (
          acc: zpool:
          acc
          ++ (
            zpool.value.datasets
            |> lib.attrValues
            |> lib.filter (dataset: dataset.name != "__root")
            |> lib.map (dataset: {
              zpool = zpool.name;
              inherit (dataset) name mountpoint;
              creationScript = dataset._create;
            })
          )
        ) [ ];
      diskoFormatScript = lib.getExe config.system.build.formatScript;
    in
    {
      # Perform the "create new datasets/zvols" operation
      systemd.services = {
        "zfs-import-tank".preStart = diskoFormatScript;
        # "zfs-import-nvmepool".preStart = diskoFormatScript; # Don't need this yet
      }
      // (
        zfsDatasets
        |> lib.filter (dataset: dataset.name == "doesn't exist") # Disable for now
        |> lib.map (
          dataset:
          lib.nameValuePair "zfs-create-${dataset.zpool}-${lib.replaceString "/" "_" dataset.name}" {
            unitConfig.DefaultDependencies = false;
            requiredBy = [ "local-fs.target" ];
            before = [ "local-fs.target" ];
            after = [
              "zfs-import-${dataset.zpool}.service"
              "zfs-mount.service"
            ];
            unitConfig.ConditionPathIsMountPoint = lib.mkIf (
              dataset.mountpoint != null
            ) "!${dataset.mountpoint}";
            script = dataset.creationScript;
          }
        )
        |> lib.listToAttrs
      );
    };
}
21:01:21
waltmck (@waltmck:matrix.org)

In my opinion, Disko would benefit from natively supporting this kind of thing; I have run into it too, and when tweaking my dataset properties it is a pain to keep them in sync with my Disko config.

One idea is a boolean dataset attribute, datasets.<name>.followConfig or similar. If it is set, then on activation (and boot) Disko will try to set the dataset properties to match the config.

Balancing idempotency with the desire not to lose data accidentally is tricky: if you delete a dataset from your config, should Disko delete the dataset from your pool? One idea is to use a custom ZFS user property to identify datasets that are managed by Disko; this would allow deleting datasets which no longer appear in your config, while keeping the ability for people to manually create unmanaged datasets.

22:54:31
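A sketch of what the proposed option could look like. Everything here is hypothetical: followConfig does not exist in disko, and the pool/dataset names and the user-property name are made up for illustration:

```nix
{
  disko.devices.zpool.tank.datasets."data/photos" = {
    type = "zfs_fs";
    mountpoint = "/photos";
    options."com.sun:auto-snapshot" = "true";
    # Hypothetical option from the discussion above: on activation/boot,
    # disko would re-apply the properties here to the existing dataset.
    # followConfig = true;
    # Under the hood it could tag managed datasets with a ZFS user property,
    # e.g. `zfs set org.nix-community.disko:managed=true tank/data/photos`,
    # so only tagged datasets are eligible for automatic deletion when they
    # disappear from the config.
  };
}
```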



Room Version: 10