
disko

disko - declarative disk partitioning - https://github.com/nix-community/disko

14 Aug 2025
lassulus (@lassulus:lassul.us)
so something like systemd.services."zfs-import-tank".preStart = config.system.build.formatScript;
17:29:07
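
Spelled out as a full module, that suggestion might look like the sketch below. The pool name tank comes from this thread, and whether disko's format script is safe to re-run on every unit start is an assumption worth verifying.

# Sketch only: run disko's generated format script before the pool-import
# unit starts, so newly declared partitions and datasets get created first.
{ config, ... }:
{
  # formatScript is a store path to an executable script; interpolating it
  # as a preStart line makes systemd run it before the unit's main command.
  systemd.services."zfs-import-tank".preStart =
    "${config.system.build.formatScript}";
}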
x10an14 (@x10an14:matrix.org)

Would any of you have an opinion on which of these two alternatives is better/safer/preferable?

{
  _file = ./auto-create.nix;
  flake.modules.nixos."nas-2024" =
    { config, lib, ... }:
    let
      # https://discourse.nixos.org/t/configure-zfs-filesystems-after-install/48633/2
      zfsDatasets =
        config.disko.devices.zpool
        |> lib.attrsToList
        |> lib.foldl' (
          acc: zpool:
          acc
          ++ (
            zpool.value.datasets
            |> lib.attrValues
            |> lib.filter (dataset: dataset.name != "__root")
            |> lib.map (dataset: {
              zpool = zpool.name;
              inherit (dataset) name mountpoint;
              creationScript = dataset._create;
            })
          )
        ) [ ];
      diskoFormatScript = lib.getExe config.system.build.formatScript;
    in
    {
      # Perform the "create new datasets/zvols" operation
      systemd.services = {
        "zfs-import-tank".preStart = diskoFormatScript;
        # "zfs-import-nvmepool".preStart = diskoFormatScript; # Don't need this yet
      }
      // (
        zfsDatasets
        |> lib.filter (dataset: dataset.name == "doesn't exist") # Disable for now
        |> lib.map (
          dataset:
          lib.nameValuePair "zfs-create-${dataset.zpool}-${lib.replaceString "/" "_" dataset.name}" {
            unitConfig.DefaultDependencies = false;
            requiredBy = [ "local-fs.target" ];
            before = [ "local-fs.target" ];
            after = [
              "zfs-import-${dataset.zpool}.service"
              "zfs-mount.service"
            ];
            unitConfig.ConditionPathIsMountPoint = lib.mkIf (
              dataset.mountpoint != null
            ) "!${dataset.mountpoint}";
            script = dataset.creationScript;
          }
        )
        |> lib.listToAttrs
      );
    };
}
21:01:21
waltmck (@waltmck:matrix.org)

In my opinion, Disko would benefit from natively supporting this kind of thing. I have run into it too, and when tweaking my dataset properties it is a pain to keep them in sync with my Disko config.

One idea is a boolean dataset attribute, datasets.<name>.managed or similar. If it is set, then on activation (and boot) Disko will try to set the dataset properties to match the config.

Balancing idempotency with the desire not to lose data accidentally is tricky: if you delete a dataset from your config, should Disko also delete it from your pool? One idea is to use a custom ZFS user property to identify datasets that are managed by Disko; this would allow deleting datasets which no longer appear in your config, while keeping the ability to manually create unmanaged datasets.

22:53:56
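
A rough sketch of that user-property idea, with entirely hypothetical names: the org.nix-community.disko:managed property, the script name, and the hard-coded dataset list are all invented for illustration, and disko implements none of this today.

# Hypothetical sync script for one pool; nothing here is real disko behavior.
{ pkgs, lib, ... }:
let
  prop = "org.nix-community.disko:managed"; # made-up ZFS user property
  configured = [ "tank/data" "tank/backups" ]; # datasets in the disko config
in
pkgs.writeShellScript "disko-sync-tank" ''
  # tag every configured dataset as disko-managed
  ${lib.concatMapStringsSep "\n" (d: "zfs set '${prop}=true' '${d}'") configured}

  # destroy managed datasets that dropped out of the config; -s local skips
  # inherited property values, and untagged (hand-made) datasets are left alone
  zfs get -r -H -s local -o name,value ${prop} tank | while read -r name value; do
    case " ${toString configured} " in
      *" $name "*) ;; # still configured: keep it
      *) zfs destroy -r "$name" ;;
    esac
  done
''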
waltmck (@waltmck:matrix.org)
Would a PR for this be welcome, or do you consider it out of scope?
22:58:36
15 Aug 2025
@notahacker666:matrix.org
They can't even merge a few lines of code to fix a critical bug for almost a week now, lol.
Do you think that somebody cares about your stuff here?
02:44:18
Sandro 🐧 (@sandro:supersandro.de)
First of all, this is pretty disrespectful.
Second: yeah, the set +x is probably shadowing the exit code.
Third: would I call this critical? Probably not, but it is still a bug that should be fixed.
10:31:42
@notahacker666:matrix.org
Respect must be earned. And I think a bricked hard drive because of a typo is critical enough, isn't it?
11:39:08
