
disko

356 Members
disko - declarative disk partitioning - https://github.com/nix-community/disko
89 Servers



10 Aug 2025
@eason20000:matrix.orgEason20000Can anyone check and fix it?16:56:28
@eason20000:matrix.orgEason20000Pull request created: https://github.com/nix-community/disko/pull/110317:29:44
@eason20000:matrix.orgEason20000I don't know much about programming and I'm still on a LiveCD right now. So is there anyone who can help me merge (if that's what it's called) this pull request?17:33:16
@tumble1999:matrix.orgTumbleCan I mount sftp and SMB in disko?21:36:08
@notahacker666:matrix.org@notahacker666:matrix.org Could be critical in some scenarios, lassulus should merge it asap 23:53:04
11 Aug 2025
@spaenny:tchncs.deSpaenny changed their display name from Philipp to Spaenny.14:47:01
@tumble1999:matrix.orgTumble Or can filesystems still be used 14:48:08
@ginkogruen:matrix.orgginkogruen
In reply to @tumble1999:matrix.org
Or can filesystems still be used
Yes you can still use that. Disko is more about the initial partitioning than everything relating to file systems especially concerning network shares.
16:59:58
@tumble1999:matrix.orgTumble
In reply to @ginkogruen:matrix.org
Yes you can still use that. Disko is more about the initial partitioning than everything relating to file systems especially concerning network shares.
How come it can do tmpfs though?
17:39:44
@ginkogruen:matrix.orgginkogruen
In reply to @tumble1999:matrix.org
How come it can do tmpfs though?
I don’t see the conflict with that? You asked if you can still use the fileSystems options. And you can, I do NFS shares through that myself.
19:03:36
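The `fileSystems` route ginkogruen describes can be sketched like this; the server hostname and export path below are made-up placeholders, not anything from the discussion:

```nix
{
  # NFS share mounted through the ordinary NixOS fileSystems options;
  # disko is only involved in the initial local partitioning.
  fileSystems."/mnt/media" = {
    device = "nas.example.local:/export/media"; # hypothetical host/export
    fsType = "nfs";
    options = [ "noatime" "x-systemd.automount" ];
  };
}
```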
13 Aug 2025
@pink_rocky:tchncs.derocky ((Ξ»πŸ’.πŸ’)πŸ’) (she/they; ask before DM please) changed their profile picture.00:11:25
@matthewcroughan:defenestrate.itmatthewcroughan changed their display name from matthewcroughan @ WHY2025 (DECT: 8793) to matthewcroughan.17:21:37
14 Aug 2025
@x10an14:matrix.orgx10an14

I want to leverage disko to create datasets for me post-install/initial set-up. I found this suggestion on discourse, and that works for those who've got all their disko config in 1x file (or 1x file that imports all/any others) per machine.
https://discourse.nixos.org/t/add-dataset-to-pool-after-creation-in-disko/62244/2

I don't have my disko config conveniently collated into a variable like that, so I'm wondering how I can achieve something like the below w/disko (the below fails with an error: stack overflow; max-call-depth exceeded)?

{
  _file = ./auto-create.nix;
  flake.modules.nixos."nas-2024" =
    {
      config,
      pkgs,
      lib,
      ...
    }:
    let
      preStart =
        # bash
        ''${lib.getExe pkgs.disko} --mode format ${
          pkgs.writeText "disko.nix" (lib.generators.toPretty { } { inherit (config) disko; })
        }'';
    in
    {
      # Perform the "create new datasets/zvols" operation
      systemd.services."zfs-import-tank".preStart = preStart;
      # systemd.services."zfs-import-nvmepool".preStart = preStart; # Don't need this yet
    };
}

PS: I make heavy use of https://flake.parts/options/flake-parts-modules.html, which allows me to easily split configuration like the above across files

17:17:19
@lassulus:lassul.uslassulusjust get config.system.build.format instead of running the cli17:22:13
@lassulus:lassul.uslassulusor formatScript17:24:07
@lassulus:lassul.uslassulus so something like systemd.services."zfs-import-tank".preStart = config.system.build.formatScript; 17:29:07
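As a sketch, lassulus's suggestion wires the script disko already builds into the service, instead of serializing `config.disko` back to a file and re-running the CLI (the service name is taken from the question above; this is a minimal module, not a tested configuration):

```nix
{ config, lib, ... }:
{
  # formatScript is exposed by disko's NixOS module under
  # config.system.build; reusing it avoids the toPretty round-trip
  # through a generated disko.nix file.
  systemd.services."zfs-import-tank".preStart =
    lib.getExe config.system.build.formatScript;
}
```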
@x10an14:matrix.orgx10an14

Would any of you have any opinion on which of these two alternatives are better/safer/preferable?

{
  _file = ./auto-create.nix;
  flake.modules.nixos."nas-2024" =
    { config, lib, ... }:
    let
      # https://discourse.nixos.org/t/configure-zfs-filesystems-after-install/48633/2
      zfsDatasets =
        config.disko.devices.zpool
        |> lib.attrsToList
        |> lib.foldl' (
          acc: zpool:
          acc
          ++ (
            zpool.value.datasets
            |> lib.attrValues
            |> lib.filter (dataset: dataset.name != "__root")
            |> lib.map (dataset: {
              zpool = zpool.name;
              inherit (dataset) name mountpoint;
              creationScript = dataset._create;
            })
          )
        ) [ ];
      diskoFormatScript = lib.getExe config.system.build.formatScript;
    in
    {
      # Perform the "create new datasets/zvols" operation
      systemd.services = {
        "zfs-import-tank".preStart = diskoFormatScript;
        # "zfs-import-nvmepool".preStart = diskoFormatScript; # Don't need this yet
      }
      // (
        zfsDatasets
        |> lib.filter (dataset: dataset.name == "doesn't exist") # Disable for now
        |> lib.map (
          dataset:
          lib.nameValuePair "zfs-create-${dataset.zpool}-${lib.replaceString "/" "_" dataset.name}" {
            unitConfig.DefaultDependencies = false;
            requiredBy = [ "local-fs.target" ];
            before = [ "local-fs.target" ];
            after = [
              "zfs-import-${dataset.zpool}.service"
              "zfs-mount.service"
            ];
            unitConfig.ConditionPathIsMountPoint = lib.mkIf (
              dataset.mountpoint != null
            ) "!${dataset.mountpoint}";
            script = dataset.creationScript;
          }
        )
        |> lib.listToAttrs
      );
    };
}
21:01:21
@waltmck:matrix.orgwaltmck

In my opinion Disko would benefit from natively supporting this kind of thing---I have run into this too, and when tweaking my dataset properties it is a pain to try to keep them in sync with my Disko config.

One idea is to have a boolean dataset attribute datasets.<name>.managed or similar. If this is set, then on activation (and boot) Disko will try to set the dataset properties to match the config.

Balancing idempotency with the desire to not lose data accidentally is tricky---if you delete a dataset from your config, should Disko delete the dataset from your pool? One idea is to use a custom ZFS user property to identify datasets which are being managed by Disko; this would allow deleting datasets which no longer appear in your config, while keeping the ability for people to manually create unmanaged datasets

22:55:19
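What waltmck proposes could look roughly like this on the config side. The `managed` flag and the `nix:managed-by` user property are hypothetical names for illustration only; nothing here exists in disko today:

```nix
{
  disko.devices.zpool.tank.datasets."data" = {
    type = "zfs_fs";
    mountpoint = "/data";
    # Hypothetical flag: on activation (and boot), re-apply properties
    # from config and allow removal of datasets carrying the marker below.
    managed = true;
    # Custom ZFS user property identifying disko-managed datasets, so
    # manually created, unmanaged datasets are never touched.
    options."nix:managed-by" = "disko";
  };
}
```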
@waltmck:matrix.orgwaltmckWould a PR for this be welcome, or do you consider it out of scope?22:58:36
15 Aug 2025
@notahacker666:matrix.org@notahacker666:matrix.org They can't even merge a few lines of code to fix a critical bug for almost a week now, lol
Do you think that somebody cares about your stuff here?
02:44:18
@sandro:supersandro.deSandro 🐧 First of all this is pretty disrespectful.
Second: yeah, the set +x is probably shadowing the exit code.
Third: would I call this critical? Probably not but it is still a bug which should be fixed
10:31:42
@notahacker666:matrix.org@notahacker666:matrix.orgRespect must be deserved. And I think a bricked hard drive because of a typo is critical enough, isn't it?11:39:08
@magic_rb:matrix.redalder.orgmagic_rbThe disko people deserve your respect just for writing this and making it free software. Feel free to fork or start from scratch if you don't like how the project is run. But don't attack people11:41:15
@magic_rb:matrix.redalder.orgmagic_rbI do get you're angry, you lost a disk, but you won't get the fix merged by being an ass. (As someone who still struggles with anger, I get ya)11:47:29
@notahacker666:matrix.org@notahacker666:matrix.orgLuckily, I'm not so short-sighted as to rely on amateur software. But here's a fact: a random guy fixed this bug for the maintainers almost a week ago, and his commit wasn't even merged. When free and open source software can brick your important data or let bad guys steal it, AND the devs don't care at all, then it's not good and useful software anymore. Are you guys also fixing RCEs like that?12:00:32
@lassulus:lassul.uslassulusRCEs will usually be fixed faster, but if you rely on things it's always good to have your own fork to rely on12:01:59
@lassulus:lassul.uslassulusluckily this is quite easy with nix and OSS software12:02:20
