disko | 360 Members | 93 Servers
disko - declarative disk partitioning - https://github.com/nix-community/disko
| Message | Time |
|---|---|
| 10 Aug 2025 | |
| Hi, I just noticed that with LUKS, disko doesn't check whether the entered passphrases match; it silently uses the first one without telling you that the two entries differ. Maybe I've missed something, but is this a bug or a feature? nix run github:nix-community/disko/latest -- --mode zap_create_mount disko.cnf | 07:53:35 |
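For context, a disko LUKS layout of the kind being discussed looks roughly like the sketch below. The device path, partition names and mountpoint are placeholders, and passwordFile is shown only because pointing disko at a key file is a common way to avoid the interactive passphrase prompt (and with it the missing confirmation check) entirely.

```nix
# Rough sketch of a disko LUKS layout; device, names and mountpoint are
# placeholders. Without passwordFile, disko prompts interactively for the
# passphrase, which is where the missing "repeat passphrase" check bites.
{
  disko.devices.disk.main = {
    type = "disk";
    device = "/dev/nvme0n1";
    content = {
      type = "gpt";
      partitions = {
        esp = {
          size = "512M";
          type = "EF00";
          content = {
            type = "filesystem";
            format = "vfat";
            mountpoint = "/boot";
          };
        };
        luks = {
          size = "100%";
          content = {
            type = "luks";
            name = "cryptroot";
            passwordFile = "/tmp/disk.key"; # read once at format time, skips the prompt
            content = {
              type = "filesystem";
              format = "ext4";
              mountpoint = "/";
            };
          };
        };
      };
    };
  };
}
```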
| hmm, sounds like a bug | 08:16:21 |
| Yeah, I opened an issue on GitHub (https://github.com/nix-community/disko/issues/1102). Can anyone fix this? | 10:15:59 |
| I found a way to solve this problem, but I don't do much programming. | 16:55:33 |
| I posted it in the issue I had created before. | 16:56:01 |
| Can anyone check and fix it? | 16:56:28 |
| Pull request created: https://github.com/nix-community/disko/pull/1103 | 17:29:44 |
| I don't know much about programming and I'm still on a LiveCD right now. So can anyone help me merge (if that's what it's called) this pull request? | 17:33:16 |
| Can I mount SFTP and SMB in disko? | 21:36:08 |
| Could be critical in some scenarios, lassulus should merge it asap | 23:53:04 |
| 11 Aug 2025 | |
| Or can filesystems still be used? | 14:48:08 |
| In reply to @tumble1999:matrix.org: Yes, you can still use that. Disko is more about the initial partitioning than about everything relating to file systems, especially network shares. | 16:59:58 |
| In reply to @ginkogruen:matrix.org: How come it can do tmpfs though? | 17:39:44 |
| In reply to @tumble1999:matrix.org: I don't see the conflict with that? You asked if you can still use the fileSystems options. And you can; I do NFS shares through that myself. | 19:03:36 |
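To illustrate the answer above: network shares stay in the ordinary NixOS fileSystems options next to whatever disko manages. The server names, export paths, mountpoints and credentials file below are invented for the example.

```nix
# Network shares live in the stock NixOS fileSystems options; disko only
# describes the local disks. Server, export and mountpoint are placeholders.
{
  fileSystems."/mnt/media" = {
    device = "nas.example.org:/export/media";
    fsType = "nfs";
    options = [ "noauto" "x-systemd.automount" "x-systemd.idle-timeout=600" ];
  };

  # A CIFS/SMB share works the same way (credentials file is a placeholder).
  fileSystems."/mnt/share" = {
    device = "//nas.example.org/share";
    fsType = "cifs";
    options = [ "credentials=/etc/nixos/smb-secrets" "x-systemd.automount" ];
  };
}
```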
| 14 Aug 2025 | |
| I want to leverage disko to create datasets for me post-install/initial set-up. I found this suggestion on Discourse, and that works for those who've got all their disko config in one file (or one file that imports all/any others) per machine. I don't have my disko config conveniently collated into a variable like that, so I'm wondering how I can achieve something like the below with disko (the below gives | |
| PS: I make heavy use of https://flake.parts/options/flake-parts-modules.html, which allow me to split configuration like shown above easily across files | 17:17:19 |
| just get config.system.build.format instead of running the cli | 17:22:13 |
| or formatScript | 17:24:07 |
| so something like `systemd.services."zfs-import-tank".preStart = config.system.build.formatScript;` | 17:29:07 |
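Spelled out as a module, the suggestion above would look roughly like the sketch below. The pool name "tank" and the idea of hooking the format script into the import unit's preStart are taken directly from the message above; this is an ad-hoc pattern rather than something the disko docs prescribe, and it assumes formatScript evaluates to shell lines usable in preStart.

```nix
# Minimal sketch of the suggestion above: run disko's generated format script
# before the ZFS import unit so datasets missing from the pool get created on
# boot. "tank" is the pool name from the example; adjust to your own setup.
{ config, ... }:
{
  systemd.services."zfs-import-tank".preStart =
    config.system.build.formatScript;
}
```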
| Would any of you have an opinion on which of these two alternatives is better/safer/preferable? | 21:01:21 |
| In my opinion Disko would benefit from natively supporting this kind of thing; I have run into this too, and when tweaking my dataset properties it is a pain to keep them in sync with my Disko config. One idea is to have a boolean dataset attribute. Balancing idempotency with the desire to not lose data accidentally is tricky: if you delete a dataset from your config, should Disko delete the dataset from your pool? One idea is to use a custom ZFS user property to identify datasets which are being managed by Disko; this would allow deleting datasets which no longer appear in your config, while keeping the ability for people to manually create unmanaged datasets | 22:54:31 |
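As a side note, such a marker can already be attached by hand through the per-dataset options in a disko config, which map to ZFS properties. The property name below is invented purely for illustration; disko does not currently set or read any such property.

```nix
# Purely illustrative: tag a disko-defined dataset with a custom ZFS user
# property. "nix-community.disko:managed" is a made-up name; disko does not
# currently set or act on it, this only shows where such a marker could live.
{
  disko.devices.zpool.tank.datasets."data" = {
    type = "zfs_fs";
    mountpoint = "/data";
    options = {
      "nix-community.disko:managed" = "true"; # hypothetical marker property
    };
  };
}
```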