Nix: Cloud Native (260 Members, 58 Servers)
| Message | Time |
|---|---|
| 23 Oct 2025 | |
| I'm also working on easykubenix; it's like kubenix, but easier (:P). It also has features to convert imported resource lists into attribute sets (and back), so you can override items in lists as attrsets in the module system, which is quite handy if you're using a chart or manifest blob that has something you want to change. It generates JSON, YAML, and a deployment script that uses "kluctl" to deploy with pruning to your active context if you don't want to go through GitOps during development. | 12:21:55 |
| Also: would some moderator maybe add some keywords to the channel description? #kubernetes and #terraform are the ones I'd like discoverability for, off the top of my head. | 12:23:12 |
| Hi lillecarl! That sounds pretty cool, I'll check it out. Regarding keywords: there seems to be no moderator available here in the room. It's probably more effective to request that in https://matrix.to/#/#matrix-suggestions:nixos.org | 12:26:15 |
| Hey guys, I don't know if you've seen nix-csi yet. It's a CSI implementation that's intended to mount /nix into Kubernetes containers. It's under active development, but the paths I'm using already work as-is (supplying an expression in volumeAttributes and having nix-csi build it). I have written the code for specifying prebuilt storePaths as well, but haven't started testing it yet. Edit: The cool bit is that it uses one store per host (which doesn't have to be a NixOS node), which is the USP over nix-snapshotter. And the mounted /nix stores are hardlinked views over the shared store, so there's zero storage overhead and you get page-cache sharing, just like nix-snapshotter. | 12:27:21 |
| Frédéric Christ: If you're curious to try out nix-csi, I would happily hold your hand. The deployment docs are still quite sparse. | 12:28:00 |
| I've implemented support for setting storePaths as volumeAttributes now; this will make nix-csi fetch from the cache and do no building at all. Verbose(r) explanation here. | 13:52:36 |
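
To picture the volumeAttributes flow described above, here is a rough sketch of a Pod using an inline CSI volume, written as the kind of Nix attrset that easykubenix-style tooling renders to YAML. The driver name, the `expression` attribute key, and the binary path are assumptions for illustration, not nix-csi's actual interface:

```nix
# Hypothetical sketch, not nix-csi's real API: mount /nix via an inline CSI
# volume and run something from the store the driver built for us.
{
  apiVersion = "v1";
  kind = "Pod";
  metadata.name = "nix-csi-demo";
  spec = {
    containers = [{
      name = "hello";
      image = "busybox";                               # any base image; /nix is mounted in
      command = [ "/nix/store/...-hello/bin/hello" ];  # illustrative store path
      volumeMounts = [{ name = "nix"; mountPath = "/nix"; }];
    }];
    volumes = [{
      name = "nix";
      csi = {
        driver = "nix.csi.example";                    # assumption: real driver name differs
        volumeAttributes.expression = "(import <nixpkgs> { }).hello";
      };
    }];
  };
}
```

The attrset maps one-to-one onto the Kubernetes YAML, so the same shape should work whether you render it through a Nix module system or hand-translate it.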
| nix-csi seems quite interesting! | 21:46:54 |
| https://github.com/Lillecarl/nix-csi/blob/main/python/nix_csi/runbuild.py#L9 love the default node name | 21:49:43 |
| Happy to hear, I'm excited AF2.0 to be honest. I posted the first actual example of using easykubenix + nix-csi in the announcement thread now. | 23:59:52 |
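
The list-to-attrset trick mentioned for easykubenix can be sketched in plain Nix. The helper shape here is illustrative, not easykubenix's API: key each list item by its name so overrides can be merged as attrsets, then convert back to a list.

```nix
let
  # A container list as a chart might emit it; overriding by index is brittle.
  containers = [
    { name = "app"; image = "nginx:1.25"; }
    { name = "sidecar"; image = "busybox:1.36"; }
  ];
  # list -> attrset keyed by .name
  byName = builtins.listToAttrs
    (map (c: { name = c.name; value = c; }) containers);
  # Override one item as an attrset (what the module system would merge for you).
  patched = byName // { app = byName.app // { image = "nginx:1.27"; }; };
in
# attrset -> list again, ready to splice back into the manifest
builtins.attrValues patched
```

Evaluating this with `nix eval` yields the original two containers with only the `app` image bumped, which is the override ergonomics the message above describes.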
| 24 Oct 2025 | |
| (image: shitbox) IRL | 00:01:57 |
| I've also got terragrunix in the early stages. Right now it's missing generating the TF lockfile in a drv (required since TF wants to write the lockfile temporarily in the module dir, but it's read-only); it's going to be terragrunt + terranix, essentially. Reminds me of the time I came into a consulting job where they had 300 Terraform states for one environment; someone somehow misunderstood terragrunt and split essentially every resource into its own terragrunt unit. I quit that job, it was a loser society. | 00:36:51 |
| I don't think Terragrunt is worth the effort when rendering config with Nix anyway; it really, really tries to own tofu more than I'd like it to. What do you all use to manage multiple states and data between them? terranix + some build system and remote_state? | 22:29:40 |
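
As a sketch of the terranix + remote_state direction floated above: terranix renders attrsets into Terraform's JSON configuration syntax, so wiring one state's outputs into another could look roughly like this (bucket, key, and output names are made up for illustration):

```nix
{
  # Read another layer's state; one state per layer, linked via outputs.
  data.terraform_remote_state.network = {
    backend = "s3";
    config = {
      bucket = "example-tf-state";        # assumption: your state bucket
      key = "network/terraform.tfstate";  # assumption: the other layer's key
      region = "eu-west-1";
    };
  };

  # Consume an output from that state in this layer.
  resource.aws_instance.app = {
    ami = "ami-00000000";                 # placeholder
    instance_type = "t3.micro";
    subnet_id = "\${data.terraform_remote_state.network.outputs.subnet_id}";
  };
}
```

The `\${...}` escape keeps the interpolation literal for Terraform rather than letting Nix evaluate it; a build system then just needs to order the layers so producer states apply before consumers.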
| 2 Nov 2025 | |
| Still looking for Kubernetes users to try out nix-csi! It's got an in-cluster cache (ssh-ng) now, and you can reuse CSI pods as your own build cluster. The cache pod maintains an /etc/nix/machines config you can SCP onto your machine, and with some ssh_config you get all builder-labeled nodes accessible from the nix CLI on your machine. The list is always up to date on the cache (it watches nix-csi-node pod events). Works with aarch64-linux and x86_64-linux, so for cross-building it's pretty neat. Still investigating the proper way to trigger cache population within the cluster when doing remote builds. | 17:04:42 |
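
The snippet the message refers to did not survive the export, so here is a generic sketch of what such a distributed-builds setup looks like. Hosts, users, and key paths are made up; the column layout follows Nix's documented machines-file format (URI, systems, SSH key, max-jobs, speed factor, supported features, mandatory features, public host key, with `-` for defaults):

```
# /etc/nix/machines as the cache pod might generate it (illustrative values)
ssh-ng://nix@10.42.0.11 x86_64-linux  /root/.ssh/nix-builder 8 1 kvm - -
ssh-ng://nix@10.42.1.23 aarch64-linux /root/.ssh/nix-builder 8 1 kvm - -

# ~/.ssh/config on your workstation; reaching pod IPs via a jump host is an
# assumption about the network path, adjust to however you reach the cluster
Host 10.42.*
  User nix
  IdentityFile ~/.ssh/nix-builder
  ProxyJump my-cluster-gateway
```

With both files in place, `nix build --builders @/etc/nix/machines` (or the `builders` setting in nix.conf) lets the local CLI farm builds out to whichever architecture matches.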
| @lillecarl:matrix.org: first time I've heard of nix-csi; I will definitely give it a try. It sounds really amazing! | 20:49:02 |
| Erik: It's still ~quite beta~, but I'm happy to provide some hand-holding. The CSI bit works well; the cache bit works well if you hold it right-ish, hehe. There isn't an option to add your own caches and trusted keys currently, so the beaten path is adding your pubkey and pushing to it, or providing expressions in the volumeAttributes. | 20:51:34 |
| And how is a container invoked with nix-csi? | 20:53:53 |