Message | Time |
---|---|
23 Aug 2025 | ||
For home/personal use --impure should be the default IMO. You can use builtins.getEnv to read environment variables and you can read files anywhere on the filesystem, not just the sacred repo. (Non-flake users don't have this problem since everything is --impure by default :)) | 14:34:16 | |
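For example, a minimal sketch (the variable name and path are just illustrations):
```nix
# example.nix — evaluate with: nix eval --impure --expr 'import ./example.nix'
{
  # Returns the value, or "" when unset (and always "" in pure evaluation mode).
  licenseKey = builtins.getEnv "ACME_LICENSE_KEY";
  # Reading an absolute path outside the repo; rejected without --impure in flakes.
  hostname = builtins.readFile /etc/hostname;
}
```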
I asked an AI whether --impure can be enabled by default and just now stumbled on a feature I hadn't heard of before! The option allow-unsafe-native-code-during-evaluation gives you a new builtin function called builtins.exec, which can be used like this:
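Something like this sketch (the endpoint is a stand-in, and /bin/sh is assumed to exist on the host):
```nix
# nix.conf: allow-unsafe-native-code-during-evaluation = true
# builtins.exec runs the given argv list and parses stdout as a Nix expression,
# so the script wraps the raw JSON in '' '' to hand Nix a plain string.
# (Breaks if the JSON itself contains '' or an interpolation marker.)
builtins.fromJSON (builtins.exec [
  "/bin/sh" "-c"
  ''printf "'''%s'''" "$(curl -s https://example.com/data.json)"''
])
```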
(It parses the output as a Nix expression, so I had to wrap the JSON in '' '' so it's treated as a string by Nix.) I'm definitely gonna dive deeper into use cases for this. Like everything in Nix it'll stall evaluation, so don't do extremely slow things :) | 14:49:37 | |
In reply to @lillecarl:matrix.org: In the pre-flake world (where most of my code lives), impure is the default. I use it to provide license keys and CPU optimization flags via env vars or configuration files. | 15:29:34 | |
24 Aug 2025 | ||
This is the way, I use flake-compat(lix) 😄 | 22:55:56 | |
25 Aug 2025 | ||
Quick reminder: we meet this Wednesday at 18:00 at the Nordic Light hotel (Vasagatan 11, Stockholm). | 19:06:57 | |
27 Aug 2025 | ||
Quick reminder: Today is this Wednesday! 😄 I'm feeling some pressure having my Kubernetes thing in a demoable state | 12:11:10 | |
Newcomer to the meetup here. Where in the hotel are we meeting? | 15:41:05 | |
Somewhere in the lobby, look for nerds 😄 | 15:41:26 | |
I will do my best! If anyone arrives early then feel free to come say hello to me. I am the guy with long hair and (not so long) beard sitting with my computer at a round table in the bar section. | 15:43:47 | |
Thanks for today, @claesatwork:matrix.org: looking forward to seeing your "simple secrets" solution! 😁 If anyone is curious how the cknix store copy works, it's here: https://github.com/Lillecarl/cknix/blob/main/createstore.py Plumbum will be replaced with native subprocess (so I can do async properly), and everything except hardlinking will be removed, since it'll use bind mounts for RO and overlayfs for RW mounts :) | 19:26:03 | |
If anyone is running a non-critical Kubernetes cluster and would like to try it out, please give me a shout to motivate me to actually get it out of WIP :p | 19:27:06 | |
(and most subprocess calls could probably be native Python instead now that we only need hardlinks, but "cp" is pretty good) | 19:28:19 | |
thanks, I will try to show you next time! | 21:09:07 | |
28 Aug 2025 | ||
In reply to @claesatwork:matrix.org: Nice! Remember https://search.nixos.org/options?channel=unstable&query=zramSwap ? :) | 11:41:18 | |
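For reference, a minimal sketch of the NixOS config (option names per the linked search; the values are just starting points):
```nix
# configuration.nix
{
  zramSwap = {
    enable = true;
    memoryPercent = 50;   # fraction of RAM exposed as a compressed swap device
    algorithm = "zstd";   # compression algorithm for the zram device
  };
}
```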
In reply to @lillecarl:matrix.org: I will definitely enable it on at least one of my NixOS hosts. Seems too good to be true :-) | 13:08:27 | |
it truly is like https://web.archive.org/web/20090826232515/http://www.downloadmoreram.com/ :) | 13:12:56 | |
In reply to @claesatwork:matrix.org: Virtual memory is dark magic only special kernel wizards with superpowers can comprehend. All I know is those wizards want you to have a good amount of swap space so they can perform their spells without constraints. (Honestly, actually, really: swap is essential even if you have boatloads of RAM, since the kernel uses LRU to keep hot shit in fast memory and yesteryear's Zara collection on the bench. Anyone who says you don't need swap hasn't spoken to the superpowered wizards.) | 13:21:36 | |
That disk swap is good is, I think, obvious: it is just an addition to RAM. That taking part of the RAM and compressing data into it helps is less obvious. But I will definitely try it | 17:29:01 | |
https://asciinema.org/a/I6RXVGmCyCcHnXInvh5DKph4C read-write nix store in kubernetes with overlayfs 😄 yayy | 17:49:38 | |
29 Aug 2025 | ||
In reply to @lillecarl:matrix.org: It would be interesting to see the data you pipe into kubectl apply in the first command; it seems to be the magic sauce :-) | 08:17:14 | |
In reply to @claesatwork:matrix.org: https://gist.github.com/Lillecarl/cbd2e037f1fba7c145a37c3f33d5f926 Essentially everything except the daemonset, csidriver and the ubuntu pod is redundant (in the current state). Currently the volume is just "ephemeral", so the expression metadata goes straight into the podspec under volumeAttributes rather than through the expression CRD. The reason for having a CRD to stick Nix expressions into is so that they can be built ahead of pod-creation time, but I've reprioritized to just make sure building on pod creation works reliably. It'll still be "kewl enough" to hopefully attract more eyes and ideas 😄 | 08:54:03 | |
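For a rough idea, here is the CSI inline-volume shape, sketched as a Nix attrset you could render with builtins.toJSON; only the csi/volumeAttributes structure is standard Kubernetes API, while the driver name and attribute key are assumptions, not cknix's actual schema:
```nix
# Hypothetical pod fragment; names are made up for illustration.
{
  spec = {
    volumes = [{
      name = "nix";
      csi = {
        driver = "csi.cknix.dev";                 # assumed driver name
        volumeAttributes = {
          # assumed key: the expression the driver builds on pod creation
          expression = "(import <nixpkgs> {}).hello";
        };
      };
    }];
    containers = [{
      name = "app";
      image = "ubuntu";
      volumeMounts = [{ name = "nix"; mountPath = "/nix"; }];
    }];
  };
}
```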
Another thing to build is a minimal activation script (sigh, we don't get away from those, hey) that copies cacerts, users, groups, nss and such from /nix/store into their "well-known paths" so most applications Just Work ™️ 😄 | 08:57:55 | |
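A sketch of what that could look like, assuming nixpkgs' dockerTools.fakeNss for passwd/group (the script name and exact file set are guesses):
```nix
{ pkgs }:
# Hypothetical minimal "activation" step: copy well-known files out of the
# store so glibc/OpenSSL-based programs find them at their usual paths.
pkgs.writeShellScript "cknix-activate" ''
  mkdir -p /etc/ssl/certs
  cp ${pkgs.cacert}/etc/ssl/certs/ca-bundle.crt /etc/ssl/certs/ca-certificates.crt
  # minimal passwd/group (root + nobody) from dockerTools
  cp ${pkgs.dockerTools.fakeNss}/etc/passwd /etc/passwd
  cp ${pkgs.dockerTools.fakeNss}/etc/group  /etc/group
  echo "hosts: files dns" > /etc/nsswitch.conf
''
```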
Claes: The entire CSI driver is in this file: https://github.com/Lillecarl/cknix/blob/main/cknix_csi/cknix.py I've cleaned it up and honestly it's dumb and simple now. The "coolest" part is probably the execline script (don't wanna ship a full shell) that pipes nix-store --dump-db to nix-store --load-db 😸 The Nix code quality is still garbage so don't judge too hard | 09:38:37 | |
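The pipe itself is tiny; a hedged sketch of such a script generated from Nix (in the real driver the two ends would point at different stores; the store-selection flags are omitted here):
```nix
{ pkgs }:
# Hypothetical: register already-copied paths in a store's database by
# piping a metadata dump into load-db. execline's `pipeline` wires the two
# processes together without shipping a full shell.
pkgs.writeTextFile {
  name = "copy-store-db";
  executable = true;
  text = ''
    #!${pkgs.execline}/bin/execlineb -P
    pipeline { ${pkgs.nix}/bin/nix-store --dump-db }
    ${pkgs.nix}/bin/nix-store --load-db
  '';
}
```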
Cool! In the example you exec into the pod to show that the nix store is present. Do you have a plan for how a pod will execute the right process at start? I assume all pods running in a cluster have the same nix store mounted, so the loose end here is distinguishing which pods run what? | 09:45:21 | |
In reply to @claesatwork:matrix.org: Nix makes the "top derivation" available under /nix/var/result/bin/binaryname in the container, which can be set as the command in the podspec to execute straight from the volume. Volumes are mounted before the container is created, so it works out. (You should use "tini" as the entrypoint, which then executes your code, to reap zombie processes; same thing with any container really.) How the stores work: there's a daemonset (run once per node) that is this CSI driver. Its initcontainer mounts HOST/var/lib/cknix/nix to /nix2 and copies /nix to /nix2. Then the actual CSI container runs and mounts HOST/var/lib/cknix/nix to /nix. This store is the "source store" for all pods on that node. But to create an isolated "store view" per pod, so pods can't see each other's storepaths, we create "fake stores" using hardlinks, containing only the paths from nix path-info. The "fake store" is then either bind mounted into the pod (shares page cache and is amazing, but read-only) or overlayfs mounted (meaning you can run nix commands within the workload pod and read-write the entire Nix store :). RW mounting should only be used in development, but it's cool because it allows you to edit any storepath. (Since the workload pods aren't privileged, we can't set up a nix-daemon and do the RO bind mount thing on /nix/store, so the entire store is read-write and you can do shenanigans you've never before imagined without strangling your dog.) | 09:54:24 | |
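The command wiring from the message above, again as a hedged attrset fragment; this assumes the built expression is an env that puts both tini and your app under result/bin:
```nix
# Hypothetical container fragment: exec straight from the mounted volume,
# with tini as PID 1 to reap zombies.
{
  containers = [{
    name = "app";
    image = "ubuntu";
    command = [ "/nix/var/result/bin/tini" "--" "/nix/var/result/bin/myapp" ];
    volumeMounts = [{ name = "nix"; mountPath = "/nix"; }];
  }];
}
```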
The part where we hardlink from the source store to a new "fake store" is what makes it unique. Just sharing a global /nix with all pods on the host would work, but definitely not for "production", since any application in the cluster could read any other application's code by traversing the store. | 09:55:42 | |