| 28 Jul 2025 |
Zhaofeng Li | (launch k3s from k8s) | 16:13:19 |
magic_rb | In reply to @zhaofeng:zhaofeng.li interesting, what's your setup like? I might do something similar, but for the wrong reasons :p uh, I use systemd-nspawn with some convincing, and I wrote a simple k3s module for NixNG | 16:17:21 |
magic_rb | https://git.sr.ht/~magic_rb/uk3s.nix/tree/master/item/nixos/modules these modules | 16:17:52 |
magic_rb | https://git.sr.ht/~magic_rb/uk3s.nix/tree/master/item/nixos/modules/uk3s.nix#L341 this specifically is what you need to run k3s in an nspawn container | 16:18:40 |
magic_rb | or it'll complain | 16:18:44 |
magic_rb | the two env vars are reverse-engineered from the systemd source code and a lot of trial and error. I'm still using this setup but I'm hoping to migrate away, not from NixNG + ucontainer, but to throw out the k3s | 16:19:26 |
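For context, the general shape of such a setup on NixOS might look like the sketch below. This is not magic_rb's actual configuration: the two environment variable overrides they mention are deliberately not named here (they live in the linked uk3s.nix module), and the container name is illustrative.

```nix
# Hedged sketch: hosting k3s inside a declarative systemd-nspawn
# container on NixOS. The env-var overrides magic_rb reverse-engineered
# are NOT reproduced here; see the linked uk3s.nix module for those.
{
  containers.k3s = {
    autoStart = true;
    privateNetwork = true;   # k3s/flannel run inside their own netns
    enableTun = true;        # flannel needs /dev/net/tun in the container
    config = { pkgs, ... }: {
      services.k3s.enable = true;
      # cgroup/kernel plumbing and the env-var overrides would go here
    };
  };
}
```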
Zhaofeng Li | ok, so k3s and flannel basically just work inside a network namespace, good to know | 16:23:35 |
Zhaofeng Li | going to attempt kind of the same thing but with cilium (probably not soon tbh) | 16:25:47 |
magic_rb | In reply to @zhaofeng:zhaofeng.li ok, so k3s and flannel basically just work inside a network namespace, good to know I'm using istio; the problem with cilium is that their own test suite was broken, and had been for months, when I tried it, so I couldn't tell if it was my problem or their problem when I was debugging | 16:27:42 |
magic_rb | So I gave up, went to istio | 16:27:48 |
magic_rb | But istio is insanely slow, envoy has huge overheads | 16:29:01 |
magic_rb | I can see envoy burning CPU time when I'm copying from my nix cache, so I'm throwing the whole thing out | 16:29:55 |
Zhaofeng Li | hmm, that doesn't sound too good... basically https://spot.rackspace.com provides cheap compute but their control plane is garbage, so I want to just shove a daemonset up there and run my k3s 🙃 | 16:31:24 |
magic_rb | Yeah, I wouldn't, pain | 16:33:01 |
magic_rb | What you save on hardware cost you'll spend double on your sanity, because Kubernetes and istio/cilium | 16:33:20 |
Zhaofeng Li | (with a bit of wireguard and bird magic maybe I can make them join my existing cluster, but yeah, extreme cursedness) | 16:33:29 |
magic_rb | I went into it enthusiastically trying to make it work, but the whole thing is rotten from the core. | 16:33:42 |
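The "cursed" idea could be sketched roughly as below: a WireGuard tunnel to the existing cluster plus BGP via bird2 to exchange pod routes. All addresses, keys, hostnames, and CIDRs are placeholders, and the bird2 session itself is left as a comment.

```nix
# Hedged sketch: joining remote nodes to an existing cluster over
# WireGuard, with bird2 for route exchange. Everything concrete here
# (keys, IPs, endpoint, pod CIDR) is a placeholder.
{
  networking.wireguard.interfaces.wg-cluster = {
    ips = [ "10.100.0.2/24" ];
    privateKeyFile = "/run/keys/wg-cluster";
    peers = [{
      publicKey = "<home-cluster-pubkey>";
      allowedIPs = [ "10.100.0.0/24" "10.42.0.0/16" ];  # incl. pod CIDR
      endpoint = "cluster.example.org:51820";
      persistentKeepalive = 25;   # keep NAT mappings alive
    }];
  };
  services.bird2 = {
    enable = true;
    config = ''
      # BGP session over wg-cluster advertising pod CIDRs would go here
    '';
  };
}
```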
magic_rb | etcd fucking sucks, it's slow as hell. The manifests get huge very quickly, I can never remember all of the obscure options, networking is a mess, and if you want to use the new Gateway API you'll end up reading issues trying to figure out what is or isn't supported, eventually reading tests and source code like me. | 16:34:44 |
Zhaofeng Li | can't say much about the Gateway API, but etcd basically just works for me, though my setup is far from complex | 16:37:22 |
emily | it should almost certainly be a systemd daemon, I think | 17:31:58 |
emily | (it can be a oneshot or similar though) | 17:32:10 |
emily | we don't want to add new activation scripts unless it's completely unavoidable | 17:32:13 |
emily | (and moving stuff out of activation scripts is being worked on) | 17:32:36 |
emily | (this doesn't necessarily determine where in the options hierarchy to put it though) | 17:34:06 |
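The oneshot approach emily describes might look like the sketch below: one-time setup work expressed as a systemd unit instead of an activation script. The service name and ordering are illustrative, not from any real module.

```nix
# Hedged sketch: a systemd oneshot doing work that would otherwise
# live in an activation script. Unit name and ordering are made up.
{ pkgs, ... }: {
  systemd.services.my-setup = {
    description = "One-time setup, instead of an activation script";
    wantedBy = [ "multi-user.target" ];
    before = [ "network.target" ];   # order it wherever the result is needed
    serviceConfig = {
      Type = "oneshot";
      RemainAfterExit = true;        # unit stays "active" after the script exits
    };
    script = ''
      # setup commands go here
    '';
  };
}
```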
ElvishJerricco | It'll have to support systemd initrd if it's going to be upstreamed. The plan is to phase out scripted initrd over the next year or so, which means scripted-initrd-only things are not acceptable, though systemd-initrd-only things are acceptable. | 17:41:18 |
magic_rb | Should be fine for ifstate, I don't run it in the initrd, but I am on systemd initrd | 16:44:03 |
emily | ah I see it's already systemd.services.ifstate | 17:44:10 |
emily | so never mind me about activation scripts | 17:44:16 |
emily | I would personally probably go for services.ifstate.* IMO, it's comparable to services.network-manager.* in that you have a systemd service managing the config, but I'm ambivalent | 17:44:54 |
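The option layout emily suggests could be sketched as a `services.ifstate.*` namespace wrapping the existing `systemd.services.ifstate` unit. The option names below are illustrative, not from any merged module.

```nix
# Hedged sketch of a services.ifstate.* option namespace; the actual
# module under review may differ. Option names here are made up.
{ lib, ... }: {
  options.services.ifstate = {
    enable = lib.mkEnableOption "ifstate, declarative interface management";
    settings = lib.mkOption {
      type = lib.types.attrs;   # would be rendered to ifstate's config file
      default = { };
      description = "ifstate configuration.";
    };
  };
  # config = ... would define systemd.services.ifstate from these options
}
```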