| 15 Jul 2021 |
Mic92 | In reply to @citadelcore:nixos.dev I'd like to replace the GRE/WireGuard tunnels with something like Tinc in the future, since WG appears to be causing an obscure kernel bug with Bird that's very annoying I already have built all my VPN stuff based on tinc. It has nice semantics; unfortunately its performance is quite bad. | 15:39:34
Alex Zero | Ah, that kinda sucks :/ | 15:39:59 |
Mic92 | There were plans in the tinc community to use wireguard as the lower layer... never happened though | 15:40:46
Mic92 | you maybe want to have a look at https://github.com/slackhq/nebula | 15:41:05 |
Mic92 | I never checked its performance though. | 15:41:15
Mic92 | Then there is tailscale https://tailscale.com/ | 15:41:31
Leon | In reply to @citadelcore:nixos.dev I'd like to replace the GRE/WireGuard tunnels with something like Tinc in the future, since WG appears to be causing an obscure kernel bug with Bird that's very annoying What are symptoms of these bugs? Works fine for me so far… | 15:42:31 |
Alex Zero | The kernel essentially fails to report that routes exist in the FIB, so BIRD ends up inserting duplicates | 15:44:06 |
Alex Zero | Drives the CPU usage to 100% and eventually crashes the process | 15:44:22 |
Alex Zero | I've submitted a kernel bug, but nobody ever replied to it | 15:44:35 |
Alex Zero | https://lkml.org/lkml/2020/6/11/720 | 15:45:25 |
Leon | Hm, very interesting. | 15:45:31 |
Alex Zero | Only thing that works is downgrading the kernel to 5.2, which is not ideal at all | 15:45:54
Alex Zero | I've had to force it on all the router VMs | 15:46:03 |
Alex Zero | For lack of a better solution | 15:46:09 |
Mic92 | Alex Zero: Did you cc the original authors? | 15:56:53
Mic92 | also cc the merger and post it on netdev | 15:57:26
Mic92 | Also include tcpdump captures of the netlink traffic | 15:58:06
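One way to capture netlink traffic with tcpdump, as Mic92 suggests, is the kernel's `nlmon` monitoring interface. A rough sketch, assuming root and a kernel with the `nlmon` module (mainline since 3.11); the interface name `nlmon0` is arbitrary:

```shell
# Load the netlink monitor module and create a capture interface
modprobe nlmon
ip link add nlmon0 type nlmon
ip link set nlmon0 up

# Record all netlink messages (including BIRD's route inserts) to a pcap
# to attach to the bug report; stop with Ctrl-C
tcpdump -i nlmon0 -w netlink.pcap

# Clean up afterwards
ip link set nlmon0 down
ip link del nlmon0
```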
Alex Zero | I'll do that, thanks 👍 | 16:12:22 |
Amanda (she/her) | So, somehow my co-admin is able to assign IPs to his proxmox VMs using his router's networking stack. He said something about bridging the interface or similar -- is this something I could set up myself to do with lxd/nixos containers? I'm not very versed in networking stuff, so any clarification is appreciated | 17:02:58 |
Amanda (she/her) | ( He doesn't seem to understand how it works either, I tried asking ) | 17:03:26 |
Linux Hackerman | Amanda (she/her): yep, if you create a bridge and put both your physical interface and one side of a veth pair on that bridge, the containers will be on the network as if they were additional physical machines attached via a switch. | 17:08:43 |
Amanda (she/her) | Huh, it's just putting it on the bridge? For some reason I thought bridges were what was used for the host-local stuff | 17:09:37
Linux Hackerman | Not sure how to do that with lxd, but you can use systemd-nspawn's `--network-bridge` to put an nspawn container on it | 17:10:02 |
Linux Hackerman | The trick is to bridge the physical interface as well. | 17:10:13 |
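A sketch of the setup Linux Hackerman describes, using plain iproute2. Interface names (`eth0`, `br0`, the veth pair) are assumptions, and on NixOS the declarative config generates the equivalent for you; note that enslaving the physical NIC over SSH will drop the connection until the bridge carries the host's address:

```shell
# Create a bridge ("software switch") and attach the physical NIC to it
ip link add br0 type bridge
ip link set eth0 master br0
ip link set br0 up

# Create a veth pair: one end stays on the bridge on the host side,
# the other end would be moved into the container's network namespace
ip link add veth-host type veth peer name veth-ct
ip link set veth-host master br0
ip link set veth-host up

# systemd-nspawn does the veth plumbing itself when given the bridge:
#   systemd-nspawn --network-bridge=br0 -D /var/lib/machines/mycontainer
```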
Amanda (she/her) | So I assume I'd want something like networking.bridges.<bridge>.interfaces = ["eth0"] | 17:10:58 |
Linux Hackerman | Bridge = switch, pretty much. You can have a bridge where only the containers are — that way the containers can only talk to each other and the host, and the host needs to do forwarding for them to reach any further | 17:11:19 |
Linux Hackerman | By adding the physical interface, their traffic can go straight to your router | 17:11:37 |
Linux Hackerman | In reply to @amanda:camnet.site So I assume I'd want something like networking.bridges.<bridge>.interfaces = ["eth0"] Yep | 17:11:39 |
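Putting the thread's pieces together, a minimal NixOS sketch of a bridged declarative container. The names `br0`, `eth0`, and `mycontainer`, and DHCP addressing, are assumptions for illustration:

```nix
{
  # Bridge the physical NIC so containers appear directly on the LAN
  networking.bridges.br0.interfaces = [ "eth0" ];
  # The host's own address now lives on the bridge, not on eth0
  networking.interfaces.br0.useDHCP = true;

  # A declarative container attached to the bridge via a veth pair
  containers.mycontainer = {
    autoStart = true;
    privateNetwork = true;
    hostBridge = "br0";
    config = { ... }: {
      networking.useDHCP = true;
    };
  };
}
```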