Message | Time |
---|---|
6 May 2025 | ||
Sorry - only seeing your messages now. I believe a fix for this does exist in the wild, I vaguely remember running into it a few years ago. Let me do some digging | 20:36:18 | |
In the meantime netpleb - can you provide the following info from within the container:
| 20:40:38 | |
Ah, I see you already found the relevant ticket on GitHub. Did you try this fix? | 20:42:27 | |
Thanks for your reply and for helping to figure this out. I did try the fix you mentioned, as well as this one, but neither has done the trick. I will get the output of those commands for you now. | 21:03:08 | |
here's the redacted output (first time using a local instance of ollama to do the redacting!):
| 21:09:35 | |
Interesting. FWIW, I personally used to use Bind + RFC2136 for renewals. It was not in a container though. The service ordering looks correct, with bind listed as a dependency of acme-example.com.service. | 21:27:43 | |
What error is lego itself throwing during renewal? | 21:28:22 | |
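For context, a minimal sketch of what a Bind + RFC2136 renewal setup like the one described above might look like in NixOS; the domain, email, and secrets path are placeholders, and the environment variable names are the ones lego's rfc2136 provider documents:

```nix
{
  security.acme = {
    acceptTerms = true;
    defaults.email = "admin@example.com";  # placeholder contact address
    certs."example.com" = {
      dnsProvider = "rfc2136";
      # EnvironmentFile consumed by the generated acme-example.com.service;
      # lego's rfc2136 provider reads RFC2136_NAMESERVER, RFC2136_TSIG_KEY,
      # RFC2136_TSIG_SECRET and RFC2136_TSIG_ALGORITHM from it.
      credentialsFile = "/var/lib/secrets/rfc2136.env";
    };
  };
}
```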
Thanks. Yes, I think it probably works fine when not in a container, but alas my use case is within a container :-/. I will get the exact error for you in a moment, but in essence it is something like this: Could not create client: get directory at 'https://acme-v02.api.letsencrypt.org/directory': Get "https://acme-v02.api.letsencrypt.org/directory": dial tcp: lookup acme-v02.api.letsencrypt.org: Temporary failure in name resolution. It tries that 6 times, I think, before timing out. Interestingly, during this process I cannot ping anything (much less look up hostnames). | 21:34:17 | |
But this is why it seems to be something weird about how the host deals with the container. I think what happens is that when the acme stuff is present in the container, the container's boot process is drawn out way longer than it should be (hence this discussion), and because the boot is drawn out, the container never reaches whatever stage it needs to for the host to install the routes. | 21:36:05 | |
I'll see if I can put together a test suite for this when I next get a moment to investigate it. Not sure what the problem is right now, sorry I can't be more help | 22:08:20 | |
Thanks for looking into it. It was driving me mad, so I stopped yesterday after settling on the non-clean workaround of a 20s start timeout on the acme services. | 22:11:19 | |
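The log doesn't show the exact workaround; one plausible reading, sketched here with an assumed cert name, is a pre-start delay on the generated acme unit so the container's network has time to come up before lego tries to resolve the ACME directory:

```nix
{
  # Hypothetical: acme-<cert>.service units are generated per certificate.
  # A crude 20-second delay before lego runs, standing in for the
  # "20s start timeout" workaround mentioned above.
  systemd.services."acme-example.com".preStart = ''
    sleep 20
  '';
}
```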
It might be worth poking around with resolvectl/systemd-resolved and seeing if something fishy is happening. The nspawn containers do funky things with the hosts file and nameserver setup, which could be conflicting with bind | 22:14:41 | |
thanks, I've been poking at that a bit. Will let you know if anything comes of it. | 22:36:06 | |
9 May 2025 | ||
I have good news!! The issue is finally resolved. It turned out to be a much different problem than originally expected: IPv6 link-local addressing was the culprit. Even though I had networking.enableIPv6 = false on both the host and the container, systemd-networkd-wait-online was not reaching its target because systemd-networkd was trying to assign link-local IPv6 addresses. Setting systemd.network.networks."eth0".networkConfig.LinkLocalAddressing = "no"; in my container config seemed to do the trick. | 21:47:12 | |
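Put together, a minimal sketch of that fix in a declarative NixOS container; the container name and addresses are assumed, and the LinkLocalAddressing line is the part taken from the message above:

```nix
{
  containers.acme-test = {           # hypothetical container name
    autoStart = true;
    privateNetwork = true;
    hostAddress = "192.168.100.1";   # assumed addresses for illustration
    localAddress = "192.168.100.2";
    config = { ... }: {
      networking.useNetworkd = true;
      networking.enableIPv6 = false;
      systemd.network.networks."eth0" = {
        matchConfig.Name = "eth0";
        # Without this, networkd kept trying to assign an IPv6 link-local
        # address, so systemd-networkd-wait-online never reached its target.
        networkConfig.LinkLocalAddressing = "no";
      };
    };
  };
}
```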
10 May 2025 | ||
you can also configure systemd-networkd-wait-online to wait for either ipv4 or ipv6 | 07:19:36 | |
wait why did it fail to assign a link local address | 07:20:02 | |
that is the weird part here :P | 07:20:08 | |
link local addressing should be… instant | 07:20:17 | |
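For the wait-online suggestion above, a sketch of two ways to stop waiting on IPv6, assuming an eth0 interface: the first passes a flag to systemd-networkd-wait-online, the second marks the link online once IPv4 is up via RequiredFamilyForOnline:

```nix
{
  # Option 1: only wait for an IPv4 address on all managed interfaces.
  systemd.network.wait-online.extraArgs = [ "--ipv4" ];

  # Option 2: per interface, count the link as online once IPv4 is configured.
  systemd.network.networks."eth0" = {
    matchConfig.Name = "eth0";
    linkConfig.RequiredFamilyForOnline = "ipv4";
  };
}
```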
In reply to @netpleb:matrix.org: Glad you figured it out :D What a weird one, I wouldn't have thought of ipv6 link local being the issue. | 12:15:20 | |
In reply to @arianvp:matrix.org: It might not necessarily be an assignment issue, but rather a routing issue. With my time on RFC 108 I've observed some strange stuff with nspawn networking | 12:15:55 | |
11 May 2025 | ||
I am not sure what the root cause is (I am not an expert in this stuff and had to learn a bunch about systemd-networkd to even get this far), but all I know is that once I finally whittled it down to the smallest possible config that still worked correctly and then removed the… Who knows though. I am just happy it finally works! Now the container typically boots in 11 seconds (including checking certs and such) instead of the multiple minutes it was taking before. | 02:47:22 | |
regardless, thank you all here for your help! | 02:47:58 | |
15 May 2025 | ||
Any chance of seeing this one merged soonish? https://github.com/NixOS/nixpkgs/pull/376334 | 20:30:23 | |
16 May 2025 | ||
m1cr0man: in principle yes, but shouldn't the assert look at more options to check domain && keyType || csr? | 09:16:10 | |
because right now they're silently unused when a csr gets configured | 09:17:04 | |
hm, domain is the key in the attrset, so maybe not | 09:25:17 |
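A hypothetical sketch (not the PR's actual code) of the kind of module assertion being discussed: flag certs where a CSR is configured together with key-generation options that lego would then silently ignore. Since keyType has a default, this only catches explicit overrides:

```nix
{ lib, config, ... }: {
  assertions = lib.mapAttrsToList (name: cert: {
    # keyType defaults to "ec256"; anything else was set explicitly and
    # would be silently unused once a CSR is supplied.
    assertion = cert.csr == null || cert.keyType == "ec256";
    message = "security.acme.certs.${name}: keyType is silently unused when csr is set";
  }) config.security.acme.certs;
}
```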