!MthpOIxqJhTgrMNxDS:nixos.org

NixOS ACME / LetsEncrypt

93 Members · 43 Servers
Another day, another cert renewal

4 Sep 2023
@raitobezarius:matrix.orgraitobezariusLarge features like this are often blocked because everyone is paralyzed by it not being "finalized"12:59:20
@raitobezarius:matrix.orgraitobezarius
In reply to @os:matrix.flyingcircus.io
I'm supportive of that. But as I said, I won't be the one writing that C code, but could be the one solving this as I had done in the PR with the lowest footprint I could do.
Understandable
12:59:33
@raitobezarius:matrix.orgraitobezariusEither way, I just wanted to put both (valid, IMHO) approaches on the scale13:00:25
@os:matrix.flyingcircus.ioosnyx (he/him)
In reply to @raitobezarius:matrix.org
Giving a pass to ACME is probably fine because of the importance
One thing I could have proposed as a compromise would've been adding some custom hooks into the service logic, which we could fill with locking logic downstream. But maybe we can get a proper solution in, in time.
13:01:26
@os:matrix.flyingcircus.ioosnyx (he/him)
In reply to @raitobezarius:matrix.org
Therefore I don't think there's an emergency beyond ACME large users (you and some folks, including me)
If it was an emergency, I wouldn't be targeting the next stable release ;)
13:01:55
@m1cr0man:m1cr0man.comm1cr0manThe closest we get to a systemd based solution is my PR. My real question here, Oliver, is: is there something in your PR that mine does not provide at a functional level? Personally, adding complexity to the renew script itself is something I actively try to avoid. I also add tests for any new features to avoid future regressions if someone attempts to optimise the module. As for a custom hook - if that's acceptable for your case you actually can do that already 😁 just create a service which is requiredBy + before the renew service to handle the lock13:02:51
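[Editor's note: a minimal sketch, not from the thread and not the nixpkgs module itself, of the hook shape m1cr0man describes: a oneshot unit wired in via requiredBy + before so that the renewal service for one certificate only starts after the hook has exited successfully. The certificate name example.org, the unit name and the gate logic are assumptions for illustration; the actual locking logic would be filled in downstream.]

{ pkgs, ... }:
{
  systemd.services."acme-example.org-gate" = {
    description = "Downstream locking hook, runs before acme-example.org renewal";
    # requiredBy: starting acme-example.org.service pulls this unit in, and a
    # failure here also fails the renewal.
    # before: the renewal waits until this oneshot has finished.
    requiredBy = [ "acme-example.org.service" ];
    before = [ "acme-example.org.service" ];
    serviceConfig.Type = "oneshot";
    path = [ pkgs.coreutils ];
    # Placeholder for site-specific locking logic: here we only wait (up to
    # 30 minutes) while an externally managed marker file signals that some
    # other renewal is in progress, then give up.
    script = ''
      for _ in $(seq 360); do
        [ -e /var/lib/acme-renewal-in-progress ] || exit 0
        sleep 5
      done
      echo "gave up waiting for other renewals to finish" >&2
      exit 1
    '';
  };
}

[A pre-hook like this can only delay when a renewal starts; it cannot by itself hold a lock for the whole duration of the renewal, which is part of what the flock discussion further down is about.]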
@raitobezarius:matrix.orgraitobezarius
In reply to @os:matrix.flyingcircus.io
If it was an emergency, I wouldn't be targeting the next stable release ;)
As a release manager, 24.11 is very soon in my brain :p
13:03:49
@raitobezarius:matrix.orgraitobezarius23.11 is basically done13:04:02
@raitobezarius:matrix.orgraitobezarius24.05 will start soon (tm)13:04:10
@os:matrix.flyingcircus.ioosnyx (he/him)
In reply to @raitobezarius:matrix.org
I do keep 20ish patches for my own infra for a large infra, I am not sure if you are targeting stable or unstable
We're running stable releases. I am not sure whether we'd want all of our machines to run with a canary-systemd (🥶) but we might be able to do this for one of the few especially domain-rich machines – if there is a prospect of this helping things go upstream soonish.
13:04:33
@os:matrix.flyingcircus.ioosnyx (he/him)
In reply to @raitobezarius:matrix.org
23.11 is basically done
Is there anything I could've done better for upstreaming? On the one hand I was eager to implement something already in May; on the other hand I went the extra mile to go through your alternative proposals first.
13:06:12
@raitobezarius:matrix.orgraitobezariusYeah I think the issue is that systemd development is faster with someone who is close to systemd; I had the same issues and decided to bite the bullet to avoid things lingering for too long13:06:50
@raitobezarius:matrix.orgraitobezariusIt's hard to do better than what you have done and I am happy you went through all the thousand cuts13:07:09
@raitobezarius:matrix.orgraitobezariusI'd be happy taking your concurrency subject to Poettering and co to get it merged13:07:39
@os:matrix.flyingcircus.ioosnyx (he/him)
In reply to @raitobezarius:matrix.org
It's hard to do better than what you have done and I am happy you went through all the thousand cuts
Good to hear, because I know a person who wasn't that intrigued that I eagerly jumped through all those hoops ^^
13:08:06
@raitobezarius:matrix.orgraitobezariusThe problem is having someone responsive for the C bits but I can try to pick up the pieces and see what is left to do13:08:07
@raitobezarius:matrix.orgraitobezarius
In reply to @os:matrix.flyingcircus.io
Good to hear, because I know a person who wasn't that intrigued that I eagerly jumped through all those hoops ^^
It's usually very hard to please folks in open source development
13:08:31
@raitobezarius:matrix.orgraitobezarius
In reply to @os:matrix.flyingcircus.io
We're running stable releases. I am not sure whether we'd want all of our machines to run with a canary-systemd (🥶) but we might be able to do this for one of the few especially domain-rich machines – if there is a prospect of this helping things go upstream soonish.
If I have something on this end, I will be sure to ping you as a guinea pig :p
13:09:00
@os:matrix.flyingcircus.ioosnyx (he/him)
In reply to @raitobezarius:matrix.org
The problem is having someone responsive for the C bits but I can try to pick up the pieces and see what is left to do
Depending on whether the PR will progress again, you could take https://github.com/systemd/systemd/pull/27985#issuecomment-1621702189 as a starting point
13:09:51
@os:matrix.flyingcircus.ioosnyx (he/him)
In reply to @m1cr0man:m1cr0man.com
The closest we get to a systemd based solution is my PR. My real question here, Oliver, is: is there something in your PR that mine does not provide at a functional level? Personally, adding complexity to the renew script itself is something I actively try to avoid. I also add tests for any new features to avoid future regressions if someone attempts to optimise the module.

As for a custom hook - if that's acceptable for your case you actually can do that already 😁 just create a service which is requiredBy + before the renew service to handle the lock
As I wrote in my comparison, the flock approach provides the concurrency guarantees in a broader range of scenarios.
Since most of you are worried about added complexity, in the end it's just a decision on which approach you as the maintainers (not just m1cr0man, because of course you as the implementor understand what's happening there ;)) feel more comfortable with and which you understand better.
13:21:37
@os:matrix.flyingcircus.ioosnyx (he/him)When it comes to (not) modifying the service script, let me argue that my change barely counts as such a modification at the semantic level. It's just an optional wrapper around the otherwise unmodified service script.13:23:55
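[Editor's note: to make the wrapper idea concrete, here is a rough, self-contained sketch – an illustration, not the actual nixpkgs PR. The renewal logic is represented by a stand-in script, and the only change is that it is executed under an exclusive flock(1). The unit name, lock path and stand-in script are assumptions.]

{ pkgs, ... }:
let
  # Stand-in for the existing, otherwise unmodified renewal script.
  renewScript = pkgs.writeShellScript "renew-example-org" ''
    echo "pretend the real ACME renewal for example.org happens here"
  '';
  lockFile = "/run/acme-renewal.lock";   # assumed host-wide lock path
in
{
  systemd.services."acme-example-org-demo" = {
    description = "Demo: renewal script wrapped in an exclusive flock";
    serviceConfig = {
      Type = "oneshot";
      # flock blocks until it holds an exclusive lock on lockFile, runs the
      # wrapped script, and drops the lock when the script exits, no matter
      # how it exits. Concurrent renewals on the same host thus serialize,
      # while the script body itself stays untouched.
      ExecStart = "${pkgs.util-linux}/bin/flock --exclusive ${lockFile} ${renewScript}";
    };
  };
}

[Because the lock lives for exactly the lifetime of the wrapped process, it covers crashes, timeouts and manual invocations alike without new unit relationships – which is, as I read it, the "broader range of scenarios" point above.]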
@os:matrix.flyingcircus.ioosnyx (he/him) My take on the "let's solve it with systemd unit options alone" approach is just that we must be careful not to fall into the trap of: when all you have is a systemd-253 hammer, everything looks like a unit-option nail.
It might be a hammer you know, but that hammer might also just be adding things to the evergrowing list of interwoven systemd unit relationships…
13:28:14
@os:matrix.flyingcircus.ioosnyx (he/him)But that's just one perspective on it; I'm not interested in NIH, just in explaining – and, if they turn out to be solid, possibly defending – my implementation decisions (=13:29:43
@os:matrix.flyingcircus.ioosnyx (he/him)
In reply to @raitobezarius:matrix.org
I'd be happy taking your concurrency subject to Poettering and co to get it merged
So can I read this as a clear "we'll postpone the fix until systemd is ready" from your side, and that you won't even merge the m1cr0man PR? Then I'd need to prepare for maintaining a downstream acme fork.
If yes, can I rely on you to push this forward?
13:33:52
