| 13 Jan 2023 |
hexa | last time we wanted an offline solution for the expiry check, the upstream wasn't very forthcoming | 20:10:29 |
m1cr0man | well, we had a bit of a falling out XD I think it would require the work to be done by us. We must be one of lego's largest users though | 21:45:40 |
| 14 Jan 2023 |
Andreas Schrägle | Why did we decide on lego btw, instead of any of the other clients?
I know we used to use a different one, but I've never really looked into acme clients much. | 14:28:08 |
hexa | we used simp_le before | 15:31:18 |
hexa | I think it couldn't do DNS01 | 15:31:23 |
hexa | https://web.archive.org/web/20180603040716/https://github.com/NixOS/nixpkgs/issues/34941 | 15:37:25 |
hexa | this is the original discussion, started by volth and since deleted … thanks github | 15:37:36 |
hexa | https://github.com/NixOS/nixpkgs/pull/77578 | 15:39:52 |
hexa | and the migration PR | 15:39:54 |
m1cr0man | I inherited the work on DNS-01 and assumed that some decision had been made to use lego, and didn't attempt to change it | 17:02:29 |
| 21 Jan 2023 |
K900 | The tests are failing again :( https://hydra.nixos.org/build/206158453/nixlog/98 | 15:03:29 |
hexa | dumped https://gist.github.com/mweinelt/cb4460149479878316b46c116518c88f | 21:30:39 |
hexa | so I can restart | 21:30:45 |
hexa | ah, it already was | 21:31:06 |
hexa | K900: did you see the error? | 21:33:36 |
hexa |
(finished: must succeed: curl --data '{"host": "acme.test", "addresses": ["192.168.1.1"]}' http://192.168.1.3:8055/add-a, in 0.24 seconds)
client # curl: (7) Failed to connect to acme.test port 15000 after 88 ms: Couldn't connect to server
client # curl: (7) Failed to connect to acme.test port 15000 after 88 ms: Couldn't connect to server
| 21:42:52 |
hexa | nah, looks like that completed | 21:47:17 |
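For context: 8055 is pebble-challtestsrv's default management port and 15000 is Pebble's default management API port, so the quoted output most likely shows the test driver polling Pebble while it is still starting up. The following is a rough sketch of the kind of testScript steps that produce lines like that; it is not the literal nixos/tests/acme.nix code, and the node name "client" and the exact commands are assumptions.

```nix
# Sketch only, not the literal nixos/tests/acme.nix. Assumes a node named
# "client", pebble-challtestsrv's management API on 192.168.1.3:8055 and
# Pebble's management API on acme.test:15000 (its default management port).
{
  name = "acme-log-sketch";
  testScript = ''
    # Point acme.test at the webserver via challtestsrv's mock DNS.
    client.succeed(
        """curl --data '{"host": "acme.test", "addresses": ["192.168.1.1"]}' http://192.168.1.3:8055/add-a"""
    )
    # Poll Pebble's management API until it is up. While Pebble is still
    # starting, the driver logs transient "Failed to connect ... port 15000"
    # curl errors like the ones quoted above; they are retried, not fatal.
    client.wait_until_succeeds("curl -k https://acme.test:15000/roots/0")
  '';
}
```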
| 22 Jan 2023 |
K900 | It got oomkilled I think | 08:00:24 |
K900 | So I just restarted it | 08:00:29 |
m1cr0man | In reply to @hexa:lossy.network
(finished: must succeed: curl --data '{"host": "acme.test", "addresses": ["192.168.1.1"]}' http://192.168.1.3:8055/add-a, in 0.24 seconds)
client # curl: (7) Failed to connect to acme.test port 15000 after 88 ms: Couldn't connect to server
client # curl: (7) Failed to connect to acme.test port 15000 after 88 ms: Couldn't connect to server
Yeah that looks fine | 18:53:19 |
m1cr0man | I suppose OOMkill could be the culprit actually.. this test starts like 4 vms iirc. Client/dnsserver/webserver/acme server. I don't imagine many other tests have as many VMs | 18:54:50 |
hexa | maybe allocate more memory for the test then | 19:34:36 |
m1cr0man | is that possible? | 22:06:26 |
| 31 Jan 2023 |
Winter (she/her) | In reply to @m1cr0man:m1cr0man.com
is that possible?
virtualisation.memorySize, in MiB. (default is 1024.) | 00:53:58 |
m1cr0man | Does that increase the ram for each node or for the encapsulating VM running the suite? | 01:03:30 |
Winter (she/her) | In reply to @m1cr0man:m1cr0man.com
Does that increase the ram for each node or for the encapsulating VM running the suite?
There's no encapsulating VM. Each node is run as its own VM. | 01:07:43 |
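For reference, virtualisation.memorySize is a per-node setting measured in MiB. Raising it inside a NixOS VM test looks roughly like the sketch below; the node names and the 2048 figure are illustrative, not the actual acme test values.

```nix
# Sketch only: giving individual test nodes more RAM. Node names and the
# 2048 MiB value are illustrative, not taken from nixos/tests/acme.nix.
{
  name = "acme-memory-sketch";
  nodes = {
    acme = { ... }: {
      # virtualisation.memorySize is in MiB; the default is 1024.
      virtualisation.memorySize = 2048;
    };
    client = { ... }: {
      virtualisation.memorySize = 2048;
    };
  };
  testScript = "start_all()";
}
```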
m1cr0man | Right, I see. See, I think the issue is that whatever the test suite is running on is running out of RAM. | 01:08:21 |
Winter (she/her) | let me poke the operator of that specific machine | 01:09:01 |
m1cr0man | I already did that 103-run test a while ago and it was grand so I don't think the nodes are running out | 01:09:12 |