Message | Time | Sender |
---|---|---|
7 Jun 2025 | ||
exactly | 22:43:55 | |
* the control plane is not equipped to handle a huge number of packets, so rate limiting kicks in | 22:44:37 | |
luckily you cannot turn it off for IPv6 completely without breaking some things | 22:44:59 | |
you absolutely can drop icmpv6 echo requests | 22:45:21 | |
but blocking neighbor discovery and path MTU discovery is where things break | 22:46:00 | |
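The distinction above (dropping echo requests while leaving neighbor discovery and path MTU discovery intact) can be sketched as an nftables fragment. This is hypothetical, not the device's actual config; control-plane policing on routers is usually vendor-specific, but the ICMPv6 types that must stay open are the same:

```
# Sketch of a host firewall fragment, not the actual infra config
table inet filter {
  chain input {
    type filter hook input priority 0; policy drop;

    # Essential ICMPv6: neighbor discovery and path MTU discovery must pass
    icmpv6 type { nd-router-solicit, nd-router-advert,
                  nd-neighbor-solicit, nd-neighbor-advert,
                  packet-too-big, parameter-problem, time-exceeded } accept

    # Echo requests can be safely dropped (or rate-limited instead)
    icmpv6 type echo-request drop
  }
}
```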
10 Jun 2025 | ||
I’m trying to understand our caching setup a bit better with Fastly<->S3, given that bandwidth between S3 and Fastly is our largest cost. Do I understand correctly that https://github.com/NixOS/infra/blob/88f1c42e90ab88673ddde3bf973330fb2fcf23be/terraform/cache.tf#L138C17-L138C22 is the only thing configuring how long we hold things in cache? (Seems to be 24 hours.) Given we also cache 404s on narinfos, I guess that makes sense, as we want them to be fast (and in case the narinfo gets uploaded later, it invalidates it). But can’t we cache NARs way more aggressively than 24 hours? That would perhaps reduce the bandwidth on S3. | 11:08:36 | |
I guess even for 200 OK narinfos we could set Cache-Control: immutable. Just not for 404s | 11:11:06 | |
FWIW I don't know how Fastly's cache expiration works, but it's possible that longer caching could make things meaningfully faster too. I've noticed that fetching an ISO NAR from the cache goes at ~500 Mbit/s the first time and then maxes out my connection after that. (Last I checked, the HTTP headers imply that it's already cached on Fastly, just not at my edge location, but they don't seem to update right, so I'm not sure if I should trust that.) | 11:18:29 | |
no idea if the cap is Fastly–Fastly or Fastly–S3 or what, but just throwing it out there | 11:19:14 | |
IDK if S3 supports setting Cache-Control: immutable on objects, but it for sure supports Cache-Control: max-age=XXX. We could also override the TTL for the /nar path in VCL to increase the max-age to the maximum value that Fastly supports (seems to be a year), because setting this at the S3 level would require changes to Nix to set those as request headers when uploading to S3. | 11:29:02 | |
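The VCL override mentioned above could look roughly like this. It's a sketch under two assumptions: that NAR objects live under a /nar/ path prefix, and that 365 days is the maximum TTL Fastly honors; the service's real VCL may differ:

```
sub vcl_fetch {
  # NARs are content-addressed, so a long TTL is safe to assume here.
  # Everything else (e.g. narinfos, including cached 404s) keeps its
  # shorter origin TTL so later uploads and key rollovers still expire.
  if (req.url ~ "^/nar/") {
    set beresp.ttl = 365d;
  }
}
```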
In reply to @arianvp:matrix.org: That makes it very hard to update the contents, if we want to roll out new keys or update nar paths to point elsewhere | 11:34:49 | |
(probably low value for narinfos anyway considering how small they are and it not helping 404 latency?) | 11:42:30 | |
In reply to @flokli:matrix.org: Okay, so not for narinfos. But for NARs this seems totally safe, right? | 11:47:46 | |
theoretically, a drv hash does not uniquely identify the built results. however I heard that rebuilding something in the cache without changing drv hash is not something that can feasibly be done right now, so I assume the risk is very low | 11:48:30 | |
maybe there could be an issue if a NAR has legal problems and we need to take it down? but I have to assume Fastly has knobs to purge stuff manually if we have to | 11:48:53 | |
Fastly can purge yes | 11:49:04 | |
https://github.com/NixOS/infra/pull/727/files hypothetical proposal | 12:30:41 | |
heads up – https://github.com/NixOS/nixpkgs/pull/415566 – we're expecting on the Darwin team end that we'll want to turn off x86_64-darwin on the jobsets sometime after the 26.05 branch-off and before the release of 28.11, most likely after either the 26.05 or 27.05 branch-off | 13:06:07 | |
(if it's around branch-off, then for the unstable branch only of course, until the end of the support period for the branched-off release) | 13:06:38 | |
That will help staging* iterations quite a bit, I expect. | 13:12:27 |