24 Aug 2023 |
flokli | This stuff heavily depends on how good your caching in front of this is | 10:12:16 |
flokli | Of course, if we don't cache and always reach out to S3 to always assemble stuff it'll be more requests, no surprise | 10:12:40 |
flokli | But if you can keep the hot NARs in fastly or whatever caching layer we use, hopefully the less frequent but much smaller deduped NAR contents will outweigh the greater number of requests you need to reassemble less hot paths, if you need to. | 10:13:37 |
flokli | I'd probably even go with our own caching layer in between, so we have a bit more control over it. | 10:13:57 |
@linus:schreibt.jetzt | You can also reduce the amount of requests (but also the space savings) by adjusting the chunking parameters | 10:13:58 |
@linus:schreibt.jetzt | and wouldn't it make more sense to cache chunks than assembled nars? | 10:14:29 |
flokli | It depends on how smart your edge is | 10:14:47 |
flokli | if your NAR-assembling thing is running at the edge, caching the chunks before assembly, rather than the assembled NARs might work. | 10:15:18 |
flokli | But there's usually some limits on how many requests a single request can generate (looking at you, cloudflare) | 10:15:36 |
flokli | so YMMV | 10:15:41 |
@linus:schreibt.jetzt | oh right, yeah, if it's just a CDN without extra smarts that makes sense | 10:15:56 |
@linus:schreibt.jetzt | but depending on how costly requests are, it could make sense to have caching both behind and in front of the assembly bit | 10:16:28 |
flokli | I still think you should optimize for disk storage on S3 / $backingStore, so small chunk sizes, because everything in front is cacheable. | 10:16:33 |
@linus:schreibt.jetzt | at a large scale at least | 10:16:34 |
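The trade-off linus mentions (chunking parameters controlling request count vs. space savings) can be illustrated with a toy content-defined chunker. This is a rough sketch under assumptions, not the actual chunking used by tvix-castore or any real store: it cuts a chunk whenever the low `mask_bits` bits of a hash over a small trailing window are zero, so a smaller `mask_bits` means more, smaller chunks (better dedup, more requests) and a larger one means fewer, bigger chunks.

```python
import hashlib

def chunk(data: bytes, mask_bits: int = 6,
          min_size: int = 16, max_size: int = 256) -> list[bytes]:
    """Toy content-defined chunker (illustration only, not FastCDC).

    A boundary is placed at position i when the low `mask_bits` bits of a
    hash over the 8 bytes ending at i are all zero, subject to min/max
    chunk sizes. Average chunk size grows roughly as 2**mask_bits.
    """
    chunks: list[bytes] = []
    start, window = 0, 8
    for i in range(len(data)):
        length = i - start + 1
        if length < min_size:
            continue
        # Hash a small window ending at i; boundaries depend on content,
        # not absolute offsets, which is what makes dedup survive shifts.
        h = int.from_bytes(
            hashlib.blake2b(data[max(start, i - window + 1):i + 1],
                            digest_size=8).digest(), "big")
        if (h & ((1 << mask_bits) - 1)) == 0 or length >= max_size:
            chunks.append(data[start:i + 1])
            start = i + 1
    if start < len(data):
        chunks.append(data[start:])  # trailing remainder
    return chunks
```

With this sketch, dropping `mask_bits` from 8 to 4 on the same input produces noticeably more chunks, which is exactly the knob being discussed: storage dedup on the backing store vs. number of requests the assembly layer has to make.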
| BMG joined the room. | 14:55:34 |
BMG | Hey, I've been looking into the binary cache protocol today and have noticed that once you do a copy, you can never update the narinfo again. If you sign the path locally with a new key and want to push it, well, you can't. | 15:02:47 |
BMG | Found these related issues https://github.com/NixOS/nix/issues/4221 https://github.com/NixOS/nix/issues/7562 | 15:02:56 |
BMG | Am I right? | 15:03:03 |
@linus:schreibt.jetzt | BMG: there's a dedicated nix store copy-sigs command, I wonder if that works? | 15:08:54 |
BMG | Not that I've been able to make it work yet | 15:11:03 |
@linus:schreibt.jetzt | ok, then I'm not sure. But yeah there are a lot of weird things about narinfos in flat-file binary caches | 15:11:45 |
@linus:schreibt.jetzt | (also paths only being able to have one deriver is weird in general) | 15:12:22 |
BMG | In reply to @linus:schreibt.jetzt: "BMG: there's a dedicated nix store copy-sigs command, I wonder if that works?" It seems to be geared towards copying signatures from a remote store into your local store. Can't see a way of copying from local to remote | 15:12:45 |
@linus:schreibt.jetzt | might work if you pass --store file:///... | 15:13:18 |
@linus:schreibt.jetzt | (or s3:/// or whatever, as appropriate) | 15:13:27 |
BMG | That means copying from that store into your local. I'm looking at updating a remote cache after I've signed something again locally | 15:13:55 |
@linus:schreibt.jetzt | no, --store is the "destination" store | 15:14:19 |
BMG | Well I don't have a use case, just wanted to confirm that uploading a narinfo is a one and done action. You have to remove it remotely in order to upload again | 15:14:20 |
@linus:schreibt.jetzt | --substituter is where it's copied from | 15:14:33 |
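For context on what "updating" a narinfo in a flat-file cache amounts to: a `.narinfo` is just a small key/value text file, so adding a signature means rewriting the whole object and re-uploading it. The sketch below is a hypothetical helper (not Nix code, and the field values are made up) showing the merge step such a re-upload would need, including not duplicating a `Sig:` line that is already present.

```python
def add_signature(narinfo_text: str, new_sig: str) -> str:
    """Hypothetical helper: merge an extra 'Sig:' line into .narinfo text.

    Illustrates the rewrite-and-reupload step discussed above; this is
    not part of Nix itself. Existing lines are preserved, and the new
    signature is appended only if it is not already present.
    """
    lines = narinfo_text.rstrip("\n").split("\n")
    existing = {line.split(": ", 1)[1]
                for line in lines if line.startswith("Sig: ")}
    if new_sig not in existing:
        lines.append(f"Sig: {new_sig}")
    return "\n".join(lines) + "\n"
```

A caller would fetch the `.narinfo` object, run it through something like this with the locally produced signature, and upload the result back to the same key, which is exactly the step the flat-file binary cache layout has no dedicated API for.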
BMG | [image attachment: image.png] | 15:14:40 |