| 13 Feb 2024 |
raitobezarius | * 60-80 TB/mo if my memory serves me well | 15:13:22 |
flokli | For now I don't think we can solve both problems, let's solve the bucket problem first | 15:14:01 |
flokli | Propagating some metadata, and being a bit smarter with the traffic at the CDN level is also something we can tackle, but that requires smarter clients | 15:14:38 |
| 14 Feb 2024 |
edef | so: flipside to Backblaze | 10:27:55 |
edef | they'll cover our egress costs if we commit to them, and they are vastly cheaper https://twitter.com/JakeDChampion/status/1757508820689973627 | 10:28:08 |
edef | and we get free bandwidth to Fastly | 10:28:26 |
edef | $36k/yr = $3k/mo, and no more bandwidth charges | 10:28:46 |
| 15 Feb 2024 |
hexa | I have four remote builders. Is there a simple way to have them share their store between each other for substitution? | 03:48:49 |
Jonas Chevalier | https://github.com/cid-chan/peerix maybe? I haven't tested it out yet | 09:08:38 |
Jonas Chevalier | or setup https://github.com/nix-community/harmonia on each node and configure the caches if they are static? | 09:10:28 |
@linus:schreibt.jetzt | In reply to @hexa:lossy.network (I have four remote builders. Is there a simple way to have them share their store between each other for substitution?) I think adding the others to substituters as ssh-ng stores would work, but I'm not sure how well (might perform terribly) | 11:53:30 |
hexa | thanks! peerix sounds like the simplest solution, if it (still) works. | 12:03:40 |
hexa | but seems to suffer from timeouts | 12:04:42 |
| a-kenji joined the room. | 19:15:09 |
| 19 Feb 2024 |
| rhizomes joined the room. | 04:28:19 |
| 23 Feb 2024 |
| Wanja Hentze joined the room. | 12:22:33 |
| dritonr joined the room. | 12:27:33 |
| 29 Feb 2024 |
| kip93 joined the room. | 14:33:52 |
| 1 Mar 2024 |
| patka joined the room. | 08:10:16 |
| fgaz joined the room. | 09:27:14 |
| 2 Mar 2024 |
| nh2 joined the room. | 01:42:07 |
nh2 | raitobezarius: OK I joined. Copying my message from the other channel, for context for others:
You were looking at dedicated-hosting the binary cache regarding the AWS cost sink. I run 1 PB of CephFS clusters on Hetzner, and can set that up quite easily with NixOps. Do you want to team up on this topic?
My feeling is we could host the 500 TB of binary cache on 6x SX134 servers (960 TB raw) with EC 4+2, which provides 580 TB of HA storage, and 10 Gbit/s Internet.
For 6 * 245 = 1470 EUR/month.
To transfer out, with AWS Snowball: if we can content-deduplicate the 500 TB by a factor of 2x to 250 TB (using attic or bupstash as shown on https://github.com/NixOS/nixpkgs/issues/89380), the one-off cost to ship to Germany would be ~12k EUR or so.
| 01:46:11 |
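[Editor's note: a quick sanity check of the sizing and cost arithmetic in the message above. The per-server raw capacity is derived from the quoted 960 TB across 6 servers; the erasure-coding formula (usable = raw × k / (k + m)) is a standard Ceph EC approximation, not from the chat, and the quoted 580 TB presumably subtracts Ceph overhead and headroom from the theoretical figure.]

```python
# Back-of-the-envelope check of the Hetzner proposal discussed above.
SERVERS = 6
RAW_TB_PER_SERVER = 960 / SERVERS   # 160 TB/server, derived from "960 TB raw"
EUR_PER_SERVER_MONTH = 245          # quoted Hetzner price per server

raw_tb = RAW_TB_PER_SERVER * SERVERS          # 960 TB raw
k, m = 4, 2                                   # erasure coding 4+2
usable_tb = raw_tb * k / (k + m)              # 640 TB theoretical, before overhead
monthly_eur = SERVERS * EUR_PER_SERVER_MONTH  # 1470 EUR/month

cache_tb = 500
deduped_tb = cache_tb / 2                     # assumed 2x dedup -> 250 TB to ship

print(raw_tb, usable_tb, monthly_eur, deduped_tb)
```

The theoretical 640 TB usable is above the quoted 580 TB, consistent with some capacity being reserved for filesystem overhead and rebalancing headroom.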
raitobezarius | Yes, so the problem is that the Foundation doesn't have 1470 EUR/mo | 01:46:27 |
raitobezarius | Even if we offset the Snowball cost, it's still unclear | 01:46:48 |