NixOS Binary Cache Self-Hosting (172 Members, 59 Servers)
About how to host a very large-scale binary cache and more
| Message | Time |
|---|---|
| 2 Mar 2024 | |
| so assuming we do have the hardware, this is a compelling option, and if we offer a proper, complete proposal to the foundation, the operations make sense, and the infrastructure folks agree to it, it could be adopted, I guess | 02:18:30 |
| and to give an example of hardware costs: a NetApp DE6600 (60 × 3.5" bays) could cost something like 800 EUR, and you can fit 60 × 20TB disks in there, so 1.2PB raw capacity. 60 disks of 20TB will cost approximately 440 EUR × 60 ≈ 26.4K EUR at list price, obviously, so something like 27K EUR total. This can be spread over the 2 locations to avoid needlessly having 1.2PB × 2 (and the rest can be filled as the cache grows organically) [see the cost sketch below the log] | 02:24:08 |
| I'm ignoring server costs because honestly you can find an R730 in a trash bin, put in enough SAS cards, and plug the JBOD in | 02:24:30 |
| the tough question is whether flash is needed at all | 02:24:41 |
| if so, this can add 5K-20K EUR to the proposal | 02:24:55 |
| actually hetzner seems to have proper connection options: https://docs.hetzner.com/robot/colocation/pricing | 02:32:47 |
| they were just hidden | 02:32:48 |
| 19:19:47 | ||
| 19:27:04 | ||
| 20:15:52 | ||
| 20:59:00 | ||
| 21:12:45 | ||
| raitobezarius: Thanks for the hardware info. With a single 60-disk megaserver on a 1 Gbit/s link, you'd likely bottleneck on bandwidth immediately if many people use it. [see the bandwidth sketch below the log] | 22:14:38 |
| (technically AWS AZs have a minimum distance between each other, contrary to other clouds' definitions of "AZ", e.g. GCP AFAIK) | 22:15:29 |
| (but that's just my pedantic brain) | 22:15:39 |
| raitobezarius: That's not pedantic, it's a perfectly valid topic. When the OVH fire happened, the DCs were so close that the fire could spread from one to the next. At that time, I checked it for Hetzner. My assessment from the photos is that a fire is unlikely to spread between Hetzner DCs, but the fire brigade might still shut down the whole DC park if one catches fire. So you'd have a risk of downtime, but not of loss. | 22:18:48 |
| Yeah, the more I look at it, the more I like the rented idea, because it also enables a smoother ramp-up | 22:19:44 |
| there's also some potential value in the foundation not having to manage assets, as opposed to operational costs | 22:22:39 |
| ah, fun fact btw: https://lists.debian.org/debian-snapshot/2024/02/msg00003.html | 22:23:22 |
| olasd told me "this is what happens when you have 17 architectures used by 3 persons" when I pinged him about that, hexa :D | 22:23:54 |
| also copying what I was saying on the #dev channel to make sure we have everything in one history: | 22:25:00 |
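
As a quick sanity check of the cost figures in the 02:24:08 message, here is a minimal back-of-envelope sketch. The inputs (60 bays, 20TB per disk, roughly 440 EUR per disk, roughly 800 EUR for the DE6600 shelf) are taken from the chat; the script itself is only illustrative arithmetic, not anything the participants ran.

```python
# Back-of-envelope for the DE6600 JBOD proposal; all figures are the rough
# list prices quoted in the chat and will vary in practice.
DISKS = 60          # a NetApp DE6600 holds 60 x 3.5" drives
DISK_TB = 20        # capacity per disk, in TB
DISK_EUR = 440      # approximate list price of one 20TB disk
SHELF_EUR = 800     # rough price for a used DE6600 shelf

raw_capacity_pb = DISKS * DISK_TB / 1000    # 60 * 20 TB = 1.2 PB raw
disks_eur = DISKS * DISK_EUR                # 26,400 EUR for the disks alone
total_eur = disks_eur + SHELF_EUR           # ~27.2K EUR, matching the chat's "27K EUR"

print(f"raw capacity: {raw_capacity_pb:.1f} PB")
print(f"disks: {disks_eur:,} EUR; total with shelf: {total_eur:,} EUR")
```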
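Similarly, a hedged sketch of the bandwidth concern raised at 22:14:38: the 1 Gbit/s uplink is the assumption from that message, while the concurrent-download counts below are hypothetical loads chosen only to show how quickly a single link becomes the bottleneck.

```python
# Per-client throughput on a single shared uplink; the 1 Gbit/s figure is
# from the chat, the concurrent-download counts are hypothetical.
LINK_GBITS = 1.0
link_mb_per_s = LINK_GBITS * 1000 / 8       # 1 Gbit/s is roughly 125 MB/s total

for users in (1, 10, 50, 200):              # hypothetical concurrent downloads
    share = link_mb_per_s / users
    print(f"{users:>4} concurrent downloads -> {share:6.1f} MB/s each")
```

Even at modest concurrency, per-client throughput drops to a few MB/s, which is the bottleneck the message points at.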