Message | Time |
---|---|
2 Mar 2024 | ||
raitobezarius: Thanks for the hardware info. With a single 60-disk megaserver with 1 Gbit/s link, you'd likely bottleneck on bandwidth immediately if many people use it. | 22:14:38 | |
(technically, AWS AZs have a minimum distance between each other, contrary to other clouds' definitions of "AZs", e.g. GCP AFAIK) | 22:15:29 | |
(but that's just my pedantic brain) | 22:15:39 | |
raitobezarius: That's not pedantic, it's a perfectly valid topic. When the OVH fire happened, the DCs were so close that the fire could spread from one to the next. At that time, I checked it for Hetzner. My assessment from the photos is that a fire is unlikely to spread between Hetzner DCs, but the fire brigade might still shut down the whole DC park if one catches fire. So you'd have risk of downtime, but not loss. | 22:18:48 | |
Yeah, the more I look at it, the more I like the rented idea because it also enables a smoother ramp-up | 22:19:44 | |
there's also some potential value in the foundation not having to manage assets, as opposed to operational costs | 22:22:39 | |
ah fun fact btw https://lists.debian.org/debian-snapshot/2024/02/msg00003.html | 22:23:22 | |
olasd told me "this is what happens when you have 17 architectures used by 3 persons" when I pinged him about that hexa :D | 22:23:54 | |
also copying what I was saying on the #dev channel to make sure we have everything in one history: | 22:24:45 | |
(AFAIK nobody has made a proper call on what kind of availability target we'd like to hit, so it's hard to know what kind of HA requirements as well as staffing we'd need) | 22:26:48 | |
to be fair, I'd expect nobody to know | 22:27:05 | |
Arguably, I think the hard metric is the durability one | 22:27:05 | |
Availability is one that matters but with a CDN in front, a lot of stuff can be mitigated | 22:27:21 | |
And during us-east-1 outages, I don't think there was much to be noticed | 22:27:30 | |
I still think that if we run a "hot" / "recent" cache on Hetzner while keeping all the historical stuff on AWS, we can likely decrease the bill by a lot | 22:28:56 | |
In reply to @hexa:lossy.network: (it seems a political decision too tbh) | 22:29:27 | |
("how many MB/year are you OK to lose?") | 22:29:34 | |
really depends on which MBs you are going to lose 😛 | 22:32:02 | |
<insert meme about the dog "no choose; only lose"> | 22:32:17 | |
In reply to @delroth:delroth.net: I don't understand that; the cost on AWS is the historical stuff, because the cost per TB is high on AWS. The other way around, the argument would make sense | 22:34:31 | |
the cost on AWS is in large part bandwidth | 22:34:45 | |
(80TB/month) | 22:34:51 | |
I don't have the exact breakdown, but $thousands/month | 22:35:14 | |
I think it's 3K-ish | 22:35:22 |
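The bandwidth and cost figures above can be sanity-checked with back-of-the-envelope arithmetic. A minimal sketch in Python, assuming the 80 TB/month traffic volume stated above and illustrative tiered per-GB egress list prices (assumptions for illustration; the project's actual negotiated AWS rates are not given in the discussion):

```python
# Sanity-check of the figures discussed above: can a 1 Gbit/s link
# sustain 80 TB/month, and what does that much egress roughly cost?
# The per-GB tier prices below are assumed list-style rates, not the
# project's actual bill.

TB = 1000**4  # decimal terabyte, in bytes

monthly_traffic_tb = 80
seconds_per_month = 30 * 24 * 3600

# Average throughput needed to push 80 TB over a 30-day month.
avg_bytes_per_s = monthly_traffic_tb * TB / seconds_per_month
avg_mbit_per_s = avg_bytes_per_s * 8 / 1e6
print(f"average throughput: {avg_mbit_per_s:.0f} Mbit/s")  # roughly 247 Mbit/s
# A single 1 Gbit/s link covers the *average*, but leaves little
# headroom for peaks -- the bottleneck concern raised above.

# Tiered egress estimate. Tiers are (size_in_GB, assumed $/GB).
tiers = [(10_000, 0.09), (40_000, 0.085), (float("inf"), 0.07)]

def egress_cost(gb: float) -> float:
    """Walk the pricing tiers and sum the cost of `gb` of egress."""
    cost = 0.0
    remaining = gb
    for size, price in tiers:
        step = min(remaining, size)
        cost += step * price
        remaining -= step
        if remaining <= 0:
            break
    return cost

monthly_cost = egress_cost(monthly_traffic_tb * 1000)
print(f"estimated egress bill: ${monthly_cost:,.0f}/month")  # roughly $6,400
```

At these assumed rates the estimate lands in the mid single-digit thousands per month, consistent with the "$thousands/month" ballpark above; the lower "3K-ish" figure would be consistent with discounted rates or part of the traffic being served from cache.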