15 Jul 2025 |
shock-wave | Yep, it's working, thanks to everyone who helped out. | 07:03:44 |
17 Jul 2025 |
Arian | Fun fact: we're gonna hit 1 billion objects in our S3 bucket very soon. We're at 99997034 now | 10:29:18 |
Vladimír Čunát | We shouldn't restart the current nixpkgs:cross-trunk jobs, as the queue-runner gets stuck in a loop and doesn't do anything else:
https://github.com/NixOS/nixpkgs/pull/426071 | 11:35:26 |
hexa | amazing | 11:37:23 |
dgrig | In case anyone else felt the need to double-check whether there's a limit to the number of objects you can have in a bucket:
"There is no max bucket size or limit to the number of objects that you can store in a bucket. You can store all of your objects in a single bucket."
via https://docs.aws.amazon.com/AmazonS3/latest/userguide/BucketRestrictions.html
| 12:26:43 |
emily | You can store all of your eggs in a single basket | 12:27:46 |
hexa | we still need a new release of https://github.com/nix-community/nix-index ideally | 12:32:42 |
dgrig | (and unless that's a bad copy-paste, it's missing a digit for a billion; that's ~100M) | 13:35:17 |
raitobezarius | In reply to @hexa:lossy.network ("we still need a new release of https://github.com/nix-community/nix-index ideally"): https://github.com/nix-community/nix-index/releases/tag/v0.1.9 | 14:31:01 |
18 Jul 2025 |
fricklerhandwerk | shameless plug, PR reviews appreciated, raitobezarius | 07:24:31 |
19 Jul 2025 |
| Cobalt joined the room. | 19:20:32 |
Cobalt | Hey, would the creation of a European Nix cache mirror be of interest? $work has somewhat recently had a few hundred TB of free flash storage available after a storage system was decommissioned (and the HDDs were sold).
We would plan for a pull-through cache, i.e., likely pull from upstream only on the first request of a file and serve it from disk afterwards. | 19:27:34 |
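A minimal sketch of what such a pull-through mirror could look like, assuming an nginx front end on NixOS; the hostname, storage path, and cache sizes below are placeholders, not anything agreed in this thread:

```nix
# Hypothetical sketch, not an agreed design: an nginx pull-through cache in
# front of cache.nixos.org, expressed as a NixOS module. Hostname, storage
# path, and cache sizes are placeholders.
{ ... }:
{
  services.nginx = {
    enable = true;
    # Shared cache zone on the spare flash storage; sizes are examples only.
    appendHttpConfig = ''
      proxy_cache_path /data/nix-cache levels=1:2 keys_zone=nixcache:500m
        max_size=500g inactive=365d use_temp_path=off;
    '';
    virtualHosts."nix-mirror.example.org" = {
      locations."/" = {
        proxyPass = "https://cache.nixos.org";
        extraConfig = ''
          proxy_cache nixcache;
          proxy_cache_valid 200 365d;   # narinfo/NAR objects are effectively immutable
          proxy_cache_valid 404 1m;     # let negative lookups expire quickly
          proxy_set_header Host "cache.nixos.org";
          proxy_ssl_server_name on;     # SNI towards the upstream CDN
        '';
      };
    };
  };
}
```

Clients would then point nix.settings.substituters at the mirror while keeping the upstream cache.nixos.org public key in trusted-public-keys, since the mirror only proxies already-signed paths.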
K900 | Bandwidth is more of a concern than storage | 19:27:55 |
K900 | For an operation like this | 19:27:57 |
dramforever | is there any reason fastly isn't, well, fastly enough in europe? | 19:28:36 |
emily | my experience is that Fastly maxes out my connection on the second download | 19:28:59 |
emily | and on the first (i.e. not cached at my edge location yet) it's like 500 Mbit/s | 19:29:15 |
dramforever | there are a bunch of these kinds of mirror sites in China | 19:29:21 |
emily | dunno if limited by Fastly or S3 there | 19:29:22 |
dramforever | but well, China | 19:29:23 |
Cobalt | No, this is more about finding a use for the storage. We can't easily sell or give it away and would like to put it to some good use. We have a 40G uplink IIRC, with more for DFN | 19:29:39 |
Cobalt | The IPv6 uplink is larger but might be difficult to implement | 19:31:39 |
hexa | we can eventually push hydra results to multiple S3 buckets with the new queue-runner | 19:33:10 |
hexa | so we could in theory fan that out | 19:33:22 |
| n4ch723hr3r joined the room. | 19:40:51 |
Cobalt | That sounds interesting, but likely a lot more complex for upstream. If Fastly is enough, then this can likely also be postponed.
Thanks for the info about Hydra though. Maybe that is something to come back to later | 19:44:09 |
20 Jul 2025 |
hexa | loading build 302783248 (nixpkgs:cross-trunk:rpi.mpg123.aarch64-darwin)
queue monitor: error:
… while loading build 302783248:
… while parsing derivation '/nix/store/l43yj5i4g570a79vi4k1n2p2lla85ppg-systemd-minimal-armv6l-unknown-linux-gnueabihf-257.6.drv'
error: attribute 'disallowedReferences' must be a list of strings
| 12:50:13 |
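For reference, the usual shape of that attribute in a nixpkgs derivation is a flat list; a hypothetical minimal example (package names made up):

```nix
# Illustrative only: `disallowedReferences` is expected to be a list of
# strings/store paths. The error above suggests the queue-runner parsed
# something else out of that systemd-minimal .drv.
{ stdenv, foo }:

stdenv.mkDerivation {
  pname = "example";
  version = "1.0";
  src = ./.;
  # Fail the build if the output ends up with a runtime reference to `foo`.
  disallowedReferences = [ foo ];
}
```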
hexa | vcunat: you saw this before, right? | 12:50:20 |