| 4 Apr 2025 |
emily | I think they prefer a permanent non-shallow clone that gets fetched normally, or tarball downloads | 11:20:14 |
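[Editor's note: a minimal sketch of the tarball-download approach emily mentions, assuming the standard GitHub archive URL scheme for a pinned nixpkgs revision; the revision name below is just a placeholder, not anything from the conversation.]

```python
import urllib.request

# Placeholder revision; any commit hash, tag, or branch name works in
# GitHub's archive URL scheme.
rev = "master"
url = f"https://github.com/NixOS/nixpkgs/archive/{rev}.tar.gz"

# Download the source snapshot as a single tarball instead of cloning
# the repository, so no git protocol traffic is involved at all.
urllib.request.urlretrieve(url, f"nixpkgs-{rev}.tar.gz")
```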
Mic92 | OK, but why is their CI not downloading tarballs? | 13:02:59 |
emily | no idea :) but I guess CI probably runs a lot less often than people hit package indexes | 13:06:03 |
Mic92 | I think it also makes a big difference if you just do this for nixpkgs instead of many small repos | 13:06:34 |
Mic92 | Which I think is what CocoaPods is doing | 13:07:11 |
Alyssa Ross | CocoaPods at the time was one big repo | 13:08:06 |
Alyssa Ross | I'd imagine it's a lot more expensive with one big repo than with many small ones | 13:08:30 |
Alyssa Ross | https://blog.cocoapods.org/Master-Spec-Repo-Rate-Limiting-Post-Mortem/ | 13:08:59 |
Mic92 | Also the question is whether those issues still persist. That was in 2016. They have completely revamped their internal git implementation since and made it distributed. | 13:15:03 |
Mic92 | Because I don't see any performance degradation when fetching stuff this way. | 13:16:05 |
Mic92 | If not, I have even more of a reason to fetch my git repository from elsewhere. | 13:17:24 |
Mic92 | Just checked: CocoaPods has 20 times as many files as nixpkgs | 13:20:07 |
emily | feel like hydra distributing nixexprs is still what makes the most sense after all this time :) | 14:05:05 |
Mic92 | Maybe for some stuff, but for my dotfiles, where I am hacking on my nixpkgs fork, a shallow clone is many times faster when I push single commits for bug fixes. | 14:06:19 |
Mic92 | It should even be cheaper for GitHub to compute this delta than to serve me the whole of nixpkgs all over again. | 14:07:32 |
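[Editor's note: a sketch of the shallow-clone workflow Mic92 describes, assuming a hypothetical fork URL and plain git commands with the --depth flag; this illustrates why fetching a single new commit is cheap, and is not necessarily his exact setup.]

```python
import subprocess

# Hypothetical fork URL; substitute your own nixpkgs fork.
fork = "https://github.com/someuser/nixpkgs.git"

# Initial shallow clone: only the tip commit and its tree are
# transferred, not the full nixpkgs history.
subprocess.run(["git", "clone", "--depth", "1", fork, "nixpkgs"], check=True)

# Later, after pushing a single bug-fix commit to the fork, a shallow
# fetch only transfers the objects for that one commit.
subprocess.run(
    ["git", "-C", "nixpkgs", "fetch", "--depth", "1", "origin", "master"],
    check=True,
)
subprocess.run(
    ["git", "-C", "nixpkgs", "reset", "--hard", "origin/master"],
    check=True,
)
```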