| 3 Apr 2025 |
emily | GitHub really doesn't like people doing shallow clones of the same repo repeatedly, fwiw. (they've had other package managers change their update mechanism because of that in the past) | 19:22:16 |
jade_ | i would never recommend this over a tarball input. if it's a nix bug, nix should fix it. git clone is very slow and resource intensive compared to a tarball fetch such as the github: flake URL scheme. | 20:44:34 |
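For context, the two fetch styles under discussion look roughly like this as flake inputs (a sketch; the input names are illustrative, and `shallow=1` is the query parameter that requests a shallow clone for a git input):

```nix
{
  inputs = {
    # Tarball fetch: downloads a snapshot archive via GitHub's API.
    nixpkgs-tarball.url = "github:NixOS/nixpkgs/nixos-unstable";

    # Git fetch: a real clone; shallow=1 asks for a depth-1 shallow clone.
    nixpkgs-git.url = "git+https://github.com/NixOS/nixpkgs?ref=nixos-unstable&shallow=1";
  };
}
```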
| 4 Apr 2025 |
Mic92 | Shallow clones when updating are significantly faster for me than downloading tarballs and it's not slower than downloading tarballs for an initial clone. | 10:56:02 |
Mic92 | No, it's not the default because git inputs have a `revCount` attribute that cannot be computed from a shallow clone. | 10:58:58 |
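The `revCount` limitation can be demonstrated locally (a sketch with a hypothetical throwaway repo; `git rev-list --count` is the kind of full-history commit count that `revCount` requires):

```shell
# Sketch: a depth-1 shallow clone cannot see the full history,
# so a revCount-style commit count comes out wrong.
set -eu
tmp=$(mktemp -d)
git init -q "$tmp/src"
git -C "$tmp/src" -c user.email=a@b.c -c user.name=a commit -q --allow-empty -m one
git -C "$tmp/src" -c user.email=a@b.c -c user.name=a commit -q --allow-empty -m two
# file:// forces the smart transport, which honors --depth.
git clone -q --depth 1 "file://$tmp/src" "$tmp/shallow"
echo "full:    $(git -C "$tmp/src" rev-list --count HEAD)"      # 2
echo "shallow: $(git -C "$tmp/shallow" rev-list --count HEAD)"  # 1
```

The shallow clone only counts the commits it actually has, so any `revCount` derived from it would disagree with an existing flake.lock.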
teto | do you know of a reference discussing this? is it a CPU vs. bandwidth tradeoff (shallow clones being more CPU-intensive)? | 11:01:14 |
teto | is revCount that useful? I don't think I've ever used it | 11:01:38 |
Mic92 | GitHub uses them in their official checkout action ... | 11:01:49 |
Mic92 | Agreed but removing it would change evaluation of existing flake.locks | 11:02:31 |
emily | I think it was CocoaPods or Homebrew or both | 11:19:42 |
emily | I believe serving shallow clones is expensive, I guess because it is the CPU cost of the Git protocol without the network savings? | 11:19:59 |
emily | I think they prefer a permanent non-shallow clone that gets fetched normally, or tarball downloads | 11:20:14 |
Mic92 | OK, but why is their CI not downloading tarballs? | 13:02:59 |
emily | no idea :) but I guess CI probably runs a lot less than people hit package indexes | 13:06:03 |
Mic92 | I think it also makes a big difference if you just do this for nixpkgs instead of many small repos | 13:06:34 |
Mic92 | Which I think is what CocoaPods is doing | 13:07:11 |
Alyssa Ross | CocoaPods at the time was one big repo | 13:08:06 |
Alyssa Ross | I'd imagine it's a lot more expensive with one big repo than with many small ones | 13:08:30 |
Alyssa Ross | https://blog.cocoapods.org/Master-Spec-Repo-Rate-Limiting-Post-Mortem/ | 13:08:59 |
Mic92 | Also the question is whether those issues still persist. This was 2016. They have completely revamped their internal git implementation and made it distributed. | 13:15:03 |
Mic92 | Because I don't see any performance degradation when fetching stuff this way. | 13:16:05 |
Mic92 | If not, I have even more of a reason to fetch my git repository from elsewhere. | 13:17:24 |
Mic92 | Just checked: CocoaPods has 20 times as many files as nixpkgs | 13:20:07 |
emily | feel like hydra distributing nixexprs is still what makes the most sense after all this time :) | 14:05:05 |
Mic92 | Maybe for some stuff but for my dotfiles, where I am hacking on my nixpkgs fork, the shallow clone is many times faster if I do single commits for bug fixes. | 14:06:19 |
Mic92 | It should be even cheaper for GitHub to compute this delta than to serve me the whole of nixpkgs all over again. | 14:07:32 |
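The incremental-update claim can be sketched the same way (hypothetical throwaway repo; the point is that after the initial shallow clone, a later fetch only has to transfer the new objects rather than the whole tree again):

```shell
set -eu
tmp=$(mktemp -d)
git init -q -b main "$tmp/src"
git -C "$tmp/src" -c user.email=a@b.c -c user.name=a commit -q --allow-empty -m one
git clone -q --depth 1 "file://$tmp/src" "$tmp/clone"
# Upstream gains one commit...
git -C "$tmp/src" -c user.email=a@b.c -c user.name=a commit -q --allow-empty -m two
# ...and a shallow fetch picks up just the new tip without re-cloning.
git -C "$tmp/clone" fetch -q --depth 1 origin main
git -C "$tmp/clone" rev-parse FETCH_HEAD
```

A tarball input, by contrast, has no way to express "only what changed" and re-downloads a full snapshot on every update.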
tea | In reply to @qyliss:fairydust.space (https://blog.cocoapods.org/Master-Spec-Repo-Rate-Limiting-Post-Mortem/): If I understand it correctly, the problem here was that CocoaPods initially did a shallow clone but then upgraded it to a full clone, which is expensive: https://github.com/CocoaPods/CocoaPods/issues/4989#issuecomment-193772935 | 15:48:46 |