| 18 Dec 2024 |
6pak | because changing the order did change which package was resolved for me | 19:47:34 |
Corngood | on the sandbox thing, how do you copy the output file into nixpkgs? that's one nice thing about using firejail for fetch-deps, update scripts, etc. you can easily give it write access to just the nixpkgs repo | 19:49:26 |
6pak | outer fetch-deps script is the one invoking the sandboxed build | 19:50:14 |
Corngood | I guess you still run an outer script outside of the sandbox? | 19:50:18 |
6pak | yeah | 19:50:23 |
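A minimal sketch of the firejail setup Corngood describes: confine the script so that only the nixpkgs checkout is writable. The paths and script name are hypothetical; `--noprofile`, `--read-only`, and `--read-write` are real firejail options.
```sh
# Hypothetical paths: confine an update script so it can write only
# inside the nixpkgs checkout, while the rest of $HOME stays read-only.
firejail --noprofile \
  --read-only="$HOME" \
  --read-write="$HOME/src/nixpkgs" \
  "$HOME/src/nixpkgs/pkgs/by-name/so/somepkg/update.sh"
```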
6pak | the protection is more against malicious upstream build scripts than against nixpkgs code | 19:50:50 |
Corngood | I wonder if anyone has discussed sandboxing of update scripts anywhere. that's basically the same thing but more widely used | 19:51:16 |
6pak | is the nixpkgs-update bot even sandboxed? | 19:52:46 |
Corngood | I'd certainly hope so, but I have no idea where/how it runs. | 19:53:39 |
Corngood | It's at least not going to be on someone's dev machine | 19:53:57 |
6pak | the stupid random order still seems to be a thing https://github.com/NuGet/NuGet.Client/blob/8791d42fb1e7582f9a0b92d1708133c3b138732a/src/NuGet.Core/NuGet.PackageManagement/PackageDownloader.cs#L159 | 20:18:16 |
6pak | so it looks like changing the order really just meant that the first source started its download first | 20:18:50 |
6pak | but that's all | 20:18:53 |
6pak | that's so stupid | 20:18:58 |
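On 6pak's reading of the linked PackageDownloader code, restore races one download per configured source and keeps whichever finishes first, so source order only decides who starts first. A toy shell illustration of that behavior (URLs made up; this is not what NuGet actually runs):
```sh
# Toy model of the race: one download per source, first finisher wins.
curl -sLo from-a.nupkg https://source-a.example/foo.1.0.0.nupkg &
curl -sLo from-b.nupkg https://source-b.example/foo.1.0.0.nupkg &
wait -n                 # returns as soon as the first download completes
kill %1 %2 2>/dev/null  # cancel whichever is still running
```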
GGG | I guess that's why nuget lockfiles should be used when you want determinism | 23:59:11 |
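For reference, a minimal sketch of the lockfile workflow GGG is suggesting, assuming an ordinary SDK-style project; `RestorePackagesWithLockFile` and `--locked-mode` are real NuGet knobs:
```sh
# Opt the project into lockfiles, then enforce them on later restores.
dotnet restore /p:RestorePackagesWithLockFile=true   # writes packages.lock.json
dotnet restore --locked-mode   # fails instead of silently re-resolving
```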
| 19 Dec 2024 |
6pak | those don't have the source, just the hash | 06:46:09 |
6pak | so a locked restore will just fail randomly if the random order changes | 06:46:41 |
6pak | that's assuming the package hash differs between the two sources, but I think that can happen with the automatic repository signing stuff? | 06:48:16 |
GGG | In reply to @6pak:matrix.org ("so a locked restore will just fail randomly if the random order changes"): No, their docs explicitly state that it'll download from the same source based on the hash | 07:36:24 |
GGG | At least that's what they claim:
> Package content mismatch: If the same package (id and version) is present with different content across repositories, then NuGet cannot ensure the same package (with the same content hash) gets resolved every time. It also does not warn/error out in such cases. Using the lock file will help you in resolving to the same versions always.
(from: https://devblogs.microsoft.com/nuget/enable-repeatable-package-restores-using-a-lock-file/) | 07:40:00 |
6pak | actually nuget uses contentHash, which ignores the signature | 12:01:29 |
6pak | which is why, at least in my case, the hash was the same for both sources in packages.lock.json | 12:01:53 |
6pak | but different in fetchNuGet | 12:02:03 |
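One way to observe what 6pak describes, assuming the same package fetched from two sources (file names here are hypothetical): the raw archive hashes differ when one copy is repository-signed, while the `contentHash` in `packages.lock.json` is computed over the content without the signature and stays identical.
```sh
# Raw hashes differ if one copy is repo-signed and the other is not...
sha256sum foo.1.0.0.from-a.nupkg foo.1.0.0.from-b.nupkg
# ...because a signed .nupkg carries an extra signature entry:
unzip -l foo.1.0.0.from-a.nupkg | grep -F .signature.p7s
# ...while the lockfile's contentHash ignores that signature:
jq '.. | .contentHash? // empty' packages.lock.json
```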
6pak | at first glance, that's not what the code does | 12:03:55 |
6pak | but it turns out I don't have a test case to check | 12:04:11 |
6pak | can we do the same in nix somehow? otherwise we won't be able to reuse any of the hashes from nuget metadata | 12:10:30 |
GGG | might be possible if we undo the signature in the fetchurl postFetch step | 12:12:28 |
GGG | only if the hashes match though | 12:12:38 |
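A rough sketch of that idea, with GGG's caveat kept in mind. `.signature.p7s` is the actual name of the embedded signature entry, and `$downloadedFile`/`$out` follow fetchurl's downloadToTemp convention, but this is only an approximation: deleting the entry does not guarantee byte-identical archives across sources, since other zip metadata can still differ.
```sh
# Sketch of a postFetch step that drops the package signature so a
# signed copy can hash like an unsigned one.
# CAVEAT: zip -d alone may not yield byte-identical archives across
# sources; leftover zip metadata differences would still break the hash.
if unzip -l "$downloadedFile" | grep -qF .signature.p7s; then
  zip -d "$downloadedFile" .signature.p7s
fi
mv "$downloadedFile" "$out"
```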