| 2 Feb 2026 |
raitobezarius | introducing the concept of secrets into nix that way makes me nervous but not sure i have clear arguments on why | 22:16:43 |
raitobezarius | also i still have this feeling that it shouldn't be the invocation's role to tell what is allowed to access a secret or not, because if you are the attacker and you can control that invocation you can simply authorize your own hacked derivation to access the secrets | 22:17:47 |
embr | In reply to @raitobezarius:matrix.org introducing the concept of secrets into nix that way makes me nervous but not sure i have clear arguments on why (my gut reaction to this is "don't put secrets in the nix store" is nice and straightforward, making it more complicated feels like it could easily become a footgun that one day leads to disaster) | 22:18:11 |
raitobezarius | yeah but here it's not a trivial "putting secrets in the nix store" | 22:18:30 |
raitobezarius | if you have a socket in the sandbox and you use it to sign a binary and put a signed binary in the store | 22:18:39 |
raitobezarius | you are not putting secrets in the nix store | 22:18:44 |
raitobezarius | this property is held | 22:18:48 |
raitobezarius | (the socket sandbox can be backed by a PKCS#11 over UDS signer for example) | 22:19:08 |
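A minimal sketch of the socket-backed signing setup described above: the key lives only on the signer side of a Unix domain socket, and the builder inside the sandbox only ever sees payload-in, signature-out. A real deployment would speak PKCS#11 to an HSM; here HMAC-SHA256 stands in for the asymmetric signature so the sketch stays stdlib-only, and all names are invented for illustration.

```python
import hashlib
import hmac
import socket
import threading

# Hypothetical sketch: the signing key is held OUTSIDE the build sandbox.
# The builder only holds one end of a Unix domain socket, so the secret
# never enters the sandbox and cannot accidentally end up in $out.
SIGNING_KEY = b"never-crosses-the-socket"  # lives only on the signer side

def signer(conn: socket.socket) -> None:
    """Read a payload, return its signature. The key stays on this side."""
    payload = conn.recv(65536)
    conn.sendall(hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest().encode())
    conn.close()

def sandboxed_build(conn: socket.socket, artifact: bytes) -> bytes:
    """What the derivation builder does: ship bytes out, get a signature back."""
    conn.sendall(artifact)
    signature = conn.recv(4096)
    conn.close()
    # $out gets artifact + signature, never the key itself
    return artifact + b"\nsignature: " + signature

build_side, signer_side = socket.socketpair()
threading.Thread(target=signer, args=(signer_side,)).start()
out = sandboxed_build(build_side, b"some reproducible binary")
print(out.decode())
```

The "putting secrets in the store" property holds: only the signed artifact crosses back into the build.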
raitobezarius | obviously if people start pulling key material into the sandbox that way and accidentally write it into $out | 22:19:27 |
raitobezarius | mama mia | 22:19:28 |
raitobezarius | is this how we make UEFI test keys go in production builds? | 22:19:41 |
WeetHet | Oh, I see, thanks | 22:54:36 |
Jules Lamur | oh that's right, I did not think of that 👍️ | 22:54:47 |
Jules Lamur | Not addressing your point directly, but even if the invocation passes secrets' "references" (i.e. files in my previous examples), that does not prevent the actual secrets store from having authorization. For files it's the kernel doing its thing, but you could imagine having secret references with other "fetchers", for example nix-build --secret sec1 bao:foo/bar/baz or something like that, and OpenBao does its job checking that the user running the command has access to the secret. | 23:00:27 |
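A sketch of this hypothetical "secret reference fetcher" idea: the invocation only names a reference like `bao:foo/bar/baz`, and the backend enforces its own access policy, independent of anything the invocation claims. The scheme, ACL, and all function names here are invented; a real OpenBao backend would check the caller's token against server-side policy.

```python
from typing import Callable, Dict

# Hypothetical sketch: the nix-build invocation passes only a secret
# *reference*; the backend the reference resolves to does authorization
# itself. All names here are invented for illustration.

class AccessDenied(Exception):
    pass

# Backend registry: scheme -> fetcher(user, path) -> secret bytes
FETCHERS: Dict[str, Callable[[str, str], bytes]] = {}

def bao_fetch(user: str, path: str) -> bytes:
    """Stand-in for an OpenBao client: the *store* checks the policy,
    not the invocation that asked for the secret."""
    acl = {"foo/bar/baz": {"alice"}}  # policy lives server-side
    if user not in acl.get(path, set()):
        raise AccessDenied(f"{user} may not read {path}")
    return b"s3cret-" + path.encode()

FETCHERS["bao"] = bao_fetch

def resolve_secret(user: str, reference: str) -> bytes:
    """Resolve e.g. 'bao:foo/bar/baz' via the registered backend."""
    scheme, _, path = reference.partition(":")
    return FETCHERS[scheme](user, path)

print(resolve_secret("alice", "bao:foo/bar/baz"))
```

The point being made: even a compromised invocation that names every secret reference it likes still has to get past the store's own authorization.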
Jules Lamur | If the user is compromised and can run arbitrary commands it's already game over, even if they can't map secrets to derivations? | 23:01:59 |
Jules Lamur | Also, would it be a problem that, by design, these derivations cannot be reproducible? A lot of the nixpkgs ones are not so I guess that would be only a "philosophical problem"? | 23:10:27 |
Jules Lamur | (Reproducible / deterministic in the sense that even with the same signing key, the signed binary differs) | 23:14:23 |
raitobezarius | the reproducible problem can be fixed by extending the Nix model | 23:14:31 |
raitobezarius | for example, there could be some sort of special input-addressed derivations which are deterministic modulo verification of the artifacts with a certain public key that needs to be declared in the drv | 23:15:04 |
raitobezarius | so the output is input-addressed, modulo public key verification | 23:15:17 |
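A sketch of this "input-addressed modulo key verification" idea: the derivation's identity is computed from its inputs plus the declared verification key, and any output whose signature verifies against that key is accepted under that identity, even when two signing runs produce byte-different outputs. This is not how Nix works today; it is the proposed model, and HMAC-SHA256 stands in for a real public-key signature to keep the sketch stdlib-only.

```python
import hashlib
import hmac

# Hypothetical model: store-path identity is a function of the inputs and
# the key declared in the drv, NOT of the output bytes.
def drv_hash(inputs: list[bytes], verify_key: bytes) -> str:
    h = hashlib.sha256()
    for i in sorted(inputs):
        h.update(hashlib.sha256(i).digest())
    h.update(verify_key)
    return h.hexdigest()

def accept_output(output: bytes, signature: str, verify_key: bytes) -> bool:
    """An output is valid iff its signature verifies against the declared key."""
    expected = hmac.new(verify_key, output, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

key = b"declared-in-the-drv"
inputs = [b"source.tar", b"builder.sh"]
path = drv_hash(inputs, key)

# Two non-deterministic signing runs: byte-different outputs, same store path.
out_a = b"binary v1 signed at t=1"
out_b = b"binary v1 signed at t=2"
sig_a = hmac.new(key, out_a, hashlib.sha256).hexdigest()
sig_b = hmac.new(key, out_b, hashlib.sha256).hexdigest()
print(path, accept_output(out_a, sig_a, key), accept_output(out_b, sig_b, key))
```

Both outputs verify and so both are acceptable substitutes for the same input-addressed path, which is what recovers a reproducibility-like property for non-deterministic signing.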
Jules Lamur | that would be nice | 23:15:34 |
raitobezarius | the problem is that we are having the discussion over a vague/abstract infrastructure that does this signing thing | 23:17:29 |
raitobezarius | sure, there might not be an arb exec primitive in the real world, but if you have arb foobar injection and arb drv eval and arb […], maybe you have something equivalent | 23:18:01 |
raitobezarius | additionally, Lix is pretty explicit | 23:18:11 |
raitobezarius | its sandbox is not a security boundary | 23:18:16 |
raitobezarius | https://docs.lix.systems/manual/lix/stable/installation/nix-security.html | 23:19:03 |
raitobezarius |
Nevertheless, the Lix team does not consider multi-user mode a strong security boundary, and does not recommend running untrusted user-supplied Nix language code on privileged machines, even if it is secure to the best of our knowledge at any moment in time.
| 23:19:06 |
raitobezarius | i think zooming out, some things are important:
- being able to verify input provenance via signatures (addresses the source code / .drv recipe trust)
- being able to identify a derivation cryptographically as a function of its inputs, which are themselves trusted, etc.
- being able to trust that the code that should be executed is executed (i know some people are playing around with Nitro Enclave derivation builders with attestations)
if you can relate these pieces and the final output (i.e. something proof-shaped), that's pretty strong evidence that you are not going to sign random bytes that someone manipulated through all the pipeline layers? | 23:26:03 |
raitobezarius | where you make tradeoffs is use-case dependent | 23:26:08 |