| 10 Dec 2025 |
KFears (they/them) | In reply to @piegames:flausch.social Due to unfortunate design decisions made (or rather, not made) back when I was in Kindergarten, parsing is on the critical path for evaluation time Is that because parsing is so unbearably slow, or is there a more cursed reason? | 17:18:25 |
Qyriad | In reply to @piegames:flausch.social Due to unfortunate design decisions made (or rather, not made) back when I was in Kindergarten, parsing is on the critical path for evaluation time we would like to see benchmarks/profiles on this… we'd honestly sooner suspect filesystem IO as a bottleneck over parse-time | 17:19:17 |
Qyriad | (having looked at profiles of eval before, but not in a while) | 17:19:29 |
piegames | no, parsing is super fast, mostly thanks to horrors, it's just that Nixpkgs is 3.6MLOC over 40k files and every single NixOS eval loads a good chunk of that, realistically multiple times. The main issue is that caching was never a thought in the architecture and adding it afterwards is really tricky | 17:21:27 |
Qyriad | don't forget that every single Nixpkgs package loads hundreds if not thousands of other nix files | 17:22:24 |
piegames | yes, we're close enough to that atm. There's still one full AST walk for bindVars that's costly (I tried to remove it with horrors, but inexplicably it didn't make anything faster), and general AST allocation cost (more bump allocators would probably help a lot there) | 17:22:40 |
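A minimal sketch of the bump-allocation idea mentioned above, assuming the `bumpalo` crate; the `Expr` node type is hypothetical and not the real Nix AST:

```rust
// Sketch: allocating AST nodes out of a bump arena instead of individual
// heap allocations, so a whole file's AST is freed in one shot.
// Assumes the `bumpalo` crate; `Expr` is a hypothetical node type.
use bumpalo::Bump;

#[derive(Debug)]
enum Expr<'arena> {
    Int(i64),
    Add(&'arena Expr<'arena>, &'arena Expr<'arena>),
}

fn main() {
    let arena = Bump::new();

    // Each `alloc` is essentially a pointer bump, not a full malloc call.
    let one = arena.alloc(Expr::Int(1));
    let two = arena.alloc(Expr::Int(2));
    let sum = arena.alloc(Expr::Add(one, two));

    println!("{sum:?}");

    // Dropping `arena` releases every node at once.
}
```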
Qyriad | In reply to @qyriad:katesiria.org don't forget that every single Nixpkgs package loads hundreds if not thousands of other nix files (and instantiates half a gazillion derivations…) | 17:22:52 |
piegames | we won't get much faster than that (though I expect that a parser that can directly emit bytecode should be a little faster still, because of the more compact representation), but the Rust rewrite still needs to be at least as fast as now and that's no small feat | 17:23:45 |
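For illustration, a sketch of the "parser that directly emits bytecode" idea on a toy grammar; the opcode set and grammar are made up and are not the actual Nix language or instruction set:

```rust
// Sketch: single-pass "parse straight to bytecode" for the toy grammar
//   expr := INT ('+' INT)*
// No AST is materialised; the parser appends to a flat instruction
// buffer as it goes. The Op set is invented for this example.
#[derive(Debug)]
enum Op {
    PushInt(i64),
    Add,
}

fn compile(src: &str) -> Result<Vec<Op>, String> {
    let mut code = Vec::new();
    for (i, term) in src.split('+').enumerate() {
        let n: i64 = term
            .trim()
            .parse()
            .map_err(|e| format!("bad integer {term:?}: {e}"))?;
        code.push(Op::PushInt(n));
        if i > 0 {
            // Left-associative: fold each new operand into the running sum.
            code.push(Op::Add);
        }
    }
    Ok(code)
}

fn main() {
    // "1 + 2 + 3" becomes PushInt(1) PushInt(2) Add PushInt(3) Add.
    println!("{:?}", compile("1 + 2 + 3").unwrap());
}
```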
Rutile (rootile) | In reply to @piegames:flausch.social we won't get much faster than that (though I expect that a parser that can directly emit bytecode should be a little faster still, because of the more compact representation), but the Rust rewrite still needs to be at least as fast as now and that's no small feat Could the Rust version possibly be written with caching in mind from the beginning? | 17:25:15 |
piegames | yes, but the issue is, what is your cache key? | 17:25:37 |
piegames | inode number and ctime might be our best bet | 17:28:31 |
piegames | but in terms of cacheable data structures, bytecode gives us that bit for free | 17:29:28 |
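A sketch of what an (inode, ctime) cache key could look like on a Unix target, using the standard library's `MetadataExt`; `CacheKey` and its use for cached bytecode are hypothetical:

```rust
// Sketch: keying a parse/bytecode cache on (device, inode, ctime),
// assuming a Unix target. `CacheKey` is a hypothetical type, not
// anything that exists in Nix today.
use std::fs;
use std::io;
use std::os::unix::fs::MetadataExt;
use std::path::Path;

#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
struct CacheKey {
    dev: u64,
    ino: u64,
    ctime: i64,
    ctime_nsec: i64,
}

fn cache_key(path: &Path) -> io::Result<CacheKey> {
    let meta = fs::metadata(path)?;
    Ok(CacheKey {
        dev: meta.dev(),
        ino: meta.ino(),
        ctime: meta.ctime(),
        ctime_nsec: meta.ctime_nsec(),
    })
}

fn main() -> io::Result<()> {
    // If the key is unchanged since the last run, the cached bytecode
    // for this file could be reused without re-reading or re-parsing it.
    println!("{:?}", cache_key(Path::new("default.nix"))?);
    Ok(())
}
```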
Charles | In reply to @piegames:flausch.social yes, but the issue is, what is your cache key? What do other interpreters do? | 17:31:54 |
piegames | I don't think any other interpreters have comparable performance requirements for parsing | 17:33:30 |
piegames | if the constraints weren't as tight we could just go content-addressed and use a hash | 17:33:43 |
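By contrast, a content-addressed key hashes the file bytes themselves; a sketch assuming the `blake3` crate (any cryptographic hash would do), with the trade-off that every file must be read and hashed even on a cache hit:

```rust
// Sketch: content-addressed cache key, i.e. a hash of the file contents.
// Assumes the `blake3` crate; unlike the (inode, ctime) key, this has to
// read and hash every file even when the cache would hit.
use std::fs;
use std::io;
use std::path::Path;

fn content_key(path: &Path) -> io::Result<blake3::Hash> {
    let bytes = fs::read(path)?;
    Ok(blake3::hash(&bytes))
}

fn main() -> io::Result<()> {
    println!("{}", content_key(Path::new("default.nix"))?);
    Ok(())
}
```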