!lymvtcwDJ7ZA9Npq:lix.systems

Lix Development

409 Members
(Technical) development of Lix, the package manager, a Nix implementation. Please be mindful of ongoing technical conversations in this channel.
135 Servers



10 Dec 2025
@qyriad:katesiria.orgQyriad
In reply to @piegames:flausch.social
Due to unfortunate design decisions made (or rather, not made) back when I was in Kindergarten, parsing is on the critical path for evaluation time
we would like to see benchmarks on this… we'd honestly sooner suspect filesystem IO as a bottleneck over parse-time
17:18:48
@qyriad:katesiria.orgQyriad
In reply to @piegames:flausch.social
Due to unfortunate design decisions made (or rather, not made) back when I was in Kindergarten, parsing is on the critical path for evaluation time
* we would like to see benchmarks/profiles on this… we'd honestly sooner suspect filesystem IO as a bottleneck over parse-time
17:19:17
@qyriad:katesiria.orgQyriad (having looked at profiles of eval before, but not in a while) 17:19:29
@piegames:flausch.socialpiegames no, parsing is super fast, mostly thanks to horrors, it's just that Nixpkgs is 3.6MLOC over 40k files and every single NixOS eval loads a good chunk of that, realistically multiple times. The main issue is that caching was never a thought in the architecture and adding it afterwards is really tricky 17:21:27
@qyriad:katesiria.orgQyriad don't forget that every single Nixpkgs package loads hundreds if not thousands of other nix files 17:22:24
@piegames:flausch.socialpiegames yes, we're close enough to that atm, there's still one full AST walk for bindVars that's costly (I tried to remove it with horrors but it inexplicably did not make anything faster) and general AST allocation cost (more bump allocators would probably help a lot there) 17:22:40
@qyriad:katesiria.orgQyriad
In reply to @qyriad:katesiria.org
don't forget that every single Nixpkgs package loads hundreds if not thousands of other nix files
(and instantiates half a gazillion derivations…)
17:22:52
@piegames:flausch.socialpiegames we won't get much faster than that (though I expect that a parse that can directly emit Bytecode should be a little bit faster still because more compact representation), but the Rust rewrite still needs to be at least as fast as now and that's no small feat 17:23:20
@piegames:flausch.socialpiegames * we won't get much faster than that (though I expect that a parser that can directly emit Bytecode should be a little bit faster still because more compact representation), but the Rust rewrite still needs to be at least as fast as now and that's no small feat 17:23:45
@commentator2.0:elia.gardenRutile (Commentator2.0) feel free to ping
In reply to @piegames:flausch.social
we won't get much faster than that (though I expect that a parser that can directly emit Bytecode should be a little bit faster still because more compact representation), but the Rust rewrite still needs to be at least as fast as now and that's no small feat
Could the Rust version possibly be written with caching in mind from the beginning?
17:25:15
@piegames:flausch.socialpiegames yes, but the issue is, what is your cache key? 17:25:37
@piegames:flausch.socialpiegames inode number and ctime might be our best bet 17:28:31
@piegames:flausch.socialpiegames but in terms of cacheable data structures, bytecode gives us that for free 17:29:02
@piegames:flausch.socialpiegames * but in terms of cacheable data structures, bytecode gives us that bit for free 17:29:28
@charles:computer.surgeryCharles
In reply to @piegames:flausch.social
yes, but the issue is, what is your cache key?
What do other interpreters do?
17:31:54
@piegames:flausch.socialpiegames I don't think any other interpreters have comparable performance requirements for parsing 17:33:30
@piegames:flausch.socialpiegames if the constraints weren't as tight we could just go content-addressed and use a hash 17:33:43
@qyriad:katesiria.orgQyriad bytecode is our::Qyriad's long term goal 17:35:18
@qyriad:katesiria.orgQyriad or really whenever someone pays us for it 17:35:25
@qyriad:katesiria.orgQyriad * or whenever someone pays us for it 17:35:36
@piegames:flausch.socialpiegames I think browser engines have sufficient control over their baseline caching pipelines and what data changes when that they can probably just track what changes easily, but I'm guessing here 17:36:08
@kfears:matrix.orgKFears (burnt out)
In reply to @piegames:flausch.social
no, parsing is super fast, mostly thanks to horrors, it's just that Nixpkgs is 3.6MLOC over 40k files and every single NixOS eval loads a good chunk of that, realistically multiple times. The main issue is that caching was never a thought in the architecture and adding it afterwards is really tricky
I don't know how many of those lines are going to be actually parsed (now I wish I could somehow learn that info), but assuming 20% utilization (that doesn't sound too unreasonable, considering that my desktop has 2k packages, but there's a lot of re-evals due to the module system, splicing, package variations etc.), that doesn't sound too terrible? Like, other interpreters and compilers probably deal with 700k LOC codebases. How do they deal with that?
17:36:40
@piegames:flausch.socialpiegames I'm dreaming of tackling it for 2.96 really 17:36:58
@piegames:flausch.socialpiegames I already have a half-working prototype from a year ago, horrors had one too, and we should soon have all the parts together for doing it 17:37:37
@piegames:flausch.socialpiegames main question is whether to do it in C++ or to block it on RPC and do it in Rust 17:37:53
@helle:tacobelllabs.nethelle (just a stray cat girl) please don't do it in C++ 17:38:13
@helle:tacobelllabs.nethelle (just a stray cat girl) we don't want to have to migrate that later on 17:38:26
@qyriad:katesiria.orgQyriad we would either block it on RPC, or bind it to existing libexpr over C bindings. the refactoring necessary for libexpr to use a bytecode interpreter over C ABI would be like half the battle for the decoupling we need to do anyway 17:40:46
@xokdvium:matrix.orgSergei Zimmerman (xokdvium) FWIW v8 uses the source code hash as the cache key https://github.com/v8/v8/blob/427f7cce6d69a2d6ce113200e8dcc1151765058c/src/snapshot/code-serializer.cc#L820-L834 17:41:09
@xokdvium:matrix.orgSergei Zimmerman (xokdvium) So the closest thing would be to hash .nix files -> cached bytecode -> cached optimizer -> .... -> profit? 17:42:32


