| 6 Dec 2025 |
Jassuko | In reply to @aloisw:julia0815.de I expect the rollback journal to be disastrously slow with the Lix store database, as it does at least one fsync per commit. You can try it out yourself with the use-sqlite-wal setting if you want. How many write operations are needed per build or per other store operation? Or do you need a write per file in a derivation? Or what exactly? | 14:35:48 |
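(A minimal sketch of trying that comparison out, assuming a standard Lix/Nix CLI: use-sqlite-wal is an ordinary store setting, and pointing --store at a throwaway path makes the client open its own SQLite database rather than the daemon's. The store path and the hello attribute are placeholders.)

```sh
# Throwaway local store so the real database is untouched; the client opens its own
# db.sqlite under this root, with the rollback journal instead of WAL.
# '/tmp/throwaway-store' and '<nixpkgs>' -A hello are placeholders.
nix-build --store /tmp/throwaway-store \
  --option use-sqlite-wal false \
  '<nixpkgs>' -A hello
```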
aloisw | From what I gathered during my investigations it is roughly 1 write transaction per derivation. | 14:36:21 |
Jassuko | But that is practically nothing..? | 14:36:43 |
aloisw | Uh what? On the contrary, it is quite a lot of transactions. The huge volume of tiny transactions is also why there is so much write amplification, I think. | 14:37:31 |
Jassuko | Like… you are already writing a bunch of files for each derivation… having to do a SQLite write should be rather trivial in that scope? | 14:37:34 |
Jassuko | Or does it have some weird schema or indexes that are absurdly slow to update..? | 14:38:34 |
raitobezarius | FS writes don't have the same performance penalty as SQLite writes | 14:38:52 |
aloisw | It is not just one write, it is one write transaction where it adds the derivation and its references to the database, plus probably some index updates. And it does not appear "absurdly slow" in general, the problem is the extreme write amplification. | 14:39:58 |
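(For illustration only: roughly what such a per-path registration transaction looks like, modelled on the upstream Nix store schema's ValidPaths and Refs tables as I understand them; the exact columns and statements in Lix may differ. Run it against a scratch database, never the real store.)

```sh
# Scratch database to show the shape of one registration transaction:
# one ValidPaths row, one Refs row per reference, plus the index updates
# implied by UNIQUE(path) and the Refs primary key.
sqlite3 /tmp/scratch.db <<'SQL'
CREATE TABLE IF NOT EXISTS ValidPaths (
    id INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL,
    path TEXT UNIQUE NOT NULL,
    hash TEXT NOT NULL,
    registrationTime INTEGER NOT NULL
);
CREATE TABLE IF NOT EXISTS Refs (
    referrer  INTEGER NOT NULL,
    reference INTEGER NOT NULL,
    PRIMARY KEY (referrer, reference)
);

BEGIN IMMEDIATE;
INSERT INTO ValidPaths(path, hash, registrationTime)
    VALUES ('/nix/store/aaaa-example', 'sha256:0000', strftime('%s','now'));
-- one row per reference of the new path; a self-reference is used here
-- just to keep the example to a single ValidPaths row
INSERT INTO Refs(referrer, reference)
    VALUES (last_insert_rowid(), last_insert_rowid());
COMMIT;
SQL
```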
Jassuko | In reply to @raitobezarius:matrix.org FS writes don't have the same performance penalty as SQLite writes On the contrary… properly used, SQLite can exceed the performance of writing the same number of small files to the plain FS… | 14:40:30 |
aloisw | Key word is "can" here, it all depends on the actual use case. | 14:41:18 |
Jassuko | Yes. So I guess we should figure out how to make it fast. :p | 14:41:51 |
522 it/its ⛯ΘΔ | https://sqlite.org/fasterthanfs.html#write_performance_measurements goes into it | 14:43:27 |
522 it/its ⛯ΘΔ | write performance for SQLite (many writes in a single transaction) vs the filesystem (with no fsync) is pretty much identical on Linux | 14:43:59 |
aloisw | Well yes, https://gerrit.lix.systems/c/lix/+/4711 and https://gerrit.lix.systems/c/lix/+/4712 are attempts to tweak SQLite settings to make it go a lot faster | 14:45:02 |
Jassuko | Hundreds to thousands of writes per second is where SQLite should start to get painful in many use cases. Anything below that should just work, or be easy to make perform well. | 14:45:09 |
aloisw | With the default page size it spent like 40% of the time checkpointing. | 14:45:48 |
522 it/its ⛯ΘΔ | also yeah, the benchmark linked doesn't include checkpointing | 14:46:14 |
aloisw | "No checkpoint" and "huge blob" are about the opposite of what Lix is doing. | 14:47:33 |
kuruczgy | Do any of the nix hash commands have a way to ignore certain files/dirs? In particular I have to do this to hash a git repo:
mkdir /tmp/wt && git worktree add /tmp/wt HEAD && rm /tmp/wt/.git && nix hash path /tmp/wt
Is there some way to hash a git tree without having to copy it? (Possibly something that doesn't even look at the worktree, just the git objects.) | 14:48:57 |
K900 | nix-prefetch-git? | 14:49:35 |
aloisw | That copies, right? | 14:49:52 |
kuruczgy | Does that not copy the repo into some temporary worktree too? | 14:49:54 |
aloisw | I guess you could point it to a store on tmpfs and it at least wouldn't copy to the disk. | 14:50:39 |
kuruczgy | (In particular I probably want to avoid copying anything to my store, I have a complicated FOD to put together the source tree, and I want to check that it comes out identical to what I have checked out in my working tree.) | 14:52:08 |
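(A possible variant of the workaround above, as a sketch: git archive extracts only the committed tree, so there is no .git to remove and nothing touches the store. It still materializes a temporary copy, and note that git archive honours export-ignore attributes, which can change the resulting hash.)

```sh
# Extract the committed tree into a throwaway directory and hash that.
# Point TMPDIR at a tmpfs (e.g. /dev/shm) to keep the copy off disk.
tmp=$(mktemp -d) &&
  git archive HEAD | tar -x -C "$tmp" &&
  nix hash path "$tmp"
rm -rf "$tmp"
```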
aloisw | Right, but "100s to 1000s of writes per second" is pretty much exactly what Lix does. | 14:52:12 |
522 it/its ⛯ΘΔ | in a transaction per write? | 14:55:01 |
aloisw | No, one transaction per derivation I think. | 14:55:38 |
KFears (burnt out) | That's on a very high load, no? My desktop closure size is around 17k drvs, and another 17k for FODs | 14:55:44 |
aloisw | For reference, evaluating my system configuration into a fresh store takes about 85 seconds including downloads and builds for IFD, and creates about 350k pages of WAL (with page_size = 512, haven't checked with 4096 but probably about the same). | 14:56:50 |
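(Back-of-the-envelope from those numbers, plus a hedged one-liner, assuming the default store database location, for watching the same thing locally.)

```sh
# 350,000 WAL pages x 512 bytes/page ≈ 171 MiB of WAL written in ~85 s,
# i.e. roughly 2 MiB/s of log traffic, before the extra writes that
# checkpointing back into the main database file adds on top.
# To watch the WAL grow (and get reset by checkpoints) during an evaluation
# or build; needs read access to the store database files:
watch -n1 'stat -c %s /nix/var/nix/db/db.sqlite-wal'
```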
hexa | loving the implications for build farms | 14:57:30 |