| 6 Dec 2025 |
aloisw | Key word is "can" here, it all depends on the actual use case. | 14:41:18 |
Jassuko | Yes. So I guess we should figure out how to make it fast. :p | 14:41:51 |
522 it/its ⛯ΘΔ | https://sqlite.org/fasterthanfs.html#write_performance_measurements goes into it | 14:43:27 |
522 it/its ⛯ΘΔ | write perf for sqlite writes in a single transaction vs the filesystem (with no fsync) is pretty much identical on linux | 14:43:59 |
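As a concrete illustration of the "single transaction" case from that benchmark (a hedged sketch, not the benchmark's actual harness; file and table names are made up): each statement outside an explicit BEGIN/COMMIT runs in its own implicit transaction, so batching writes pays the commit overhead once instead of per statement.

```sh
sqlite3 /tmp/bench.db <<'SQL'
CREATE TABLE IF NOT EXISTS blobs (id INTEGER PRIMARY KEY, data BLOB);
-- group many inserts into one transaction: one COMMIT for the whole batch
BEGIN;
INSERT INTO blobs (data) VALUES (randomblob(10000));
INSERT INTO blobs (data) VALUES (randomblob(10000));
-- ...thousands more inserts here...
COMMIT;
SQL
```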
aloisw | Well yes, https://gerrit.lix.systems/c/lix/+/4711 and https://gerrit.lix.systems/c/lix/+/4712 are attempts to tweak SQLite settings to make it go a lot faster | 14:45:02 |
Jassuko | 100s to 1000s of writes per second is where SQLite should start to get painful in many use cases. Anything below that should just work, or be easy to tune to perform well. | 14:45:09 |
aloisw | With the default page size it spent like 40% of the time checkpointing. | 14:45:48 |
522 it/its ⛯ΘΔ | also yeah, the benchmark linked doesn't include checkpointing | 14:46:14 |
aloisw | "No checkpoint" and "huge blob" are about the opposite of what Lix is doing. | 14:47:33 |
kuruczgy | Do any of the nix hash commands have a way to ignore certain files/dirs? In particular I have to do this to hash a git repo:
mkdir /tmp/wt && git worktree add /tmp/wt HEAD && rm /tmp/wt/.git && nix hash path /tmp/wt
Is there some way to hash a git tree without having to copy it? (Possibly something that doesn't even look at the worktree, just the git objects.) | 14:48:57 |
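For reference, the same worktree-based workaround written out with cleanup (the path is illustrative; any scratch directory works):

```sh
wt=/tmp/wt
mkdir "$wt"
git worktree add "$wt" HEAD
rm "$wt/.git"        # in a linked worktree, .git is a small file, not a directory
nix hash path "$wt"
rm -rf "$wt"
git worktree prune   # drop the now-deleted worktree from git's bookkeeping
```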
K900 | nix-prefetch-git? | 14:49:35 |
aloisw | That copies, right? | 14:49:52 |
kuruczgy | Does that not copy the repo into some temporary worktree too? | 14:49:54 |
aloisw | I guess you could point it to a store on tmpfs and it at least wouldn't copy to the disk. | 14:50:39 |
kuruczgy | (In particular I probably want to avoid copying anything to my store, I have a complicated FOD to put together the source tree, and I want to check that it comes out identical to what I have checked out in my working tree.) | 14:52:08 |
aloisw | Right, but "100s to 1000s of writes per second" is pretty much exactly what Lix does. | 14:52:12 |
522 it/its ⛯ΘΔ | in a transaction per write? | 14:55:01 |
aloisw | No, one transaction per derivation I think. | 14:55:38 |
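A sketch of what that per-derivation pattern looks like at the SQLite level, contrasted with the batched case earlier; the schema here is made up and is not the real Lix database schema:

```sh
sqlite3 /tmp/store.db <<'SQL'
CREATE TABLE IF NOT EXISTS valid_paths (id INTEGER PRIMARY KEY, path TEXT, refs TEXT);
-- one small transaction per registered derivation, rather than one big one
-- for the whole evaluation: tens of thousands of derivations means tens of
-- thousands of commits, each appending WAL pages
BEGIN;
INSERT INTO valid_paths (path, refs) VALUES ('/nix/store/aaaaaaaa-drv-1', '...');
COMMIT;
BEGIN;
INSERT INTO valid_paths (path, refs) VALUES ('/nix/store/bbbbbbbb-drv-2', '...');
COMMIT;
SQL
```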
KFears (burnt out) | That's only at very high load though, no? My desktop closure size is around 17k drvs, and another 17k for FODs | 14:55:44 |
aloisw | For reference, evaluating my system configuration into a fresh store takes about 85 seconds including downloads and builds for IFD, and creates about 350k pages of WAL (with page_size = 512, haven't checked with 4096 but probably about the same). | 14:56:50 |
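Back-of-the-envelope size of that WAL traffic, using the figures from the message above:

```sh
echo $(( 350000 * 512 ))   # 179200000 bytes, i.e. roughly 180 MB of WAL per ~85 s evaluation
```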
hexa | loving the implications for build farms | 14:57:30 |
aloisw | Yes, that's exactly why it is so slow and we're having this conversation. | 14:57:32 |
hexa | thanks for looking into that | 14:57:36 |
aloisw | You eval on tmpfs, right? | 14:58:07 |
hexa | not sure what "eval on tmpfs" means | 14:59:04 |
aloisw | That the store that the machine performing the evaluation writes to is located on tmpfs. | 14:59:39 |
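A minimal sketch of what that could look like, assuming a throwaway local store rooted on a tmpfs mount (the mount point and flake attribute are illustrative):

```sh
mkdir -p /tmp/eval-store
sudo mount -t tmpfs tmpfs /tmp/eval-store
nix build --store "local?root=/tmp/eval-store" \
    .#nixosConfigurations.example.config.system.build.toplevel
```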
hexa | oh, you mean a non-default store then | 15:00:05 |
hexa | that'd be a no | 15:00:19 |