| 24 Aug 2023 |
flokli | Rust in kernel VFS when | 19:43:10 |
raitobezarius | In reply to @flokli:matrix.org Rust in kernel VFS when :DDDDDDDDDDDDDDDDDD | 19:43:21 |
Zhaofeng Li | In reply to @flokli:matrix.org "This is already the case actually - I don't realize the actual files on disk currently, you can look at them through fuse" Very cool, didn't know that! | 19:44:00 |
raitobezarius | https://cs.tvl.fyi/depot/-/blob/tvix/store/src/fuse/mod.rs | 19:44:23 |
raitobezarius | if I'm not wrong | 19:44:45 |
flokli | It's all a bit WIP; there's no seek support yet, because I want to do that in concert with verified streaming | 19:45:12 |
flokli | But it should all happen in the next few months | 19:45:30 |
flokli | But it should already be good enough for builds | 19:46:51 |
flokli | I'll talk about it during nixcon | 19:47:24 |
BMG | In reply to @flokli:matrix.org I'll talk about it during nixcon You giving a scheduled talk? | 20:10:11 |
raitobezarius | a smol talk about a certain reimplementation of Nix | 20:10:26 |
BMG | Can always chat about it anyway over a few 🍻 | 20:10:37 |
flokli | Smalltalk | 20:10:37 |
raitobezarius | something something about putting the oxide inside the nix | 20:10:45 |
raitobezarius | ok let's push this (NixCon) schedule | 20:10:56 |
@elvishjerricco:matrix.org | Ok, I redid my dumb NAR cache and ZFS dataset. I made a spreadsheet of all the nar file sizes and found that nars <= 512KiB account for 1.2% of the data but 80% of the files. So I rsync'd the data to a new dataset so that I could set recordsize=1M and special_small_blocks=512K. Now there's 7GiB on the special Optane vdev, the rest of the 371GiB is on the HDDs, and it's fast as hell | 23:50:10 |
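[Editor's note: a minimal sketch of the dataset setup described above, as ZFS commands. The pool/dataset names and paths are hypothetical; the property values are the ones from the message. These properties only affect data written after they are set, hence the rsync into a fresh dataset, and special_small_blocks requires a special vdev (here, the Optane device) already attached to the pool.]

```shell
# Hypothetical pool/dataset names; property values are from the message above.
# Small blocks (<= 512K) land on the special vdev; larger records go to the HDDs.
zfs create -o recordsize=1M -o special_small_blocks=512K tank/nar-cache

# Rewrite the existing data so the new properties apply (they only take
# effect for new writes, not for records already on disk):
rsync -a /tank/old-nar-cache/ /tank/nar-cache/
```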
raitobezarius | send us statistics | 23:50:22 |
@elvishjerricco:matrix.org | what are you interested in? | 23:50:44 |
raitobezarius | if you run a heavy duty transfer, IOPS/throughput? | 23:51:16 |
raitobezarius | at the ZFS level | 23:51:21 |
raitobezarius | but also at the application level | 23:51:28 |
raitobezarius | if you can grab those | 23:51:32 |
@elvishjerricco:matrix.org | sure, I'll zpool iostat <pool> 1 | tee stats.log while I do a nix copy of a big closure | 23:52:37 |
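[Editor's note: spelled out, that measurement might look like the sketch below. The pool name is taken from the iostat output that follows; the destination store URI and store path are hypothetical placeholders in the chat's own `<...>` style.]

```shell
# In one terminal: sample pool-level IOPS and throughput once per second,
# logging to a file for later analysis.
zpool iostat wrenn-mirrorpool 1 | tee stats.log

# In another terminal: copy a large closure out of the local store over SSH
# (hypothetical destination host and store path).
nix copy --to ssh://<host> /nix/store/<hash>-big-closure
```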
raitobezarius | very cool | 23:54:42 |
@elvishjerricco:matrix.org | Nothing too impressive; I'm only using a gigabit network after all:
capacity operations bandwidth
pool alloc free read write read write
---------------- ----- ----- ----- ----- ----- -----
wrenn-mirrorpool 5.83T 5.18T 81 122 1.08M 3.59M
wrenn-mirrorpool 5.83T 5.18T 581 0 90.6M 0
wrenn-mirrorpool 5.83T 5.18T 705 0 97.0M 0
wrenn-mirrorpool 5.83T 5.18T 647 0 111M 0
wrenn-mirrorpool 5.83T 5.18T 650 0 132M 0
wrenn-mirrorpool 5.83T 5.18T 392 238 83.9M 3.71M
wrenn-mirrorpool 5.83T 5.18T 687 0 107M 0
wrenn-mirrorpool 5.83T 5.18T 680 0 96.4M 0
wrenn-mirrorpool 5.83T 5.18T 687 0 113M 0
wrenn-mirrorpool 5.83T 5.18T 582 0 110M 0
wrenn-mirrorpool 5.83T 5.18T 539 234 101M 3.71M
wrenn-mirrorpool 5.83T 5.18T 585 0 141M 0
wrenn-mirrorpool 5.83T 5.18T 513 0 108M 0
wrenn-mirrorpool 5.83T 5.18T 559 0 112M 0
wrenn-mirrorpool 5.83T 5.18T 581 0 106M 0
wrenn-mirrorpool 5.83T 5.18T 567 235 111M 3.71M
wrenn-mirrorpool 5.83T 5.18T 473 0 93.9M 0
wrenn-mirrorpool 5.83T 5.18T 539 0 135M 0
wrenn-mirrorpool 5.83T 5.18T 507 0 105M 0
wrenn-mirrorpool 5.83T 5.18T 568 0 115M 0
wrenn-mirrorpool 5.83T 5.18T 472 236 102M 3.71M
wrenn-mirrorpool 5.83T 5.18T 472 0 107M 0
| 23:55:09 |
@elvishjerricco:matrix.org | but I'm clearly maxing out that gigabit | 23:55:31 |
@elvishjerricco:matrix.org | which is all I wanted :) | 23:55:36 |
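[Editor's note: a quick sanity check that those read numbers really are gigabit wire speed.]

```shell
# Theoretical ceiling of a 1 Gbit/s link, ignoring protocol overhead:
echo $(( 1000000000 / 8 ))   # 125000000 bytes/s, i.e. ~119 MiB/s
# TCP/IP and SSH framing typically leave ~110 MiB/s of payload, consistent
# with the ~90-140M read column above (pool-level reads can briefly burst
# past line rate due to prefetch and buffering).
```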
raitobezarius | delicious | 23:55:52 |