| 24 Aug 2023 |
@elvishjerricco:matrix.org | Ok I redid my dumb nar cache and zfs dataset. I made a spreadsheet of all the nar file sizes and found that nars <= 512KiB account for 1.2% of the data, but 80% of the files. So I rsync'd the data to a new dataset so that I could set recordsize=1M and special_small_blocks=512K. Now there's 7GiB on the special optane vdev and the rest of the 371GiB is on the HDDs, and it's fast as hell | 23:50:10 |
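A minimal sketch of the migration described above, assuming the pool is named wrenn-mirrorpool and already has the Optane special vdev attached; the dataset names and paths are illustrative, not taken from the conversation:

    # Size distribution check (how figures like "1.2% of the data, 80% of
    # the files" for nars <= 512KiB could be reproduced; path is hypothetical):
    find /wrenn-mirrorpool/nar-cache -type f -printf '%s\n' \
      | awk '{ n++; t += $1; if ($1 <= 512*1024) { sn++; st += $1 } }
             END { printf "%.1f%% of data, %.1f%% of files\n",
                   100*st/t, 100*sn/n }'

    # New dataset: 1MiB records for the big nars, and any block <= 512KiB
    # stored on the special (Optane) vdev instead of the HDDs.
    zfs create -o recordsize=1M -o special_small_blocks=512K \
        wrenn-mirrorpool/nar-cache-new

    # Rewrite the data under the new properties; ZFS only applies
    # recordsize/special_small_blocks to newly written blocks, hence the rsync.
    rsync -a /wrenn-mirrorpool/nar-cache/ /wrenn-mirrorpool/nar-cache-new/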
raitobezarius | send us statistics | 23:50:22 |
@elvishjerricco:matrix.org | what are you interested in? | 23:50:44 |
raitobezarius | if you run a heavy duty transfer, IOPS/throughput? | 23:51:16 |
raitobezarius | at the ZFS level | 23:51:21 |
raitobezarius | but also at the application level | 23:51:28 |
raitobezarius | if you can grab those | 23:51:32 |
@elvishjerricco:matrix.org | sure, I'll zpool iostat <pool> 1 | tee stats.log while I do a nix copy of a big closure | 23:52:37 |
raitobezarius | very cool | 23:54:42 |
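The measurement setup, roughly; the pool name comes from the output below, while the copy destination and store path are hypothetical stand-ins for "a big closure":

    # In one shell: per-second pool statistics, logged.
    zpool iostat wrenn-mirrorpool 1 | tee stats.log

    # In another shell: push a large closure to generate sustained load
    # (destination and installable are illustrative).
    nix copy --to ssh://some-builder /run/current-system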
@elvishjerricco:matrix.org | Nothing too impressive; I'm only using a gigabit network after all:
                    capacity     operations     bandwidth
pool              alloc   free   read  write   read  write
----------------  -----  -----  -----  -----  -----  -----
wrenn-mirrorpool  5.83T  5.18T     81    122  1.08M  3.59M
wrenn-mirrorpool  5.83T  5.18T    581      0  90.6M      0
wrenn-mirrorpool  5.83T  5.18T    705      0  97.0M      0
wrenn-mirrorpool  5.83T  5.18T    647      0   111M      0
wrenn-mirrorpool  5.83T  5.18T    650      0   132M      0
wrenn-mirrorpool  5.83T  5.18T    392    238  83.9M  3.71M
wrenn-mirrorpool  5.83T  5.18T    687      0   107M      0
wrenn-mirrorpool  5.83T  5.18T    680      0  96.4M      0
wrenn-mirrorpool  5.83T  5.18T    687      0   113M      0
wrenn-mirrorpool  5.83T  5.18T    582      0   110M      0
wrenn-mirrorpool  5.83T  5.18T    539    234   101M  3.71M
wrenn-mirrorpool  5.83T  5.18T    585      0   141M      0
wrenn-mirrorpool  5.83T  5.18T    513      0   108M      0
wrenn-mirrorpool  5.83T  5.18T    559      0   112M      0
wrenn-mirrorpool  5.83T  5.18T    581      0   106M      0
wrenn-mirrorpool  5.83T  5.18T    567    235   111M  3.71M
wrenn-mirrorpool  5.83T  5.18T    473      0  93.9M      0
wrenn-mirrorpool  5.83T  5.18T    539      0   135M      0
wrenn-mirrorpool  5.83T  5.18T    507      0   105M      0
wrenn-mirrorpool  5.83T  5.18T    568      0   115M      0
wrenn-mirrorpool  5.83T  5.18T    472    236   102M  3.71M
wrenn-mirrorpool  5.83T  5.18T    472      0   107M      0
| 23:55:09 |
@elvishjerricco:matrix.org | but I'm clearly maxing out that gigabit | 23:55:31 |
@elvishjerricco:matrix.org | which is all I wanted :) | 23:55:36 |
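For reference, 1 Gbit/s works out to 1000/8 = 125 MB/s raw, or roughly 118 MB/s of TCP payload at a 1500-byte MTU, so the read column hovering around 100-130 MB/s above is about what a saturated gigabit link looks like; the occasional samples above line rate are disk-side reads (likely ZFS prefetch) rather than network payload.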
raitobezarius | delicious | 23:55:52 |
@elvishjerricco:matrix.org | optane makes me happy | 23:56:36 |
raitobezarius | hehehe | 23:59:19 |
| 25 Aug 2023 |
| thomaslepoix joined the room. | 12:47:50 |
| 30 Aug 2023 |
| ajs124 joined the room. | 17:39:53 |
| @andreas.schraegle:helsinki-systems.de left the room. | 17:40:53 |
| 5 Sep 2023 |
| @rover:aguiarvieira.pt left the room. | 13:50:10 |
| 6 Sep 2023 |
| tired joined the room. | 18:37:45 |
| 8 Sep 2023 |
| @ulli:hrnz.li left the room. | 20:42:08 |
| 9 Sep 2023 |
| Moritz Sanft joined the room. | 12:15:08 |
| 21 Sep 2023 |
| dedmunwalk joined the room. | 23:06:50 |
| 24 Sep 2023 |
| mib 🥐 joined the room. | 12:21:06 |
| 27 Sep 2023 |
| mib 🥐 changed their display name from mib to mib 🥐. | 05:53:08 |
| 28 Sep 2023 |
@elvishjerricco:matrix.org | How often do you all GC your nix build machines? | 06:21:59 |
Jonas Chevalier | It depends on how much pressure there is on the disk. Typically daily, but I have seen scenarios where weekly was enough, or hourly was needed. | 07:45:04 |
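A sketch of what those policies can look like in practice; the retention window and thresholds are illustrative, not a recommendation:

    # Age-based GC, run from a timer or cron at whatever cadence
    # the disk pressure demands (daily/weekly/hourly, per the above):
    nix-collect-garbage --delete-older-than 7d

    # Or let the Nix daemon GC on demand via nix.conf thresholds:
    # GC starts when free space drops below min-free and stops once
    # max-free is available again (both values are in bytes).
    #   min-free = 10737418240   # 10 GiB
    #   max-free = 53687091200   # 50 GiB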
| 30 Sep 2023 |
| hdhog joined the room. | 15:22:38 |
| 2 Oct 2023 |
| temp4096 joined the room. | 05:54:47 |
| 3 Oct 2023 |
| temp4096 left the room. | 10:00:16 |