NixOS Binary Cache Self-Hosting (!CcTBuBritXGywOEGWJ:matrix.org)
Topic: about how to host a very large-scale binary cache and more
159 members, 55 servers

21 Aug 2023
09:07:03 @linus:schreibt.jetzt (in reply to @elvishjerricco:matrix.org: "you can't use raidz on special I'm pretty sure"): ah yeah
09:08:00 @elvishjerricco:matrix.org: I've only got two of these Optanes in here, so my redundancy level is sort of different. But it's Optane, so it should resilver fast in the event of a failure, and they should be durable as hell. So it's not too big a worry.
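
The layout being described is roughly the following; pool and device names here are hypothetical, and the "no raidz for special" restriction is taken from the chat above:

# Hypothetical pool/device names. Per the discussion above, a special vdev
# can't be raidz, so two Optane drives are attached as a two-way mirror:
zpool add tank special mirror /dev/disk/by-id/nvme-optane0 /dev/disk/by-id/nvme-optane1
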
09:08:31 @linus:schreibt.jetzt: yeah, and losing a binary cache isn't the end of the world I think?
09:08:43 @linus:schreibt.jetzt: and you'd have backups of any more important stuff, right? ;)
09:08:45 @elvishjerricco:matrix.org: right; granted there's other stuff I like on this pool but none of it matters
09:08:53 @elvishjerricco:matrix.org: and yes :P
09:09:28 @elvishjerricco:matrix.org: the other machine has a single very big hard drive as a send/recv target
09:09:47 @linus:schreibt.jetzt (in reply to @julienmalka:matrix.org: "On my own attic deployment I get 0.17052771681498935"): you can also replace file_size with chunk_size to see the ratio for dedup-only (without compression)
09:09:55 @linus:schreibt.jetzt: which is like 47% for me
09:10:05 @linus:schreibt.jetzt: * which is like 45% for me
09:11:33 Julien (@julienmalka:matrix.org) (in reply to @linus:schreibt.jetzt: "which is like 45% for me"): Yeah, 43%
09:12:15 Julien (@julienmalka:matrix.org): So this is still a big storage saving.
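
The ratios above are (bytes actually stored) / (original NAR bytes). A sketch of the kind of query being discussed; only the column names file_size and chunk_size come from the chat, while the table layout and the use of nar_size as the original size are assumptions about attic's schema, which may well differ:

# Sketch only; attic's real schema may differ. file_size = compressed,
# deduplicated bytes on disk; chunk_size = uncompressed deduplicated bytes;
# nar_size = original, pre-dedup NAR size (assumed).
psql attic -c "SELECT sum(file_size)::numeric / (SELECT sum(nar_size) FROM nar) AS ratio FROM chunk;"
# ~0.17 with file_size (dedup + compression); swapping in chunk_size gives
# the dedup-only ratio (~0.43-0.47 in the deployments quoted above).
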
11:24:55 @elvishjerricco:matrix.org: Linux Hackerman: Oh wow yea. Tested out copying a big closure from the cache and the "querying path" stuff was very noticeably stop-and-go and took a good amount of time. I've now added Optane special metadata and migrated the data with a send/recv to and from the same pool. I used a completely different closure to make sure it wasn't just remembering the cache hit, and that querying part is effectively instant now.
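
Roughly what that migration looks like; dataset and pool names are hypothetical. Adding a special vdev only affects newly written blocks, so the data has to be rewritten (here via send/recv within the same pool) for existing metadata to land on the Optane devices:

# Hypothetical names. Rewriting the dataset re-allocates its metadata (and any
# blocks covered by special_small_blocks) onto the new special vdev.
zfs snapshot tank/cache@migrate
zfs send tank/cache@migrate | zfs recv tank/cache-new
# after verifying the copy:
zfs destroy -r tank/cache
zfs rename tank/cache-new tank/cache
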
11:25:09 @linus:schreibt.jetzt: \o/
11:25:27 @elvishjerricco:matrix.org: Optane is kind of insane
11:26:02 @elvishjerricco:matrix.org: there was a huge influx of supply like 6 months ago or something, when Intel announced they were killing the division, so I managed to pick up four 110 GB NVMe sticks at a quarter of their normal price
11:26:15 @linus:schreibt.jetzt: nice
22 Aug 2023
05:30:57 @elvishjerricco:matrix.org: Whoa; just did a full system update with the "new" cache storage, and while Optane made the querying instant, the download bandwidth was way reduced. zpool iostat 1 showed a pretty steady read bandwidth of ~80 MB/s, but I wasn't getting anywhere near that over the wire; whereas on the SSD I saturated the gigabit easily.
05:32:30 @elvishjerricco:matrix.org: (also this pool can easily do sequential reads upwards of 500 MB/s)
06:38:37 @linus:schreibt.jetzt: huh
06:38:41 @linus:schreibt.jetzt: did you rewrite the nars as well?
06:38:58 @linus:schreibt.jetzt: I could imagine that happening if you're downloading a lot of small nars, but wouldn't expect it for big ones
06:39:20 @linus:schreibt.jetzt: * I could imagine that happening if you're downloading a lot of small nars and those nars are on HDD, but wouldn't expect it for big ones
13:46:12 Zhaofeng Li (@zhaofeng:zhaofeng.li) (in reply to @elvishjerricco:matrix.org: "I didn't realize accessing narinfos was such a burden"): (finally trying to catch up with stuff) narinfos aren't actually individual small files, the server is doing a database query
23 Aug 2023
00:01:06 Sofi (@sofo:matrix.org) joined the room.
04:07:56 @elvishjerricco:matrix.org (in reply to @linus:schreibt.jetzt: "did you rewrite the nars as well?"): yep. Did a send/recv of the whole dataset. Maybe I needed to make the small blocks property a little bigger.
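
The "small blocks property" is presumably ZFS's special_small_blocks. A sketch of checking and raising it, with hypothetical dataset names and sizes; as with the migration above, only newly written (or rewritten) blocks are affected:

# Hypothetical dataset/sizes. Blocks no larger than special_small_blocks are
# allocated on the special (Optane) vdev; bigger blocks stay on the main vdevs.
zfs get recordsize,special_small_blocks tank/cache
zfs set special_small_blocks=64K tank/cache
# Setting it equal to recordsize would send *all* new writes to the special
# vdev, so the threshold has to fit within the Optane devices' capacity.
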
04:08:29 @elvishjerricco:matrix.org (in reply to Zhaofeng Li: "narinfos aren't actually individual small files, the server is doing a database query"): I'm still using a silly ol' simple nar file cache :)
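
A "simple nar file cache" here presumably means a plain directory of .narinfo and .nar files served over HTTP. A minimal sketch of populating one, with hypothetical paths and key names:

# Hypothetical paths. This writes <hash>.narinfo files plus nar/* into
# /srv/binary-cache, which any static web server can then serve.
nix copy --to file:///srv/binary-cache "$(readlink -f /run/current-system)"
# Optionally sign the narinfos so clients will trust the cache:
nix store sign --key-file /etc/nix/cache-key.sec --store file:///srv/binary-cache --all
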
09:12:47 @linus:schreibt.jetzt: Anyone else using attic and getting "errors" like this?

copying path '/nix/store/s4jqyj35hii03rs7j5n6vn7gpgp6ja81-source' from 'http://attic.geruest.sphalerite.tech:8080/magic'...
warning: error: unable to download 'http://attic.geruest.sphalerite.tech:8080/magic/nar/s4jqyj35hii03rs7j5n6vn7gpgp6ja81.nar': HTTP error 200 (curl error: Transferred a partial file); retrying in 267 ms
warning: error: unable to download 'http://attic.geruest.sphalerite.tech:8080/magic/nar/s4jqyj35hii03rs7j5n6vn7gpgp6ja81.nar': HTTP error 200 (curl error: Transferred a partial file); retrying in 640 ms
warning: error: unable to download 'http://attic.geruest.sphalerite.tech:8080/magic/nar/s4jqyj35hii03rs7j5n6vn7gpgp6ja81.nar': HTTP error 200 (curl error: Transferred a partial file); retrying in 1122 ms
warning: error: unable to download 'http://attic.geruest.sphalerite.tech:8080/magic/nar/s4jqyj35hii03rs7j5n6vn7gpgp6ja81.nar': HTTP error 200 (curl error: Transferred a partial file); retrying in 2698 ms
09:15:54 @andreas.schraegle:helsinki-systems.de: I've seen that "error" before, but not with attic. Sadly don't remember why/when exactly, though.
09:17:00 @linus:schreibt.jetzt: hm

CURLE_PARTIAL_FILE (18)
A file transfer was shorter or larger than expected. This happens when the server first reports an expected transfer size, and then delivers data that does not match the previously given size.
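
One way to check that from the outside, using the URL from the log above: compare the Content-Length the server advertises with the number of bytes it actually delivers.

# URL copied from the log above. If the downloaded size ends up smaller than
# the advertised Content-Length, the server (or a proxy in front of it) is
# cutting the transfer short, which curl reports as CURLE_PARTIAL_FILE (18).
url=http://attic.geruest.sphalerite.tech:8080/magic/nar/s4jqyj35hii03rs7j5n6vn7gpgp6ja81.nar
curl -sI "$url" | grep -i '^content-length'
curl -so /dev/null -w 'downloaded: %{size_download} bytes\n' "$url"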
