| 21 Aug 2023 |
@elvishjerricco:matrix.org | optane is kind of insane | 11:25:27 |
@elvishjerricco:matrix.org | there was a huge influx of supply like 6 months ago or something when intel announced they were killing the division, so I managed to pick up four 110GB nvme sticks at a quarter their normal price | 11:26:02 |
@linus:schreibt.jetzt | nice | 11:26:15 |
| 22 Aug 2023 |
@elvishjerricco:matrix.org | whoa; just did a full system update with the "new" cache storage, and while optane made the querying instant, the download bandwidth was way reduced. zpool iostat 1 showed a pretty steady read bandwidth of ~80MB/s but I wasn't getting anywhere near that over the wire; whereas on the SSD I saturated the gigabit easily | 05:30:57 |
@elvishjerricco:matrix.org | (also this pool can easily do sequential reads upwards of 500MB/s) | 05:32:30 |
@linus:schreibt.jetzt | huh | 06:38:37 |
@linus:schreibt.jetzt | did you rewrite the nars as well? | 06:38:41 |
@linus:schreibt.jetzt | I could imagine that happening if you're downloading a lot of small nars and those nars are on HDD, but wouldn't expect it for big ones | 06:38:58 |
Zhaofeng Li | In reply to @elvishjerricco:matrix.org "I didn't realize accessing narinfos was such a burden" (finally trying to catch up with stuff) narinfos aren't actually individual small files; the server is doing a database query | 13:46:12 |
| 23 Aug 2023 |
| Sofi joined the room. | 00:01:06 |
@elvishjerricco:matrix.org | In reply to @linus:schreibt.jetzt did you rewrite the nars as well? yep. Did a send/recv of the whole dataset. Maybe I needed to make the small blocks property a little bigger | 04:07:56 |
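For reference, the ZFS property being discussed can be inspected and raised like this. Pool and dataset names below are hypothetical, and special_small_blocks only affects newly written records, which is why a send/recv rewrite of the dataset was needed:

```shell
# Records at or below special_small_blocks are stored on the special vdev
# (the Optane devices here) instead of the main pool disks; applies to new
# writes only. "tank" and "tank/nar-cache" are example names, not from the log.
zpool status tank                                  # confirm a special vdev exists
zfs get special_small_blocks,recordsize tank/nar-cache
zfs set special_small_blocks=64K tank/nar-cache    # route blocks <=64K to the special vdev
```

Note that if special_small_blocks is set equal to or larger than recordsize, every record in the dataset lands on the special vdev, not just the small ones.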
@elvishjerricco:matrix.org | In reply to @zhaofeng:zhaofeng.li "narinfos aren't actually individual small files, the server is doing a database query" I'm still using a silly ole simple nar file cache :) | 04:08:29 |
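As a side note on how clients ask for narinfos: the request URL is derived from the hash part of the store path, and it looks the same on the wire whether the backend is a flat file tree (as here) or a database query (as in attic). A minimal sketch, using the store path hash from the log above:

```shell
# The narinfo URL for a store path is "<hash-part>.narinfo", where the hash
# part is everything before the first "-" in the store path's basename.
p=/nix/store/s4jqyj35hii03rs7j5n6vn7gpgp6ja81-source
hash=$(basename "$p" | cut -d- -f1)
echo "$hash.narinfo"
# → s4jqyj35hii03rs7j5n6vn7gpgp6ja81.narinfo
```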
@linus:schreibt.jetzt | Anyone else using attic and getting "errors" like this?
copying path '/nix/store/s4jqyj35hii03rs7j5n6vn7gpgp6ja81-source' from 'http://attic.geruest.sphalerite.tech:8080/magic'...
warning: error: unable to download 'http://attic.geruest.sphalerite.tech:8080/magic/nar/s4jqyj35hii03rs7j5n6vn7gpgp6ja81.nar': HTTP error 200 (curl error: Transferred a partial file); retrying in 267 ms
warning: error: unable to download 'http://attic.geruest.sphalerite.tech:8080/magic/nar/s4jqyj35hii03rs7j5n6vn7gpgp6ja81.nar': HTTP error 200 (curl error: Transferred a partial file); retrying in 640 ms
warning: error: unable to download 'http://attic.geruest.sphalerite.tech:8080/magic/nar/s4jqyj35hii03rs7j5n6vn7gpgp6ja81.nar': HTTP error 200 (curl error: Transferred a partial file); retrying in 1122 ms
warning: error: unable to download 'http://attic.geruest.sphalerite.tech:8080/magic/nar/s4jqyj35hii03rs7j5n6vn7gpgp6ja81.nar': HTTP error 200 (curl error: Transferred a partial file); retrying in 2698 ms
| 09:12:47 |
@andreas.schraegle:helsinki-systems.de | I've seen that "error" before, but not with attic. Sadly don't remember why/when exactly, though. | 09:15:54 |
@linus:schreibt.jetzt | hm
CURLE_PARTIAL_FILE (18)
A file transfer was shorter or larger than expected. This happens when the server first reports an expected transfer size, and then delivers data that does not match the previously given size.
| 09:17:00 |
@linus:schreibt.jetzt | ah, curling the URL does the same thing | 09:17:56 |
@linus:schreibt.jetzt | $ curl -v http://attic.geruest.sphalerite.tech:8080/magic/nar/ja7cry6cb9wwclhlphmffgg4fv0ky4cd.nar >/dev/null
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0* Trying [2a01:4f9:1a:f600:5650::36]:8080...
* Connected to attic.geruest.sphalerite.tech (2a01:4f9:1a:f600:5650::36) port 8080 (#0)
> GET /magic/nar/ja7cry6cb9wwclhlphmffgg4fv0ky4cd.nar HTTP/1.1
> Host: attic.geruest.sphalerite.tech:8080
> User-Agent: curl/8.1.1
> Accept: */*
>
< HTTP/1.1 200 OK
< x-attic-cache-visibility: public
< transfer-encoding: chunked
< date: Wed, 23 Aug 2023 09:17:52 GMT
<
{ [0 bytes data]
* transfer closed with outstanding read data remaining
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
* Closing connection 0
curl: (18) transfer closed with outstanding read data remaining
| 09:18:06 |
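The failure mode in that paste can be reproduced locally: a server that advertises Transfer-Encoding: chunked but closes the connection before sending the terminating zero-length chunk produces an HTTP 200 plus curl exit code 18, which matches what a missing chunk in the backing store would look like. The throwaway python3 one-liner server and the port are assumptions of this sketch:

```shell
# Throwaway server: sends a chunked response header and one 4-byte chunk,
# then closes without the terminating "0\r\n\r\n" chunk.
python3 -c '
import socket
s = socket.socket(); s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
s.bind(("127.0.0.1", 18080)); s.listen(1)
c, _ = s.accept(); c.recv(4096)
c.sendall(b"HTTP/1.1 200 OK\r\nTransfer-Encoding: chunked\r\n\r\n4\r\ndata\r\n")
c.close()
' &
sleep 1
curl -sS http://127.0.0.1:18080/
echo "curl exit: $?"   # 18 = CURLE_PARTIAL_FILE, "transfer closed with outstanding read data remaining"
```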
@linus:schreibt.jetzt | I guess I'll open an attic issue | 09:18:10 |
@linus:schreibt.jetzt | oh nvm | 09:22:42 |
@linus:schreibt.jetzt | turns out the backing s3 bucket was configured wrong and the chunks were missing | 09:23:30 |
@linus:schreibt.jetzt | though attic should probably recognise that error and report it, at least in its own log ^^ | 09:23:44 |
Julien | I get a lot of "InternalServerError: The server encountered an internal error or misconfiguration." in the middle of my attic push. Has anyone here had the same issue? | 11:05:56 |
Julien | (Usually if I relaunch the same command it will just work fine) | 11:06:29 |
@linus:schreibt.jetzt | check the atticd logs | 11:07:30 |
@linus:schreibt.jetzt | I've been having that when it fails to acquire a db connection from the pool, probably because all the connections in the pool are busy | 11:07:47 |
@linus:schreibt.jetzt | I think it has a 30s timeout | 11:07:52 |
Julien | Ah yes | 11:16:12 |
Julien | It looks like that's the problem | 11:16:29 |