!RROtHmAaQIkiJzJZZE:nixos.org

NixOS Infrastructure

397 Members
Next Infra call: 2024-07-11, 18:00 CEST (UTC+2) | Infra operational issues backlog: https://github.com/orgs/NixOS/projects/52 | See #infra-alerts:nixos.org for real-time alerts from Prometheus.

120 Servers



23 Feb 2026
hexa (@hexa:lossy.network): so likely an issue with the M.2 slot on the other board [13:08:31]
hexa (@hexa:lossy.network): kudos to hetzner for being super easy to work with on this machine [13:09:12]
sinan (@sinan:sinanmohd.com) changed their profile picture. [13:20:05]
Vladimír Čunát (@vcunat:matrix.org): Thanks for dealing with all this. [13:23:36]
hexa (@hexa:lossy.network) [13:24:28]:

Product 	Previous price 	New price as of 1 April 2026
CAX11 (HEL1) 	€ 3.29 	€ 4.49
CPX42 (NBG1) 	€ 19.49 	€ 25.49
CX42 (HEL1) 	€ 15.90 	€ 20.99
Object Storage (additional storage per 1 TB-hour) (NBG1) 	€ 0.0067 	€ 0.0087
Object Storage (monthly base price) (NBG1) 	€ 4.99 	€ 6.49
AX51-NVMe (FSN1) 	€ 63.10 	€ 64.99
AX101 (HEL1) 	€ 101.60 	€ 104.65
Mac Mini M1 (FSN1) 	€ 52.10 	€ 53.66
RX170 (HEL1) 	€ 167.30 	€ 172.32
RX220 (HEL1) 	€ 217.30 	€ 223.82
AX102 (FSN1) 	€ 107.30 	€ 122.30
EX44 (HEL1) 	€ 37.30 	€ 42.30
AX162-R (FSN1) 	€ 207.30 	€ 242.30
hexa (@hexa:lossy.network): https://docs.google.com/spreadsheets/d/1f3tdqcovFXO36aqRplk__XApIPF0BAMwdxzvF8cy9MI/edit?usp=sharing [13:33:02]
hexa (@hexa:lossy.network): ~165 EUR increase [13:33:11]
Tom (@tom:dragar.de): Or in other words: nearly four AX41-NVME in additional costs per month 😥 [14:11:08]
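As a rough sanity check on that figure, the per-unit deltas in the price table above can be summed. A minimal sketch, assuming one instance of each product and omitting the per-TB-hour object storage rate; the actual ~165 EUR total presumably weights these by the server counts tracked in the linked spreadsheet:

```python
# Per-unit monthly price changes (EUR) copied from Hetzner's table above.
# Assumes ONE of each product; the real fleet runs several of some types.
PRICES = {
    "CAX11 (HEL1)": (3.29, 4.49),
    "CPX42 (NBG1)": (19.49, 25.49),
    "CX42 (HEL1)": (15.90, 20.99),
    "Object Storage base (NBG1)": (4.99, 6.49),
    "AX51-NVMe (FSN1)": (63.10, 64.99),
    "AX101 (HEL1)": (101.60, 104.65),
    "Mac Mini M1 (FSN1)": (52.10, 53.66),
    "RX170 (HEL1)": (167.30, 172.32),
    "RX220 (HEL1)": (217.30, 223.82),
    "AX102 (FSN1)": (107.30, 122.30),
    "EX44 (HEL1)": (37.30, 42.30),
    "AX162-R (FSN1)": (207.30, 242.30),
}

# Monthly increase per product, then the total for one unit of each.
deltas = {name: round(new - old, 2) for name, (old, new) in PRICES.items()}
total = round(sum(deltas.values()), 2)
print(deltas["AX162-R (FSN1)"], total)
```

The single biggest per-unit jump is the AX162-R at +35 EUR/month; one of each product comes to well under 165 EUR, which is consistent with multiple instances of some types being in play.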
24 Feb 2026
cdepillabout (@cdepillabout:matrix.org) left the room. [07:39:17]
K900 (@k900:0upti.me): Going to bonk the next unstable-small eval which should be any minute now [12:13:11]
25 Feb 2026
tfc (@tfc:matrix.org): Thanks, Merz [10:24:27]
hexa (@hexa:lossy.network): For once, actually not his fault [10:26:31]
Emil Thorsøe (@jkarlson:kapsi.fi): is it Schultz? [10:31:06]
Emil Thorsøe (@jkarlson:kapsi.fi): Obama? [10:31:09]
Emil Thorsøe (@jkarlson:kapsi.fi): gotta be Obama [10:31:13]
lassulus (@lassulus:lassul.us): sam altman [12:26:54]
isabel (@isabel:isabelroses.com) changed their profile picture. [21:51:38]
26 Feb 2026
Lily Foster (@lily:lily.flowers) changed their profile picture. [14:01:10]
27 Feb 2026
amadaluzia[tde] (@amadaluzia:tchncs.de) changed their profile picture. [03:54:05]
Jeremy Fleischman (jfly) (@jfly:matrix.org): do we have any alerts to tell us if a zfs scrub failed or if a zfs pool is degraded? the only zfs-specific alert i see is for pool free space [07:06:46]
Jeremy Fleischman (jfly) (@jfly:matrix.org): lol nevermind, i see we just added an unhealthy check: https://github.com/NixOS/infra/commit/1c46bbda28fe056dd4a4b9a0c95e5602fdf5f738 [07:21:28]
Jeremy Fleischman (jfly) (@jfly:matrix.org) [07:23:03]:

but i did play with this a bit, and i see that a pool that fails a scrub still shows up as ONLINE:

$ sudo zpool status -v
  pool: tank
 state: ONLINE
status: One or more devices has experienced an error resulting in data
	corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
	entire pool from backup.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-8A
  scan: scrub repaired 0B in 00:00:00 with 3 errors on Thu Feb 26 23:18:53 2026
config:

	NAME                            STATE     READ WRITE CKSUM
	tank                            ONLINE       0     0     0
	  /home/jeremy/tmp/fsing/disk1  ONLINE       0     0     8

errors: Permanent errors have been detected in the following files:

        /tank/hello.txt
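The failure mode above is exactly why checking `state:` alone is insufficient: the pool reports ONLINE while a vdev carries checksum errors and files are permanently corrupted. A minimal Python sketch of the kind of check an alerter could layer on top, parsing plain `zpool status` text; this is an illustration, not how the infra's exporter actually works:

```python
import re

def pool_has_errors(status_text: str) -> bool:
    """Detect problems that `state: ONLINE` alone would hide.

    Looks for nonzero READ/WRITE/CKSUM counters in the config table,
    and for the explicit "Permanent errors" section, instead of
    trusting the top-level pool state.
    """
    for line in status_text.splitlines():
        # Matches config-table rows like:
        #   disk1  ONLINE  0  0  8
        m = re.match(
            r"\s*\S+\s+(ONLINE|DEGRADED|FAULTED)\s+(\d+)\s+(\d+)\s+(\d+)", line
        )
        if m and any(int(n) > 0 for n in m.groups()[1:]):
            return True
    return "Permanent errors have been detected" in status_text
```

Run against the paste above (8 checksum errors on disk1) this returns True even though every `STATE` column says ONLINE.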
Jeremy Fleischman (jfly) (@jfly:matrix.org): seems like the prometheus zfs exporter we use doesn't export scrub status: https://github.com/pdf/zfs_exporter/issues/20, which was closed in favor of https://github.com/pdf/zfs_exporter/issues/5, which seems to be blocked on adding support for the new zfs cli json output [07:35:27]
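For context, the "new zfs cli json output" is the `zpool status -j` mode added in recent OpenZFS releases. A hypothetical sketch of what consuming it for scrub errors could look like; the field names used here (`pools` -> name -> `scan_stats` -> `errors`) are assumptions for illustration, not the confirmed schema, so check the actual `-j` output on your zfs version first:

```python
def scrub_errors(pool_json: dict) -> dict:
    """Pull per-pool scrub error counts out of `zpool status -j`-style JSON.

    CAUTION: the key names ("pools", "scan_stats", "errors") are
    illustrative assumptions about the JSON shape, not a documented
    contract; an exporter would need to match the real schema.
    """
    return {
        name: int(pool.get("scan_stats", {}).get("errors", 0))
        for name, pool in pool_json.get("pools", {}).items()
    }
```

In an exporter, the input would come from running `zpool status -j` and JSON-decoding its stdout, with the resulting counts exposed as a gauge metric for alerting.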
K900 (@k900:0upti.me): I am killing all builds on all non-staging non-darwin branches right now [13:02:54]
K900 (@k900:0upti.me): Because of /nix/store/j4ra5i3f9x6bk3y6aq6ma17z1hlqr18d-nixos-system-konata-26.05.20260227.bde6ce6 [13:02:57]
K900 (@k900:0upti.me): * Because of https://lore.kernel.org/all/bb9ab61c-3bed-4c3d-baf0-0bce4e142292@moonlit-rail.com/ [13:03:05]
K900 (@k900:0upti.me): @vcunat can you pause the jobsets so we don't get any new evals [13:03:51]


