| 6 Apr 2026 |
| Eli Saado changed their profile picture. | 11:04:08 |
| Eli Saado changed their profile picture. | 11:05:02 |
| 7 Apr 2026 |
| @oleg20082009:matrix.org joined the room. | 21:17:34 |
| @oleg20082009:matrix.org left the room. | 21:38:35 |
| 8 Apr 2026 |
| johnhamelink joined the room. | 07:31:59 |
johnhamelink | Hey folks, I wrote a Nix flake for https://crowci.dev/ (a WoodpeckerCI fork). My flake runs podman quadlets via quadlet-nix. I'm hitting an issue when building that I'm hoping someone can shed light on: when an agent (runner) container runs nix build, it seems able to exceed the resource restrictions set in the quadlet configuration (via PodmanArgs), so long builds get OOM-killed. What I really want is for the container to be constrained to its resource limits. My /etc/containers/systemd container configuration (generated by quadlet-nix) looks like this: https://gist.github.com/johnhamelink/80995130d2afc1cedee31b501cb3e689
My Nix flake is here in case you're interested: https://codefloe.com/crowci/crowci-flake
 | 07:52:40 |
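For context, a minimal quadlet `.container` unit with memory limits might look like the sketch below. The unit name, image, and values are illustrative, not taken from the gist; on cgroups v2 the limit can be applied either through `PodmanArgs` or through a standard `[Service]` directive on the generated unit.

```ini
# Hypothetical /etc/containers/systemd/crow-agent.container sketch
[Container]
Image=docker.io/crowci/crow-agent:latest
# Passed straight through to `podman run`
PodmanArgs=--memory=2g --memory-swap=2g

[Service]
# systemd-level hard cap on the unit's whole cgroup (cgroups v2)
MemoryMax=2G
```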
johnhamelink | With the above container configuration, you can see here that the nix process run by conmon bursts right past 2G of memory (PID 133965): | 08:51:09 |
johnhamelink |  Download screenshot-20260408-09:51:29.png | 08:52:01 |
johnhamelink | Meanwhile podman stats shows only 17-19MB of memory usage | 08:52:12 |
johnhamelink | OK! I figured it out: the agent container uses docker.socket to spin up its own containers - which is why the nix build process isn't a direct child of the agent container - and those spawned containers weren't receiving the resource limits. The authors thought ahead and added configuration for this, which, when applied, kills the container when it reaches the limit. Now I just need to figure out how to throttle the spawned container instead of killing it outright | 10:35:27 |
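On cgroups v2 the throttle-versus-kill distinction maps to `memory.high` versus `memory.max`: crossing `memory.high` puts the cgroup under heavy reclaim pressure (slowing it down), while crossing `memory.max` invokes the OOM killer. One sketch, assuming the spawned containers end up in a systemd-managed cgroup you can drop a unit override onto (the values are illustrative):

```ini
# Hypothetical systemd drop-in for the unit owning the build cgroup.
[Service]
# Throttle via reclaim pressure once past 2G...
MemoryHigh=2G
# ...and only OOM-kill as a last resort at 3G.
MemoryMax=3G
```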
johnhamelink | I was able to resolve the ram problem with zramSwap.enable = true; Problem solved :) | 12:12:08 |
jaredmontoya | Does anyone know what to do now that promtail is gone?
My use case is using promtail to scrape journald on a 1GB-RAM Raspberry Pi. promtail used 23-32MB of RAM, but the supposed alternatives (both Grafana Alloy and fluent-bit) use more than 600MB of RAM | 12:14:25 |
jaredmontoya | and I can't give up 60% of my Raspberry Pi's RAM just to send its logs to Loki | 12:14:54 |
goeranh | maybe just https://www.freedesktop.org/software/systemd/man/latest/systemd-journal-remote.html, or rsyslog? | 12:19:11 |
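The `systemd-journal-remote` route pairs a receiver on the central host with `systemd-journal-upload` on the Pi, which pushes journal entries over HTTP (port 19532 is the documented default). A sketch of the uploader side, with a hypothetical collector hostname:

```ini
# Hypothetical /etc/systemd/journal-upload.conf on the Raspberry Pi;
# the matching systemd-journal-remote service listens on the collector.
[Upload]
URL=http://collector.example:19532
```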
Sandro 🐧 | I wanted to look into victoria metrics because it is supposed to be fast and memory efficient | 13:55:45 |
Sandro 🐧 | so the opposite of OpenSearch | 13:55:54 |
| aparna changed their profile picture. | 14:20:59 |
| teumaauss joined the room. | 16:19:45 |
noradtux | uhh, victoria logs .. interesting | 16:57:57 |
magic_rb | I'm relatively happy with postgres, it gets very big but that's primarily because I haven't configured any resampling | 17:57:32 |
noradtux | I currently use graylog to collect syslog from .. everything. But that is sooooo resource heavy | 17:59:56 |
magic_rb | Telegraf + postgres here, works okay | 18:01:44 |
magic_rb | I don't notice it running. But I also have 2 CPUs and 64GB of memory | 18:02:01 |
magic_rb | I can check the memory use when i get to my laptop later today | 18:02:36 |