| 26 Nov 2022 |
@linus:schreibt.jetzt | Ooh you run it all in a container? | 19:25:05 |
magic_rb |
which may be a reasonable thing to do, but it'll increase complexity a lot | 19:25:12 |
magic_rb |
yeah i do | 19:25:14 |
@linus:schreibt.jetzt | What about limiting nix-daemon within the container? | 19:25:30 |
magic_rb |
nomad would still have issues with the RAM amounts; i would have to keep memory down to 4GB and then increase memory max above 8, to 12, which is a really bad idea because nomad schedules based on memory, not memory max | 19:26:30 |
magic_rb |
i may just have to split it up, nix-daemon, hydra-http and hydra-evaluator | 19:26:52 |
@linus:schreibt.jetzt | Oh yeah if you're running in nomad that sounds sensible | 19:27:14 |
magic_rb |
i don't have space for swap anywhere currently, unfortunately; when i clean up the 1TB skyhawks i can put it there, but i need to move data off them onto the 4TB WD Reds | 19:28:19 |
magic_rb |
i would still like to somehow make hydra not eval everything at once, but i would have to know a lot more c++ to do that | 19:29:08 |
@linus:schreibt.jetzt | If you're on nixos, I'd suggest trying zramSwap.enable = true in any case | 19:29:27 |
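A minimal sketch of that suggestion as NixOS configuration (memoryPercent and algorithm are shown with their module defaults, purely for illustration):

    {
      zramSwap = {
        enable = true;
        memoryPercent = 50;  # fraction of RAM exposed as compressed swap (module default)
        algorithm = "zstd";  # compression algorithm (module default)
      };
    }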
@linus:schreibt.jetzt | In reply to @magic_rb:matrix.redalder.org ("i would still like to somehow make hydra not eval everything at once…"): Or make separate jobsets :p | 19:29:39 |
magic_rb |
for flakes? how would that work | 19:29:51 |
magic_rb |
In reply to Linux Hackerman ("If you're on nixos, I'd suggest trying zramSwap.enable = true in any case"): yep, i'll do that in any case, seems like a reasonable idea | 19:30:05 |
@linus:schreibt.jetzt | Oh | 19:30:06 |
@linus:schreibt.jetzt | With different subpaths I guess | 19:30:12 |
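One way to read "different subpaths" (a sketch assuming per-directory flakes; the URLs and names are hypothetical): give each subdirectory its own flake.nix and point one Hydra jobset at each, e.g. git+https://example.com/repo?dir=packages and git+https://example.com/repo?dir=services. Each subdirectory flake would expose only its own jobs:

    {
      # repo/services/flake.nix — hypothetical; exposes only this subtree's jobs
      inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-22.11";
      outputs = { self, nixpkgs }: {
        hydraJobs = {
          # only the jobs belonging to this subdirectory
          inherit (nixpkgs.legacyPackages.x86_64-linux) hello;
        };
      };
    }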
magic_rb |
well when i push a change hydra will start evaluating everything at once anyway, it has no notion of "hey maybe evaluations can choke the system too" | 19:30:56 |
@linus:schreibt.jetzt | Afaik the evaluator will only evaluate one jobset at a time | 19:31:33 |
magic_rb |
ah ok so that may be a path forward | 19:31:45 |
@linus:schreibt.jetzt | In reply to @magic_rb:matrix.redalder.org ("ah ok so that may be a path forward"): update: I was wrong, by default it does up to 4 evals at a time. But you can configure that using the max_concurrent_evals option. | 19:59:48 |
das_j | there are magic settings that may or may not fix that | 21:00:08 |
das_j | # evaluator_initial_heap_size is basically equivalent to GC_INITIAL_HEAP_SIZE and defaults to 384MiB
# evaluator_max_memory_size is… "If our RSS exceeds the maximum, exit. The master will start a new process.". defaults to 4096 (I assume MiB)
# evaluator_workers is how many parallel evaluator workers are used. defaults to 1
# max_concurrent_evals defines how many evaluations are run concurrently. defaults to 4
extraConfig = ''
  email_notification = 1
  evaluator_max_memory_size = 4096
  evaluator_initial_heap_size = ${toString (2 * 1024 * 1024 * 1024)}
  max_output_size = ${toString (16 * 1024 * 1024 * 1024)}
  max_concurrent_evals = 3
  evaluator_workers = 4
''; | 21:00:22 |
das_j | (all of these are guesses) | 21:00:29 |
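Assuming this snippet comes from the NixOS Hydra module (the chat doesn't say), it would sit under services.hydra, roughly like this; hydraURL and notificationSender are required options, shown with hypothetical values:

    services.hydra = {
      enable = true;
      hydraURL = "https://hydra.example.org";    # hypothetical
      notificationSender = "hydra@example.org";  # hypothetical
      extraConfig = ''
        # serialize evaluations so they cannot choke the machine
        max_concurrent_evals = 1
      '';
    };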
| 1 Dec 2022 |
jackdk | I am setting up a Hydra instance, and I would like the jobs that produce deployment artifacts to not trigger deploys unless all checks have passed. It is easy enough to add all the checks to the buildInputs of the artifact jobs, but that causes a lot of spurious rebuilding. Is there a better way to achieve what I'm trying to do? | 00:29:14 |
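One pattern that might address this (a sketch, not from the conversation): nixpkgs' releaseTools.aggregate creates a Hydra aggregate job that succeeds only when all of its constituents succeed, without making the checks build inputs of the artifact, so the artifact derivation is not rebuilt when a check changes. The deploy trigger would then watch the aggregate job instead of the artifact job. All job names here are hypothetical:

    {
      # hypothetical gate job; the constituents are existing Hydra jobs
      deploy-gate = pkgs.releaseTools.aggregate {
        name = "deploy-gate";
        constituents = [
          jobs.artifact      # the deployment artifact itself
          jobs.checks.lint   # every check that must pass before deploying
          jobs.checks.tests
        ];
      };
    }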