Hydra (!zghijEASpYQWYFzriI:nixos.org)

26 Nov 2022
[19:26:30] magic_rb:
Nomad would still have issues with the RAM amounts; I would have to keep memory down to 4 GB and then raise memory_max above 8 to 12 GB, which is a really bad idea because Nomad schedules based on memory, not memory_max.
[19:26:52] magic_rb:
I may just have to split it up into nix-daemon, hydra-http, and hydra-evaluator.
[19:27:14] Linux Hackerman:
Oh yeah, if you're running in Nomad that sounds sensible.
[19:28:19] magic_rb:
I don't have space for swap anywhere currently, unfortunately. When I clean up the 1 TB SkyHawks I can put it there, but first I need to move the data off them onto the 4 TB WD Reds.
[19:29:08] magic_rb:
I would still like to somehow make Hydra not eval everything at once, but I would have to know a lot more C++ to do that.
[19:29:27] Linux Hackerman:
If you're on NixOS, I'd suggest trying zramSwap.enable = true in any case.
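For reference, the option suggested here can be set in a NixOS configuration like this (a minimal sketch; memoryPercent is shown with its default value just to make the knob visible):

```nix
# configuration.nix sketch: compressed swap in RAM via zram.
{ ... }:
{
  zramSwap = {
    enable = true;
    # Percentage of RAM to dedicate to the zram device (NixOS default: 50).
    memoryPercent = 50;
  };
}
```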
[19:29:39] Linux Hackerman:
> magic_rb: i would still like to somehow make hydra not eval everything at once, but i would have to know a lot more c++ to do that
Or make separate jobsets :p
[19:29:51] magic_rb:
For flakes? How would that work?
[19:30:05] magic_rb:
> Linux Hackerman: If you're on nixos, I'd suggest trying zramSwap.enable = true in any case
Yep, I'll do that in any case; seems like a reasonable idea.
[19:30:06] Linux Hackerman:
Oh

[19:30:12] Linux Hackerman:
With different subpaths, I guess.
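A Hydra flake jobset evaluates the hydraJobs output of the flake its URI points at, so one way to split evaluation up is a sub-flake per subdirectory, referenced with the ?dir= query parameter. A sketch, with a hypothetical repository layout and URL:

```nix
# hosts/flake.nix sketch: a jobset pointing at
# "git+https://example.org/repo?dir=hosts" evaluates only this subset.
{
  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-22.11";
  outputs = { self, nixpkgs }: {
    hydraJobs.hello = nixpkgs.legacyPackages.x86_64-linux.hello;
  };
}
```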
[19:30:56] magic_rb:
Well, when I push a change Hydra will start evaluating everything at once anyway; it has no notion of "hey, maybe evaluations can choke the system too".
[19:31:33] Linux Hackerman:
AFAIK the evaluator will only evaluate one jobset at a time.
[19:31:45] magic_rb:
Ah, OK, so that may be a path forward.
[19:59:48] Linux Hackerman:
> magic_rb: ah ok so that may be a path forward
Update: I was wrong; by default it does up to 4 evals at a time. But you can configure that using the max_concurrent_evals option.
[21:00:08] @janne.hess:helsinki-systems.de:
There are magic settings that may or may not fix that.
[21:00:22] @janne.hess:helsinki-systems.de:
    # evaluator_initial_heap_size is basically equivalent to GC_INITIAL_HEAP_SIZE; defaults to 384 MiB
    # evaluator_max_memory_size is… "If our RSS exceeds the maximum, exit. The master will start a new process." Defaults to 4096 (I assume MiB)
    # evaluator_workers is how many parallel evaluator workers are used; defaults to 1
    # max_concurrent_evals defines how many evaluations are run concurrently; defaults to 4
    extraConfig = ''
      email_notification = 1
      evaluator_max_memory_size = 4096
      evaluator_initial_heap_size = ${toString (2 * 1024 * 1024 * 1024)}
      max_output_size = ${toString (16 * 1024 * 1024 * 1024)}
      max_concurrent_evals = 3
      evaluator_workers = 4
    '';
[21:00:26] @janne.hess:helsinki-systems.de:
(All of these are guesses.)
1 Dec 2022
[00:29:14] jackdk:
I am setting up a Hydra instance, and I would like the jobs that produce deployment artifacts not to trigger deploys unless all checks have passed. It is easy enough to add all the checks to the buildInputs of the artifact jobs, but that causes a lot of spurious rebuilding. Is there a better way to achieve what I'm trying to do?
[00:40:14] @cole-h:matrix.org:
You're looking for an "aggregate" job. There are some examples in Nixpkgs, I believe (look for `release.nix` or similarly-named files in `nixos/`). Also present in the Nix (the tool) flake, and probably the Hydra flake/repo as well.
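One common shape for such an aggregate is releaseTools.aggregate from Nixpkgs, which marks a job as an aggregate of its constituents; a deploy step can then gate on that single job. A sketch with hypothetical job names:

```nix
# release.nix sketch: "tested" succeeds only if every constituent does,
# so deployment can depend on it without adding checks to buildInputs.
{ pkgs ? import <nixpkgs> { } }:
rec {
  artifact = pkgs.hello;                      # hypothetical artifact job
  checks = pkgs.runCommand "checks" { } ''
    echo "checks would run here"; touch $out  # hypothetical check job
  '';
  tested = pkgs.releaseTools.aggregate {
    name = "tested";
    constituents = [ artifact checks ];
  };
}
```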
[10:18:55] Sandro 🐧:
Is it normal that my Hydra queries narinfos from itself, or did I mess something up?
[10:20:23] @janne.hess:helsinki-systems.de:
It will use any substituter, including itself if it's configured as a substituter.
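If that self-querying is unwanted, the substituter list on the Hydra host can be pinned so it no longer includes its own cache (a sketch; the list shown contains only the public binary cache):

```nix
# configuration.nix sketch: only query the public binary cache,
# so the machine's own cache is never consulted as a substituter.
{ ... }:
{
  nix.settings.substituters = [ "https://cache.nixos.org" ];
}
```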
