Hydra
| Message | Time |
|---|---|
| 10 Apr 2022 | |
| hmm, maybe not actually an impurity, as the same -A foo will always lead to the same result, it's just that hydra will pass different -A values... | 14:17:08 |
| you make a good point, depending on actual branch names is annoyingly brittle. passing in a tag (or attribute name) would work regardless of git branch and also support evaluation if the input isn't being fetched from git | 14:18:43 |
| that's just because you only intend to turn some jobs on/off, but you COULD make that substantially change behavior and users would have no idea. And the behavior of the same commit will change over time. | 14:18:57 |
| i think the .version or .branch file is the easiest way to smuggle the branch name in | 14:19:35 |
| you can make your push-to-deployment only depend on the job that matters, not the entire jobset | 14:25:24 |
| okay, you've convinced me that source control information should never reach the build, so instead I'd need to communicate a tag/mode/variant to the jobset | 14:26:11 |
| adding that file to the repo would work, of course, but then you lose the ability of simply pushing the staging branch to the production branch whenever it's ready | 14:27:59 |
| who determines whether #1143 has a chance of being merged? | 14:28:28 |
| (I'm happy to run a patched hydra, but if the answer is "this will never be accepted", I'll need to keep looking for a different way to make this work) | 14:29:11 |
| wait..... if you have a flow where you push from staging branch to production.... the builds should already be cached | 14:33:08 |
| if you built on staging..... cache it.... then production builds are instant | 14:33:27 |
| true, and in a way it makes sense to test if production builds before allowing it for staging, but staging pushes are going to be much more frequent than production pushes, so that's still a lot of wasted time, waiting for production builds to finish, just to deploy staging | 14:39:30 |
| depend on the job to complete, not the jobset | 14:41:49 |
| does hydra allow job prioritisation? if I could make sure the staging variant always builds first, and then rely on the production build being cached from a previous evaluation, that might work | 14:43:44 |
| ahh, there's schedulingPriority | 14:45:17 |
| what might be accepted upstream is to specify only particular jobs to be built. Something like "in flake.nix, build hydraJobs.production" | 14:48:52 |
| (versus "build every hydraJob in the flake") | 14:54:50 |
| that's a slightly restricted version of https://github.com/NixOS/hydra/pull/1143, so maybe that'll be accepted as well. and graham's reaction in #1154 didn't seem opposed to the feature | 14:56:13 |
| Yup | 14:57:00 |
| As we get closer to a 3.0 release with stable flakes, hydra support will improve. | 14:57:31 |
| E.g.: better input tracking exposed, lockless eval, attribute builds, and so forth. | 14:58:17 |
| Lockless eval (with configurability over which inputs to unlock) sounds nice, though very at odds with the current extremely hermetic evaluation. Thank you very much for helping me narrow down my options, tomberek, and thank you cransom for describing how you do automatic deployments. I appreciate talking about this issue with someone else after guessing about it for a few days :) | 15:00:41 |
| another possibility.... what is the thing that watches the jobsets and triggers a deployment to prod? | 15:02:57 | |
| (in reply to @tomberek:matrix.org) for now, it's just a small systemd timer that polls the most recent job output path with curl, but if this setup proves itself, I'm up for switching that to hydra's (dynamic) RunCommand plugin, or even listening for job completions with postgres listen/notify | 15:08:43 |
| i'd like to do some things like automatically following nixos-21.11/release in my deploy jobs, except we (our app code, not nixpkgs) don't have adequate testing to make that an automatic feature. unattended production deployments without a good test gate aren't something i'll do. | 16:30:37 |
| the app code between prod/staging doesn't change, but that's a different beast when you get to doing deployments. there's nothing i can cache between staging/production because i deploy fully baked disk images into aws. if it were a generic disk image with different variables pulled in from aws metadata/user-data calls, that would be possible. | 16:32:39 |
| yes, I wouldn't be brave enough to do completely unattended deployments with an unlocked nixpkgs input. at least with application inputs, someone pushed the commit to the application repo that triggered the deployment, but changes to nixpkgs happen all day, even at night, and there'd be no way to react to a broken deployment :c | 16:40:54 |
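
For reference, a minimal sketch of the `.version`/`.branch` file trick discussed at 14:19: the repo carries a small tracked file whose contents select the variant, so the evaluation never needs to know which git branch it was fetched from. Attribute and package names here are hypothetical, and (as noted at 14:27) this costs you the ability to fast-forward staging into production without an extra commit.

```nix
# flake.nix (fragment; productionImage/stagingImage are hypothetical packages)
{
  outputs = { self, nixpkgs }:
    let
      # lib.fileContents reads the file and strips the trailing newline
      variant = nixpkgs.lib.fileContents "${self}/.branch";
    in {
      hydraJobs.deploy =
        if variant == "production"
        then self.packages.x86_64-linux.productionImage
        else self.packages.x86_64-linux.stagingImage;
    };
}
```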
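
The `schedulingPriority` found at 14:45 is, as far as I can tell, read by Hydra's queue runner from a job's `meta` attribute (higher values are picked up sooner; exact semantics may vary by Hydra version). A sketch using nixpkgs' `lib.addMetaAttrs`, with hypothetical derivation names:

```nix
# hydraJobs fragment; stagingImage / productionImage are hypothetical
{
  hydraJobs = {
    # build staging eagerly, let production trail behind and hit the cache
    staging    = lib.addMetaAttrs { schedulingPriority = 200; } stagingImage;
    production = lib.addMetaAttrs { schedulingPriority = 10;  } productionImage;
  };
}
```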
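
The poll-and-deploy timer described at 15:08 could look roughly like this as a NixOS module. The hostname, project/jobset/job names, and deploy step are placeholders; it assumes Hydra's `/job/<project>/<jobset>/<job>/latest` endpoint, which returns the latest finished build as JSON when asked via `Accept: application/json` (field names may differ between Hydra versions).

```nix
# NixOS module fragment -- all names hypothetical
{ pkgs, ... }: {
  systemd.services.poll-deploy = {
    script = ''
      # ask Hydra for the latest finished build's output path
      out=$(${pkgs.curl}/bin/curl -sfH 'Accept: application/json' \
        https://hydra.example.org/job/myproject/production/deploy/latest \
        | ${pkgs.jq}/bin/jq -r '.buildoutputs.out.path')
      # deploy only when the output path changed since the last run
      if [ "$out" != "$(cat /var/lib/poll-deploy/last 2>/dev/null)" ]; then
        echo "new output $out"        # placeholder for the real deploy step
        mkdir -p /var/lib/poll-deploy
        echo "$out" > /var/lib/poll-deploy/last
      fi
    '';
    serviceConfig.Type = "oneshot";
  };
  systemd.timers.poll-deploy = {
    wantedBy = [ "timers.target" ];
    timerConfig.OnBootSec = "5min";
    timerConfig.OnUnitActiveSec = "5min";
  };
}
```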