NixOS AWS (65 Members, 17 Servers)
| Message | Time |
|---|---|
| 6 May 2025 | |
| * We use Nix + system-manager to bake reproducible Amazon Linux 2023 AMIs. There's a shell script snippet in this GitHub issue: https://github.com/aws/ec2-image-builder-roadmap/issues/110<br>CloudFormation templates are generated with the AWS CDK. The infrastructure code essentially:<br>Once the CloudFormation EC2 disk image import situation is improved, we'll move the non-bootstrap stuff to use NixOS disk images created with the systemd-repart helpers. This ends up being fully reproducible because Amazon Linux 2023 locks the Amazon Linux package repository version (these are globally versioned now), so any Nix dependencies like | 18:55:50 |
| We recently migrated to NixOS for our application servers, and have a couple more instance profiles to migrate before we're fully on NixOS. We're deploying Elixir applications with an in-house deployment tool, leveraging S3 for deployment coordination and as a binary cache. We build Nix paths for NixOS and the applications separately, push them to the cache, and write those paths to a bucket along with other metadata, including the git sha. Then during deploy we pull and activate those paths (no eval). While we do build our own AMI, we also use this same path-based deployment through amazon-init to switch to the correct profile on boot. | 18:56:00 |
| I work for a company named CalmWave. We're a US-based healthcare startup. | 18:56:36 | |
| 15 May 2025 | |
| * NVIDIA (though I'm on the DGX Cloud side, not the GPU side; there are some internal Nix users pushing for better NixOS NVIDIA driver + CUDA support) | 07:33:09 |
| 18 May 2025 | |
| urgh: https://github.com/aws/amazon-ec2-metadata-mock/issues/234 | 12:14:29 | |
| * Turnaround on most of the AWS open source stuff outside of the AWS CDK or CLI + SDKs (which have dedicated support engineers) is pretty terrible. | 17:26:16 | |
| I'm quite confused about them keeping the GitHub Actions totally community-supported | 17:33:42 |
| Regardless, the various waves of layoffs and the reshuffling of people to work on "AI" stuff have put a lot more strain on things | 19:36:30 |
| * I don't know if the IMDS team themselves own the EC2 metadata mock. If they do, they're probably dealing with a lot of internal IMDS development work (it's an eternal cycle of new EC2 instance type bring-ups and firefighting for most EC2 Nitro teams). If not, it's probably something more owned by solutions architects (e.g. most stuff under the | 19:38:57 |
| * Most AWS teams are fairly small (maybe 5-10 people) and there's generally no distinction between engineering, QA, and devops/SRE. It's just "software development engineers" (SDEs) who handle all three functions, which they can get away with because the internal tooling is, IMO, very well done and generally pushes for better designs, since you have to consider both the operational burden and the testing aspects up front. Each organization is basically a big ensemble of small teams (e.g. control plane team, data plane team, frontend team for AWS console stuff) as the leaves, which roll up into a reporting tree. That means most teams are actually pretty lean and don't have much extra capacity for other stuff. More important areas like the AWS CLI + SDKs or the AWS CDK will have additional dedicated resources beyond SDEs to handle a lot of customer interactions (e.g. support engineers, solutions architects). As a result, they rely heavily on community contributions, with AWS SDEs mostly acting as reviewers. | 19:40:01 |
| AWS also doesn't really use these kinds of local mocks much internally. Integration + E2E test suites against real infrastructure are used instead. | 19:46:54 |
| 19 May 2025 | |
| AKA I should come up with a fix myself :D | 08:57:12 | |
| * basically. As usual in open source, if you want something done you have to do it yourself | 17:07:47 | |
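
The 6 May message about Nix + system-manager AMIs mentions that the CloudFormation templates are generated with the AWS CDK but truncates before describing the infrastructure code. The sketch below is only an illustration of what such a CDK stack could look like, not the author's actual code: the AMI ID, account/region, user-data script name, and bucket are all hypothetical placeholders.

```python
# Minimal sketch (assumed names throughout): a CDK stack that launches an EC2
# instance from a pre-baked Nix/system-manager AMI and uses user data to point
# the instance at a pre-built Nix profile to activate on boot.
from aws_cdk import App, Environment, Stack
from aws_cdk import aws_ec2 as ec2
from constructs import Construct


class NixAmiStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # Context lookup of an existing VPC (requires a concrete env below).
        vpc = ec2.Vpc.from_lookup(self, "Vpc", is_default=True)

        # AMI baked with Nix + system-manager (placeholder ID).
        machine_image = ec2.MachineImage.generic_linux(
            {"us-east-1": "ami-0123456789abcdef0"}
        )

        # Hypothetical boot-time hook that activates a pre-built profile.
        user_data = ec2.UserData.custom(
            "#!/usr/bin/env bash\n"
            "/usr/local/bin/activate-nix-profile --from s3://example-deploy-bucket/current\n"
        )

        ec2.Instance(
            self,
            "AppServer",
            vpc=vpc,
            instance_type=ec2.InstanceType("t3.medium"),
            machine_image=machine_image,
            user_data=user_data,
        )


app = App()
NixAmiStack(
    app,
    "NixAmiStack",
    env=Environment(account="123456789012", region="us-east-1"),
)
app.synth()
```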
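
The path-based deployment described in the 18:56 message (build store paths in CI, push them to an S3 binary cache, record the paths plus metadata such as the git sha in a bucket, then pull and activate those paths on the instance with no eval) could look roughly like the following. This is a hedged sketch rather than the in-house tool: the bucket names, key layout, and manifest schema are assumptions, and the activation step shown is the plain NixOS `nix-env --set` + `switch-to-configuration` sequence, whereas the real tool also handles the Elixir applications and the amazon-init boot path.

```python
# Hedged sketch of an S3-coordinated, eval-free deployment flow.
# Bucket names, keys, and the manifest layout are assumptions.
import json
import subprocess

import boto3

CACHE_URI = "s3://example-nix-cache?region=us-east-1"   # binary cache (assumed)
DEPLOY_BUCKET = "example-deploy-coordination"           # metadata bucket (assumed)
s3 = boto3.client("s3")


def publish(system_path: str, app_path: str, git_sha: str) -> None:
    """CI side: copy pre-built store paths to the cache and record them."""
    subprocess.run(["nix", "copy", "--to", CACHE_URI, system_path, app_path], check=True)
    manifest = {"system": system_path, "app": app_path, "git_sha": git_sha}
    s3.put_object(
        Bucket=DEPLOY_BUCKET,
        Key="deploys/current.json",
        Body=json.dumps(manifest).encode(),
    )


def activate() -> None:
    """Instance side: pull the recorded paths and activate them, no eval."""
    manifest = json.loads(
        s3.get_object(Bucket=DEPLOY_BUCKET, Key="deploys/current.json")["Body"].read()
    )
    # Substitute the closures from the cache instead of building or evaluating.
    subprocess.run(
        ["nix", "copy", "--from", CACHE_URI, manifest["system"], manifest["app"]],
        check=True,
    )
    # Point the system profile at the new closure and switch to it.
    subprocess.run(
        ["nix-env", "--profile", "/nix/var/nix/profiles/system", "--set", manifest["system"]],
        check=True,
    )
    subprocess.run(
        [manifest["system"] + "/bin/switch-to-configuration", "switch"],
        check=True,
    )
```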