| 19 Feb 2025 |
Arian | that timeout is set by AWS. usually AWS timeouts are pretty aligned with the normal behaviour of services | 22:43:37 |
Arian | might be AWS is having a bad time | 22:43:38 |
adamcstephens | the imports are completing, but only after the waiter times out :/ | 22:50:10 |
| 20 Feb 2025 |
| @hexa:lossy.network left the room. | 02:39:47 |
adamcstephens | It took me longer to upload due to my internet connection, but once the snapshot is uploaded, then registering it as an AMI is almost instant. | 13:51:42 |
adamcstephens | This route also seems to be more space efficient in Amazon's reading of the size. A snapshot using the current import-snapshot shows as the full size of the disk image used during build, but a coldsnap uploaded image shows the actual disk usage of the image. | 13:53:22 |
adamcstephens | hmm, but that snapshot doesn't seem to work for launching instances. likely i did something wrong :/ | 15:01:49 |
adamcstephens | I'm extending the timeout for our use, since a snapshot is still imported even if the waiter times out. But re-running upload-ami will initiate another import-snapshot, ignoring the already imported one | 15:25:30 |
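Since the import keeps running after the built-in waiter gives up, one workaround is to poll the import task directly instead of relying on the waiter. A rough sketch with the AWS CLI (the task ID is a placeholder, and the interval and error handling are illustrative, not the actual upload-ami change):

```shell
# Poll the snapshot import task ourselves; the import continues
# server-side even after the SDK waiter times out.
TASK_ID="import-snap-0123456789abcdef0"  # placeholder: returned by import-snapshot

while true; do
  STATUS=$(aws ec2 describe-import-snapshot-tasks \
    --import-task-ids "$TASK_ID" \
    --query 'ImportSnapshotTasks[0].SnapshotTaskDetail.Status' \
    --output text)
  echo "import status: $STATUS"
  [ "$STATUS" = "completed" ] && break
  [ "$STATUS" = "deleted" ] && exit 1
  sleep 30
done
```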
| 22 Feb 2025 |
Arian | Oh you need to coldsnap over a raw image | 11:01:35 |
Arian | You're probably coldsnapping the sparse VHD lol | 11:01:49 |
Arian | Which will not work | 11:01:52 |
Arian | That's why it shows up as smaller. You can't upload the VHD format. You need to upload the raw block device's bytes. | 11:02:12 |
Arian | So build the image with format = raw | 11:02:38 |
Arian | And then upload | 11:02:40 |
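To sketch the flow being described: build the image with `format = raw` so you have the raw block device bytes, upload those with coldsnap (awslabs' EBS direct-upload tool), then register the resulting snapshot as an AMI. File names and the snapshot ID below are placeholders:

```shell
# Upload the raw image (not the sparse VHD) as an EBS snapshot.
# coldsnap prints the new snapshot ID on success.
coldsnap upload nixos-image.raw

# Register the snapshot as an AMI (snapshot ID is a placeholder).
aws ec2 register-image \
  --name "nixos-custom" \
  --architecture x86_64 \
  --virtualization-type hvm \
  --ena-support \
  --root-device-name /dev/xvda \
  --block-device-mappings 'DeviceName=/dev/xvda,Ebs={SnapshotId=snap-0123456789abcdef0}'
```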
adamcstephens | That makes complete sense. Thanks | 12:59:51 |
| 24 Feb 2025 |
commiterate | Arian did you get around to trying the fluent-bit PR? | 17:30:44 |
Arian | It's been on my backlog. Work was really packed the past few weeks. But I'll get to it as we need the changes internally | 19:12:57 |
| 27 Feb 2025 |
| drewhaven joined the room. | 20:06:37 |
drewhaven | Does SSM work well with on-prem NixOS installs? I'm looking to set up a bunch of headless NUCs that are deployed to a bunch of different locations. I want the predictability of Nix configs and flakes, but I'm not sure how the remote management will work. SSM seems to imply that it wants a mutable system to manage, but I guess that's just handled with some scripts that use Nix tools for changes, upgrades and rollbacks? | 20:21:51 |
Arian | I have never tried it but I see no reason why it wouldn’t work | 20:40:10 |
Arian | it might need some changes to the nixos module to support the on-prem ssm join token stuff | 20:40:38 |
Arian | We (mercury.com) are about to open source some terraform modules that we use for deploying NixOS using SSM | 20:41:06 |
Arian | we basically have an SSM Document that does a nixos-rebuild switch | 20:41:23 |
Arian | i can probably get that published tomorrow | 20:42:25 |
| pykee03 joined the room. | 21:24:49 |
drewhaven | This'll be a new type of deployment for me. I'm used to k8s clusters where it's easy to just start new stuff. Been decades since I had to manage an actual system. :D | 23:22:44 |