| 10 Apr 2025 |
Alyssa Ross | It's the sort of thing that somebody could probably run on a personal Hydra if they wanted, I reckon? We don't need the results particularly fast, given the bottleneck is actually going to be manually fixing things one at a time. | 19:13:44 |
emily | I just meant in general, put a bunch of upcoming breaking stuff behind a flag and collectively chip away at it gradually | 19:14:35 |
emily | agreed that it's not necessary for this, it'd just be a nice thing to have | 19:14:53 |
Alyssa Ross | We have pkgsExtraHardening also | 19:15:29 |
Alyssa Ross | Or whatever it's called | 19:15:36 |
Tristan Ross | In reply to @emilazy:matrix.org ("I wish we had the resources to have an "experimental" jobset always running that we could throw stuff like this on"): Yeah, I've been thinking of that | 19:20:43 |
Tristan Ross | I likely will be able to deploy my own Hydra locally | 19:22:27 |
Tristan Ross | I just got 2 128 core Ampere chips | 19:22:49 |
Alyssa Ross | Read that as 2128 cores :D | 19:23:02 |
Tristan Ross | Lol | 19:25:02 |
trofi | how many x86 cores is that? :) | 21:13:57 |
Tristan Ross | 0 | 21:16:49 |
Tristan Ross | But 128 cores of pure ARM power | 21:17:00 |
trofi | yeah, it was a silly joke about performance equivalence on typical workloads | 21:18:03 |
Tristan Ross | Oh | 21:21:16 |
| 11 Apr 2025 |
Tristan Ross | In reply to Randy Eckenrode in #staging:nixos.org (https://matrix.to/#/!UNVBThoJtlIiVwiDjU:nixos.org/$GOAB-0pj06IwTEidSgN3biXYZn_r9yZTViHoL6Y2Yns?via=nixos.org&via=matrix.org&via=tchncs.de) ("You can’t compile the stage 1 LLVM with GCC then rebuild it again in a later stage with the stage 1 LLVM? (Are there complications with switching C++ ABIs?)"):
This is where my idea of splitting out the bootstrap tarballs would be useful. We could compile stage 1 LLVM from GCC, but I feel like it may be a better solution to use a prebuilt LLVM to build LLVM. Plus it'll likely be easier on the eval, since we wouldn't need odd exceptions or overrides.
| 16:37:10 |
Tristan Ross | There would be the base tarball with your general tools, then your compiler tarball. | 16:37:43 |
Tristan Ross | The base should be enough to set up a CC wrapper that points things to the right compiler. | 16:38:09 |
emily | the LLVM prebuilt would itself be built from GCC though, right? | 18:06:03 |
emily | I don't think it makes sense to have a proliferation of bootstrap tools especially when they shouldn't affect the end result | 18:06:15 |
| 15 Apr 2025 |
Tristan Ross | I'm guessing we could consider https://github.com/NixOS/nixpkgs/issues/307600 solved once we add documentation on how to add new libcs? The way to do it appears to be simple enough that I don't see how we could simplify it further. | 05:57:58 |
aleksana 🏳️⚧️ (force me to bed after 18:00 UTC) | I want to provide a bootstrap tarball for loongarch64-unknown-linux-gnu, do I have to provide anything to prove the tarball is clean? | 07:45:37 |
K900 | Honestly, I don't think we should ever allow that | 07:46:38 |
K900 | The bootstrap tools should be built on clean infra | 07:46:50 |
K900 | We can cross-compile them though | 07:46:54 |
Alyssa Ross | That's the usual process, yeah | 07:47:01 |
aleksana 🏳️‍⚧️ (force me to bed after 18:00 UTC) | So I can request that the infra team generate a tarball for me? | 07:47:50 |
Alyssa Ross | Not quite | 07:47:59 |
Alyssa Ross | You add a Hydra job to build it | 07:48:06 |
Alyssa Ross | If there isn't already one | 07:48:11 |