| 10 Apr 2025 |
Tristan Ross | Yeah, that may not be a bad idea | 19:08:36 |
Alyssa Ross | Great that you're involved in this so proactively | 19:08:59 |
Tristan Ross | As a quick and dirty way to get started, I just did this:
diff --git a/pkgs/build-support/cc-wrapper/default.nix b/pkgs/build-support/cc-wrapper/default.nix
index 692474d48c42..262594e6dd15 100644
--- a/pkgs/build-support/cc-wrapper/default.nix
+++ b/pkgs/build-support/cc-wrapper/default.nix
@@ -845,6 +845,10 @@ stdenvNoCC.mkDerivation {
substituteAll ${./add-clang-cc-cflags-before.sh} $out/nix-support/add-local-cc-cflags-before.sh
''
+ + ''
+ echo "-Werror=write-strings" >> $out/nix-support/cc-cflags
+ ''
+
##
## Extra custom steps
##
| 19:08:59 |
Alyssa Ross | Yeah I'd just throw that up on a branch and get Hydra on it for now | 19:09:30 |
Alyssa Ross | I'm not sure whether we'd want to enable this for real before compilers start doing so | 19:11:46 |
Alyssa Ross | But getting a head start is good | 19:11:51 |
emily | agreed that preparing for this stuff ahead of time is great :) | 19:12:06 |
emily | I wish we had the resources to have an "experimental" jobset always running that we could throw stuff like this on | 19:12:27 |
emily | rather than dealing with things just-in-time during staging cycles | 19:12:35 |
Alyssa Ross | It's the sort of thing that somebody could probably run on a personal Hydra if they wanted I reckon? We don't need the results particularly fast given the bottleneck is actually going to be manually fixing things one at a time. | 19:13:44 |
emily | I just meant in general, put a bunch of upcoming breaking stuff behind a flag and collectively chip away at it gradually | 19:14:35 |
emily | agreed that it's not necessary for this, it'd just be a nice thing to have | 19:14:53 |
Alyssa Ross | We have pkgsExtraHardening also | 19:15:29 |
Alyssa Ross | Or whatever it's called | 19:15:36 |
Tristan Ross | In reply to @emilazy:matrix.org ("I wish we had the resources to have an 'experimental' jobset always running that we could throw stuff like this on"): Yeah, I've been thinking of that | 19:20:43 |
Tristan Ross | I likely will be able to deploy my own Hydra locally | 19:22:27 |
Tristan Ross | I just got 2 128 core Ampere chips | 19:22:49 |
Alyssa Ross | Read that as 2128 cores :D | 19:23:02 |
Tristan Ross | Lol | 19:25:02 |
trofi | how many x86 cores is that? :) | 21:13:57 |
Tristan Ross | 0 | 21:16:49 |
Tristan Ross | But 128 cores of pure ARM power | 21:17:00 |
trofi | yeah, it was a silly joke about performance equivalence on typical workloads | 21:18:03 |
Tristan Ross | Oh | 21:21:16 |
| 11 Apr 2025 |
Tristan Ross | Randy Eckenrode in #staging:nixos.org (https://matrix.to/#/!UNVBThoJtlIiVwiDjU:nixos.org/$GOAB-0pj06IwTEidSgN3biXYZn_r9yZTViHoL6Y2Yns?via=nixos.org&via=matrix.org&via=tchncs.de)
You can’t compile the stage 1 LLVM with GCC then rebuild it again in a later stage with the stage 1 LLVM? (Are there complications with switching C++ ABIs?)
This is where my idea of splitting out the bootstrap tarballs would be useful. We could compile the stage 1 LLVM with GCC, but I feel it may be a better solution to use a prebuilt LLVM to build LLVM. Plus it'll likely be easier on eval, since we likely wouldn't need odd exceptions or overrides.
| 16:37:10 |
Tristan Ross | There would be the base tarball with your general tools, then your compiler tarball. | 16:37:43 |
Tristan Ross | The base should be enough to set up a CC wrapper to point things to the right compiler. | 16:38:09 |
emily | the LLVM prebuilt would itself be built from GCC though, right? | 18:06:03 |
emily | I don't think it makes sense to have a proliferation of bootstrap tools especially when they shouldn't affect the end result | 18:06:15 |
| 15 Apr 2025 |
Tristan Ross | I'm guessing we could consider https://github.com/NixOS/nixpkgs/issues/307600 solved once we add documentation on how to add new libcs? The way to do it appears to be simple enough that I don't see how we could simplify it further. | 05:57:58 |