
NixOS RISC-V

94 Members · 35 Servers
NixOS on RISC-V: https://wiki.nixos.org/wiki/RISC-V https://pad.lassul.us/NixOS-riscv64-linux



4 Apr 2024
@fgaz:matrix.orgfgaz hexa: as far as I know our main limitation is rack space (ping Mic92). the upcoming milk-v oasis seems like a more efficient use of that space 09:31:27
@hexa:lossy.networkhexahuh? rack space where?09:31:43
@hexa:lossy.networkhexaTUM?09:32:01
@thefossguy:matrix.orgPratham Patel (you can mention me)
In reply to @fgaz:matrix.org
hexa: as far as I know our main limitation is rack space (ping Mic92). the upcoming milk-v oasis seems like a more efficient use of that space

The third RISC-V SoM from Sipeed is going to have the same SG2380 SoC as the Oasis. Not sure about the maximum memory they will ship. So for CPU-bound compiles, this SoM should be better if price and space are an issue.

https://twitter.com/sipeedio/status/1774644666375524659

09:34:19
@fgaz:matrix.orgfgaz
In reply to @hexa:lossy.network
huh? rack space where?
In the nix community rack. Last time we discussed this I think that was the candidate for hosting a builder. I don't know more than that
09:35:32
@julienmalka:matrix.orgJulien And the pioneer is a no go ? 09:36:50
@thefossguy:matrix.orgPratham Patel (you can mention me)
In reply to @julienmalka:matrix.org
And the pioneer is a no go ?
Yes.
09:37:02
@thefossguy:matrix.orgPratham Patel (you can mention me) All machines with the C910 core are only good for test builds and cannot be "trusted" (not in the context of a backdoor, but because the core is not compliant with the spec). 09:38:00
@thefossguy:matrix.orgPratham Patel (you can mention me)The "v2" of the C920 is supposed to be more compliant with the spec, but no one has had hands-on with it yet.09:38:48
@shalokshalom:kde.org@shalokshalom:kde.org joined the room.10:05:05
@shalokshalom:kde.org@shalokshalom:kde.orgHi there. I heard about the attempts to bootstrap GHC on NixOS RISC-V. I guess you tried the LLVM backend? Sorry if that's a bit naive; I guess there is a very good reason why this wouldn't work. The GitHub ticket around this issue also mentions that this is possible. Is the LLVM backend not capable of compiling GHC?10:32:42
@shalokshalom:kde.org@shalokshalom:kde.orgI also see GraalVM as a potential tool to bootstrap Haskell on RISC-V, although I haven't tried that yet. They provide both JIT and compiled (they call it native image) ways to run on RISC-V, and Haskell supposedly runs on it with the Sulong implementation. Just wanted to drop it in case someone didn't know about that yet (sorry if obviously not helpful, as said). 10:37:16
@skeuchel:matrix.orgSteven KeuchelYou can always build GHC with an unregisterised backend (via C), and use that to bootstrap. But that is painfully slow. There is no NCG backend and no runtime linker yet, but that's in progress. The LLVM backend "works" as of 9.6 (or 9.4 with a newer LLVM like in Debian).10:39:52
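
To make the LLVM-backend route concrete, here is a minimal Nix sketch, assuming a riscv64-linux nixpkgs evaluation and assuming the GHC 9.6 derivation still exposes a useLLVM override argument; both are assumptions, not verified against any particular nixpkgs revision.

    { pkgs ? import <nixpkgs> { system = "riscv64-linux"; } }:

    # Request GHC 9.6 built with the LLVM code generator. `useLLVM` is an
    # assumed override argument of the nixpkgs GHC derivation; it may be
    # named differently, or already be the default, on a given revision.
    pkgs.haskell.compiler.ghc96.override { useLLVM = true; }

Evaluating this natively on a riscv64 board (or under emulation, see the sketch further down) would attempt the registerised, LLVM-backed build discussed above.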
@eyjhb:eyjhb.dkeyJhb joined the room.11:21:01
@alex:tunstall.xyzAlex
In reply to @skeuchel:matrix.org
You can always build GHC with an unregisterised backend (via C), and use that to bootstrap. But that is painfully slow. There is no NCG backend and no runtime linker yet, but that's in progress. The LLVM backend "works" as of 9.6 (or 9.4 with a newer LLVM like in Debian).

In my testing using an unregisterised boot GHC, it usually takes around 20 hours to natively build GHC on the JH7110 SoC. Longer if other builds are running in parallel (I've had one GHC build take ~35 hours).

I can't comment on how much faster registerised via LLVM is because my registerised builds keep segfaulting...

12:19:51
@skeuchel:matrix.orgSteven Keuchel

Here are my estimates

On the pioneer:
Unregisterised release+profiled_libs: >30h
Unregisterised quick+no_profiled_libs: 18h
Registerised release+profiled_libs: 12h
Registerised quick+no_profiled_libs: 9h

Using qemu user-mode
Registerised release+profiled_libs: 8h
Registerised quick+no_profiled_libs: 6h

12:24:11
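
The "qemu user-mode" rows above correspond to building riscv64-linux derivations on an x86_64 NixOS host via binfmt emulation. A minimal host-configuration sketch, assuming recent NixOS module options and otherwise untested here:

    {
      # Register a qemu-user binfmt handler so riscv64-linux binaries run
      # transparently on this x86_64 host.
      boot.binfmt.emulatedSystems = [ "riscv64-linux" ];

      # Let the Nix daemon accept riscv64-linux builds on this machine.
      nix.settings.extra-platforms = [ "riscv64-linux" ];
    }

With that in place, nix-build can produce riscv64-linux outputs directly on the x86_64 machine, at the emulated speeds quoted in the estimates.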
@alex:tunstall.xyzAlex

GHC is quite tricky to compile, so I'd be pleasantly surprised if Sulong were capable of handling it.

Historically, using Hugs to run GHC on itself has been an option, but AFAIK Hugs doesn't support 64-bit ISAs and it also has a relatively low limit on program size that makes bootstrapping GHC even on x86 a nightmare. I don't know what it would take to support RV64GC and I haven't explored patching Hugs to raise the program size limitations.

12:24:58
@alex:tunstall.xyzAlexAlso Hugs requires an ancient version of GCC.12:25:47
@alex:tunstall.xyzAlex

Looking into Sulong, apparently it's not a Haskell compiler/interpreter but an LLVM bitcode interpreter?

That doesn't seem suitable for compiling GHC (Haskell code) from source.
LLVM bitcode isn't the problem here.

12:29:37
@shalokshalom:kde.org@shalokshalom:kde.org Graal and Sulong are able to produce a native image of Haskell code 12:41:42
@shalokshalom:kde.org@shalokshalom:kde.org Graal provides two runtimes: JVM and Truffle. Sulong is the LLVM implementation on Truffle 12:42:09
@shalokshalom:kde.org@shalokshalom:kde.org Hugs is even older than Eta, so I doubt very much it can compile any modern Haskell code at all? 12:42:35
@thefossguy:matrix.orgPratham Patel (you can mention me)
In reply to @skeuchel:matrix.org
Here are my estimates

On the pioneer:
Unregisterised release+profiled_libs: >30h
Unregisterised quick+no_profiled_libs: 18h
Registerised release+profiled_libs: 12h
Registerised quick+no_profiled_libs: 9h

Using qemu user-mode
Registerised release+profiled_libs: 8h
Registerised quick+no_profiled_libs: 6h

Yeah, the multi-core interconnects are only there to connect the cores, not much more; i.e. not how 64 cores are interconnected on Threadrippers/EPYCs.

So here, qemu emulation on x86 will be faster tbh

12:43:10
@skeuchel:matrix.orgSteven Keuchel
In reply to @thefossguy:matrix.org

Yeah, the multi-core interconnects are only there to connect the cores, not much more; i.e. not how 64 cores are interconnected on Threadrippers/EPYCs.

So here, qemu emulation on x86 will be faster tbh

Most of the stuff I compile is quicker on the pioneer than under user-mode emulation, so there's still something GHC-specific to it. Compiling w/o libnuma? Larger caches on x86? More "symbolic computation" compared to gcc?
12:57:44
@thefossguy:matrix.orgPratham Patel (you can mention me)There's obviously a lot of moving parts to this :)12:58:43
@thefossguy:matrix.orgPratham Patel (you can mention me)What I meant to say was, you're not actually using all 64 cores on the Pioneer "efficiently" because the interconnects aren't that good. It's a first-gen product; impressive that they could even pull it off at all.12:59:42
@alex:tunstall.xyzAlex
In reply to @shalokshalom:kde.org
Hugs is even older than Eta, so I doubt very much it can compile any modern Haskell code at all?
It doesn't need to. It only needs to be able to interpret an old version of GHC, then the build can work its way up to a modern GHC.
13:43:26
@shalokshalom:kde.org@shalokshalom:kde.org Yeah, true. 14:03:45
@shalokshalom:kde.org@shalokshalom:kde.org Well then, Eta might be a choice. It has a native Haskell compiler for GHC 7 and even some features of 8, probably better than Hugs 🤷 14:05:07


