!PbtOpdWBSRFbEZRLIf:numtide.com

Nix Community Projects

633 Members
Meta discussions related to https://nix-community.org. (For project-specific discussions, use GitHub issues or the project's own Matrix channel.) Need help from an admin? Open an issue on https://github.com/nix-community/infra/issues

29 Aug 2024
[05:44:34] @artur:glasgow.social (moved to @amadaluzia:tchncs.de) joined the room.
30 Aug 2024
[03:53:28] @artur:glasgow.social changed their display name from Artur Manuel (old email was lost, migrating) to (lambda (u) (format nil "~A lost their email!" u)) "Artur Manuel".
1 Sep 2024
[14:14:28] @tumble1999:matrix.org joined the room.
[14:14:39] @tumble1999:matrix.org left the room.
4 Sep 2024
[17:33:15] @pheoxy:matrix.org changed their display name from pheoxy to Pheoxy [AWST/UTC+8].
[18:17:28] @antifuchs:conduit.asf.computer (antifuchs ⚡️) joined the room.
[18:57:28] @antifuchs:asf.computer changed their display name from antifuchs to antifuchs ⚡️.
[19:01:46] @antifuchs:asf.computer changed their display name from antifuchs ⚡️ to antifuchs.
5 Sep 2024
[03:35:24] @necoarc:transfem.dev joined the room.
[03:49:32] @necoarc:transfem.dev removed their display name Neco-Arc.
[03:49:32] @necoarc:transfem.dev removed their profile picture.
[03:49:32] @necoarc:transfem.dev left the room.
[11:56:04] @ss:someonex.net (SomeoneSerge (matrix works sometimes)) joined the room.
[12:58:44] @ss:someonex.net (SomeoneSerge (matrix works sometimes)):

Hi zowoq and Jonas Chevalier! I was wondering what your impression of the cuda jobset has been so far: whether you find it sustainable to build as is, and whether it'd be sustainable to scale up? With the experience you've had so far, do you think nix-community should commit to keeping it alive, and what would it take to call it "stable" and announce it to the public?

About nix-community/infra in general, I understand you, zowoq, are "the official maintainer", and afaiu the infra is funded by the foundation. I wonder what the general idea for nix-community's sustainability is? Does infra require more labour than is currently available? Who's taking over if you need to move on to other things? Are there plans to get off the "cloud needle"?

Thanks
[13:58:58] @antifuchs:asf.computer left the room.
[14:25:15] @antifuchs:asf.computer (antifuchs)
[17:16:48] @antifuchs:asf.computer left the room.
[18:12:33] @aruzeta:matrix.org left the room.
[23:56:28] @zowoq:matrix.org (zowoq):

> I was wondering what your impression of the cuda jobset has been so far: whether you find it sustainable to build as is, and whether it'd be sustainable to scale up?

Yes, it is sustainable. I don't understand what scale up would mean in this context?

> With the experience you've had so far, do you think nix-community should commit to keeping it alive, and what would it take to call it "stable" and announce it to the public?

Yes, I think we can commit to it. It is stable enough that we added it to our docs a couple of days ago: https://nix-community.org/package-sets/

> afaiu the infra is funded by the foundation.

No, we don't get any funding from the foundation. We have an open collective, and some services are sponsored: https://opencollective.com/nix-community, https://nix-community.org/sponsors/

> I wonder what the general idea for nix-community's sustainability is?

I'd say that it basically just depends on the open collective. @zimbatm may have a more nuanced answer for this.

> Does infra require more labour than is currently available?

No, not at the moment.

> Who's taking over if you need to move on to other things?

Nix community has five admins: https://nix-community.org/administrators/

> Are there plans to get off the "cloud needle"?

I'm assuming that this means getting our own hardware and finding somewhere to put it? It has been mentioned once or twice, but nothing beyond that.
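[Editor's note: the package-sets docs linked above describe prebuilt package sets served from the nix-community binary cache. As context for readers, here is a hedged sketch of how a consumer might enable that cache in a flake. The substituter URL and public key below are assumptions copied from commonly published values, not from this discussion; verify them against https://nix-community.org before trusting them.]

```nix
# Sketch: opting a flake into the nix-community binary cache.
# URL and public key are assumed values; verify before use.
{
  nixConfig = {
    extra-substituters = [ "https://nix-community.cachix.org" ];
    extra-trusted-public-keys = [
      "nix-community.cachix.org-1:mB9FSh9qf2dCimDSUo8Zy7bkq5CX+/rkCWyvRCYg3Fs="
    ];
  };

  outputs = _: { };
}
```

With this in place, builds of derivations already present in the cache are substituted (downloaded) instead of compiled locally, which is the whole point of the jobsets discussed here.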
6 Sep 2024
[00:09:33] @ss:someonex.net (SomeoneSerge (matrix works sometimes)):

> I don't understand what scale up would mean in this context?

Building more "configs". For example, the current aarch64 jobset only builds the variant for normal plug-in PCIe GPUs, not for Jetson boards, but we could extend it (Jetson is probably the most common aarch64 use case, in fact...). The jobset also only chooses the "fat" variants, which include code for all available GPU architectures at once. We could spawn variants for individual architectures, which is particularly relevant for embedded systems. I'd say we don't need to enable any of these at the moment, but if/when we discover this is needed, it'd mean multiplying the cost. Similarly, adding the stable branch would add another multiplicative factor: innocuous on its own, but possibly significant when coupled with other toggles.
[02:29:43] @zowoq:matrix.org (zowoq):

> the current aarch64 jobset only builds the variant for normal plug-in PCIe GPUs, not for Jetson boards, but we could extend it (Jetson is probably the most common aarch64 use case, in fact...).

If Jetson is the more common use case, should we switch to building that instead of the current aarch64 jobset?

> The jobset also only chooses the "fat" variants, which include code for all available GPU architectures at once. We could spawn variants for individual architectures, which is particularly relevant for embedded systems.

If we built the individual architectures, would we still need to build the "fat" variant? Is building an individual architecture quicker than the "fat" variant?
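[Editor's note: the "fat" vs per-architecture trade-off discussed above corresponds to nixpkgs configuration knobs. A hedged sketch follows, assuming the nixpkgs option names `cudaSupport` and `cudaCapabilities`; the capability value "8.7" is an illustrative pick (Jetson Orin class), not something specified in the discussion.]

```nix
# Sketch: selecting a per-architecture CUDA variant in nixpkgs.
# Option names assumed; check the nixpkgs manual's CUDA section.
import <nixpkgs> {
  config = {
    allowUnfree = true;
    cudaSupport = true;
    # Leaving cudaCapabilities unset yields the default "fat" build,
    # with device code for every supported GPU architecture at once.
    # Pinning it to a single capability builds a smaller variant for
    # that architecture only, e.g. an embedded Jetson target:
    cudaCapabilities = [ "8.7" ];
  };
}
```

Each distinct `cudaCapabilities` value is a separate build, which is why enabling per-architecture variants multiplies jobset cost as described above.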



Room Version: 6