
Nix Data Science

26 Jan 2024
[15:35:28] raitobezarius: Again, we are not talking about cache.nixos.org, are we?
[15:35:53] raitobezarius: Like, even if you have pointers inside your remote storage, that does not change much: you are still going to store a large buffer at some point, either in memory or on disk.
[15:36:08] benoitdr (replying to raitobezarius: "Again, we are not talking about cache.nixos.org, are we?"): That's what I understood, but maybe I got it wrong. (edited at 15:36:44)
[15:36:32] raitobezarius: "The Nix store" is not cache.nixos.org, right?
[15:36:46] raitobezarius: Everyone who is using Nix has "the Nix store" in their local filesystem; it is in /nix/store.
[15:37:12] raitobezarius: But yes, storing datasets in cache.nixos.org is mostly out of the question at the moment, for the cost reasons you mentioned.
[15:37:20] raitobezarius: Storing datasets in your own Nix store seems an interesting question to me, though.
[15:37:44] raitobezarius: Compared to… storing it locally on disk, mounting it from a remote location and buffering it, etc.
[15:38:27] benoitdr: Fully agree on that... trexd, can you clarify your request?
[15:48:04] trexd (replying to benoitdr: "Fully agree on that... trexd, can you clarify your request?"): I just meant /nix/store locally.
[15:51:11] benoitdr: OK, sorry for the confusion. Personally I'll stick to a local minio server to store and share datasets locally; I tend to prefer a clear separation between code and data ;-)
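
As a minimal sketch of what "storing a dataset in your own Nix store" can look like, assuming the dataset is reachable at a stable URL (the URL and hash below are placeholders, not a real dataset):

    # dataset.nix: a fixed-output fetch; the resulting file lives under /nix/store.
    # URL and hash are placeholders; run `nix-prefetch-url <url>` to get the real sha256.
    { pkgs ? import <nixpkgs> { } }:

    pkgs.fetchurl {
      url = "https://example.org/datasets/iris.csv";
      sha256 = pkgs.lib.fakeSha256;  # replace with the real hash; the hash-mismatch error prints it
    }

Building this with `nix-build dataset.nix` copies the file into /nix/store and prints the store path, which other derivations or a shell.nix can then reference.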
[20:49:40] benoitdr:

I have built a Docker image of an application that requires nvidia/cuda using dockerTools.buildImage. The image runs fine with nvidia-docker run ...; however, when I start a container from the same image via docker-compose, I get the following error:

    Error response from daemon: failed to create task for container: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error running hook #0: error running hook: exit status 1, stdout: , stderr: nvidia-container-cli: ldcache error: process /nix/store/cx01qk0qyylvkgisbwc7d3pk8sliccgh-glibc-2.38-27-bin/bin/ldconfig failed with error code: 1: unknown

I have added the few lines that are usually required to run nvidia/cuda images with docker-compose:

    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              device_ids: ['0']
              capabilities: [gpu]

Note that other images (from Docker Hub) that require nvidia/cuda work fine with these few lines.
Any idea what could go wrong?
[21:50:43] Someone S: Didn't read it in full, but do check out the nvidia-container-toolkit CDI PR, maybe.
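
For reference, the CDI route on NixOS would look roughly like the sketch below. This is an assumption-laden sketch rather than something from the thread: the option name hardware.nvidia-container-toolkit.enable is what newer NixOS releases expose (older releases used a different option path), so verify it against your channel before relying on it.

    # configuration.nix fragment (sketch; option names vary between NixOS releases)
    {
      # Generate CDI specifications for the NVIDIA GPUs so container runtimes
      # can request them via CDI instead of the legacy nvidia runtime hook.
      hardware.nvidia-container-toolkit.enable = true;

      virtualisation.docker.enable = true;
    }

With CDI, devices are then addressed by fully-qualified names such as nvidia.com/gpu=0 rather than bare indices; how that is passed through docker-compose depends on your Docker and Compose versions supporting CDI.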
29 Jan 2024
[16:36:08] bgrayburn joined the room.
31 Jan 2024
[03:36:07] federicodschonborn changed their profile picture.
[04:57:44] Alex Ou joined the room.
[06:21:45] federicodschonborn changed their profile picture.
3 Feb 2024
[02:50:47] Tanja (Old; I'm now @tanja:catgirl.cloud) joined the room.


