26 Jan 2024 |
raitobezarius | Again, we are not talking about cache.nixos.org, are we? | 15:35:28 |
raitobezarius | Like, even if you have pointers inside your remote storage, that doesn't change much: you are still going to store a large buffer at some point somewhere, either in memory or on disk | 15:35:53 |
benoitdr | In reply to @raitobezarius:matrix.org Again, we are not talking about cache.nixos.org, are we? that's what I understood but maybe I got it wrong | 15:36:08 |
raitobezarius | "The Nix store" is not cache.nixos.org, right? | 15:36:32 |
raitobezarius | Everyone who is using Nix has "the Nix store", in their local filesystem, it is in /nix/store | 15:36:46 |
raitobezarius | But yes, storing datasets in cache.nixos.org is mostly out of the question, at the moment, for the cost reasons you mentioned | 15:37:12 |
raitobezarius | Storing datasets in your own Nix store seems an interesting question to me though | 15:37:20 |
raitobezarius | Compared to… storing it locally on disk, mounting it from a remote location and buffering it, etc. | 15:37:44 |
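As a rough illustration of what "storing a dataset in your own Nix store" can look like, here is a minimal sketch using pkgs.fetchurl from nixpkgs; the URL is a placeholder invented for the example, and lib.fakeSha256 stands in for the real hash until Nix reports it:

{ pkgs ? import <nixpkgs> { } }:

# Pin a dataset into /nix/store as a fixed-output derivation.
# The URL is a placeholder, not something from this discussion;
# replace lib.fakeSha256 with the real hash once the first build reports it.
pkgs.fetchurl {
  url = "https://example.org/datasets/my-dataset.tar.gz";
  sha256 = pkgs.lib.fakeSha256;
}

The resulting store path sits in /nix/store like any build output, so other derivations can reference it and it can be moved between machines with nix copy, which is roughly the trade-off being weighed against a remote mount or a plain on-disk copy.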
benoitdr | Fully agree on that... trexd, can you clarify your request? | 15:38:27 |
@trexd:matrix.org | In reply to @benoitdr:matrix.org Fully agree on that... trexd, can you clarify your request? I just meant /nix/store locally. | 15:48:04 |
benoitdr | OK, sorry for the confusion; personally I'll stick to a local MinIO server to store and share datasets locally. I tend to prefer a clear separation between code and data ;-) | 15:51:11 |
benoitdr | I have built a Docker image of an application that requires nvidia/cuda using dockerTools.buildImage. The image runs fine with nvidia-docker run ...; however, when I start a container from the same image via docker-compose, I get the following error:
Error response from daemon: failed to create task for container: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error running hook #0: error running hook: exit status 1, stdout: , stderr: nvidia-container-cli: ldcache error: process /nix/store/cx01qk0qyylvkgisbwc7d3pk8sliccgh-glibc-2.38-27-bin/bin/ldconfig failed with error code: 1: unknown
I have added the few lines that are usually required to run nvidia/cuda images with docker-compose:
deploy:
  resources:
    reservations:
      devices:
        - driver: nvidia
          device_ids: ['0']
          capabilities: [gpu]
Note that other images (from Docker Hub) that require nvidia/cuda work fine with these few lines. Any idea what could be going wrong?
| 20:49:40 |
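For context, a dockerTools.buildImage call of the kind described above looks roughly like this sketch; the image name and its contents are placeholders, not the actual application from the message:

{ pkgs ? import <nixpkgs> { } }:

# Minimal sketch of an image built with dockerTools.buildImage.
# "my-cuda-app" and the contents are hypothetical; the real image would
# include the CUDA-using application instead of just a shell.
pkgs.dockerTools.buildImage {
  name = "my-cuda-app";
  tag = "latest";
  copyToRoot = pkgs.buildEnv {
    name = "image-root";
    paths = [ pkgs.bashInteractive ];
  };
  config.Cmd = [ "/bin/bash" ];
}

The build produces an image tarball that can be imported with docker load < result, after which docker-compose can reference the image by name.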
Someone S | didn't read it all, but maybe check out the nvidia-container-toolkit CDI PR | 21:50:43 |
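The suggestion refers to CDI (Container Device Interface) support for the NVIDIA container toolkit in nixpkgs/NixOS. A hedged sketch of the host side, assuming a NixOS release where that module is available; the exact option names depend on the release and on the PR in question, so verify them before use:

# Host configuration sketch; the option names are assumptions to check
# against the nvidia-container-toolkit CDI module mentioned above.
{ config, pkgs, ... }:
{
  virtualisation.docker.enable = true;
  hardware.nvidia-container-toolkit.enable = true;  # generates CDI specs for the GPUs
}

With CDI the GPU is handed to the container through generated device specs rather than the nvidia runtime hook, which is the hook whose ldconfig step fails in the error above.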