
NixOS CUDA

290 Members
CUDA packages maintenance and support in nixpkgs | https://github.com/orgs/NixOS/projects/27/ | https://nixos.org/manual/nixpkgs/unstable/#cuda



4 Oct 2025
Gaëtan Lepage (@glepage:matrix.org) 23:12:12
If you have a bit of time to investigate, please go on :)
lon (@longregen:matrix.org) 23:13:31
Yes, sorry, I deleted it because I saw your commit and it's the same as mine (save for the update script! I didn't know that was a pattern people in nixpkgs used, TIL)
lon (@longregen:matrix.org) 23:18:40
the nvidia/cutlass dependency can also be updated fwiw, with the update script
lon (@longregen:matrix.org) 23:18:43
[attachment: image.png]
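
The "update script" pattern mentioned above is the standard nixpkgs passthru.updateScript convention. A minimal sketch, with an entirely hypothetical package (only passthru.updateScript and nix-update-script are the actual nixpkgs conventions; the vllm/cutlass specifics are not reproduced here):

# sketch.nix: minimal illustration of the nixpkgs update-script pattern.
# The package itself is hypothetical; the point is passthru.updateScript.
{ lib, stdenv, fetchFromGitHub, nix-update-script }:

stdenv.mkDerivation rec {
  pname = "example";          # hypothetical package name
  version = "1.0.0";

  src = fetchFromGitHub {
    owner = "example-org";    # hypothetical owner/repo
    repo = "example";
    rev = "v${version}";
    hash = lib.fakeHash;      # placeholder; a real package pins the actual hash
  };

  # nixpkgs convention: expose an update script so maintainers (or bots like
  # r-ryantm) can bump version and hash automatically, e.g. via
  #   nix-shell maintainers/scripts/update.nix --argstr package example
  passthru.updateScript = nix-update-script { };
}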
Daniel Fahey (@daniel-fahey:matrix.org) 23:23:25
yeah, just started rewriting it
Daniel Fahey (@daniel-fahey:matrix.org) 23:27:58
How can you tell? Hydra? Got a link?
Daniel Fahey (@daniel-fahey:matrix.org) 23:45:17

Looks okay to me, some other problem? CUDA build?

[daniel@laptop:~/Source/nixpkgs]$ nix-build -I nixpkgs=https://github.com/NixOS/nixpkgs/archive/b967613ed760449a73eaa73d7b69eb45e857ce1a.tar.gz --expr 'with import <nixpkgs> { }; python313Packages.vllm'
unpacking 'https://github.com/NixOS/nixpkgs/archive/b967613ed760449a73eaa73d7b69eb45e857ce1a.tar.gz' into the Git cache...
/nix/store/amncczb34wd5zingwclr3sqa6q7kahay-python3.13-vllm-0.11.0

[daniel@laptop:~/Source/nixpkgs]$ ./result/bin/vllm --help
INFO 10-05 00:44:00 [__init__.py:216] Automatically detected platform cpu.
usage: vllm [-h] [-v] {chat,complete,serve,bench,collect-env,run-batch} ...

vLLM CLI

positional arguments:
  {chat,complete,serve,bench,collect-env,run-batch}
    chat                Generate chat completions via the running API server.
    complete            Generate text completions based on the given prompt via the running API server.
    collect-env         Start collecting environment information.
    run-batch           Run batch prompts and write results to file.

options:
  -h, --help            show this help message and exit
  -v, --version         show program's version number and exit

For full list:            vllm [subcommand] --help=all
For a section:            vllm [subcommand] --help=ModelConfig    (case-insensitive)
For a flag:               vllm [subcommand] --help=max-model-len  (_ or - accepted)
Documentation:            https://docs.vllm.ai
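
The pasted build above produces the default (CPU) variant, as the "Automatically detected platform cpu" line in the vLLM banner shows. A sketch of how the CUDA variant could be requested on the same pin, assuming the usual nixpkgs config flags (cudaSupport and allowUnfree are real nixpkgs config options; whether vllm's CUDA build actually succeeds on this revision is exactly the open question here):

# untested sketch: same pinned nixpkgs, but with CUDA enabled via config
nix-build -I nixpkgs=https://github.com/NixOS/nixpkgs/archive/b967613ed760449a73eaa73d7b69eb45e857ce1a.tar.gz \
  --expr 'with import <nixpkgs> { config.allowUnfree = true; config.cudaSupport = true; }; python313Packages.vllm'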


