Hugo | nix-build -I nixpkgs=. --arg config '{ allowUnfree = true; cudaSupport = true;}' -A python313Packages.triton.tests.axpy-cuda.gpuCheck
this derivation will be built:
/nix/store/2m1zkm221qr6ziw2qkbds3r37r57f7xj-test-cuda.drv
building '/nix/store/2m1zkm221qr6ziw2qkbds3r37r57f7xj-test-cuda.drv'...
Traceback (most recent call last):
File "/nix/store/biwmrywsnh5nvfxg13d319cx65956rvc-tester-cuda/bin/tester-cuda", line 38, in <module>
x = torch.rand(size, device='cuda')
File "/nix/store/419qp86g5l617y4pv5m0fgj04rhfnxrp-python3-3.13.6-env/lib/python3.13/site-packages/torch/cuda/__init__.py", line 412, in _lazy_init
torch._C._cuda_init()
~~~~~~~~~~~~~~~~~~~^^
RuntimeError: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx
error: builder for '/nix/store/2m1zkm221qr6ziw2qkbds3r37r57f7xj-test-cuda.drv' failed with exit code 1;
last 7 log lines:
> Traceback (most recent call last):
> File "/nix/store/biwmrywsnh5nvfxg13d319cx65956rvc-tester-cuda/bin/tester-cuda", line 38, in <module>
> x = torch.rand(size, device='cuda')
> File "/nix/store/419qp86g5l617y4pv5m0fgj04rhfnxrp-python3-3.13.6-env/lib/python3.13/site-packages/torch/cuda/__init__.py", line 412, in _lazy_init
> torch._C._cuda_init()
> ~~~~~~~~~~~~~~~~~~~^^
> RuntimeError: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx
For full logs, run:
nix log /nix/store/2m1zkm221qr6ziw2qkbds3r37r57f7xj-test-cuda.drv
nix-shell -I nixpkgs=. --arg config '{ allowUnfree = true; cudaSupport = true;}' -p python312Packages.torch
[nix-shell:~/Repos/hoh/nixpkgs]$ python
Python 3.12.11 (main, Jun 3 2025, 15:41:47) [GCC 14.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> torch.cuda.is_available()
True
| 09:36:11 |
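The contrast above is the expected sandbox behaviour: the gpuCheck derivation builds inside the Nix sandbox, which presumably has no /dev/nvidia* devices, so torch cannot find a driver there, while the nix-shell session runs on the host where the driver is visible. A minimal way to run the same test outside the sandbox, assuming the tests.axpy-cuda attribute without .gpuCheck is the tester script that writeGpuTestPython produces:

nix-build -I nixpkgs=. --arg config '{ allowUnfree = true; cudaSupport = true; }' -A python313Packages.triton.tests.axpy-cuda
./result/bin/tester-cuda

Running the sandboxed gpuCheck itself would need a builder set up to expose the GPU, e.g. a "cuda" entry in the system-features of nix.conf plus whatever configuration brings the driver devices into the sandbox.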
Hugo | Thanks connor (he/him) (UTC+2).
I can now launch the triton test.
However, when attempting to launch the tests of the unsloth library, nix builds Torch for Python 3.13 instead of Python 3.12. Torch does not support Python 3.13 yet, but it still attempts to build it, which confuses me.
diff --git a/pkgs/development/python-modules/unsloth/default.nix b/pkgs/development/python-modules/unsloth/default.nix
index 73f94721b5e0..e6473c3bfa1d 100644
--- a/pkgs/development/python-modules/unsloth/default.nix
+++ b/pkgs/development/python-modules/unsloth/default.nix
@@ -27,6 +27,9 @@
   hf-transfer,
   diffusers,
   torchvision,
+
+  # tests
+  cudaPackages,
 }:
 
 buildPythonPackage rec {
@@ -85,6 +88,19 @@ buildPythonPackage rec {
   # NotImplementedError: Unsloth: No NVIDIA GPU found? Unsloth currently only supports GPUs!
   dontUsePythonImportsCheck = true;
 
+  passthru.tests = {
+    import-cuda = cudaPackages.writeGpuTestPython
+      {
+        libraries = ps: [
+          ps.torch
+        ];
+      }
+      ''
+        import unsloth
+        unsloth.test()
+      '';
+  };
+
   meta = {
     description = "Finetune Llama 3.3, DeepSeek-R1 & Reasoning LLMs 2x faster with 70% less memory";
     homepage = "https://github.com/unslothai/unsloth";
nix-build -I nixpkgs=. --arg config '{ allowUnfree = true; cudaSupport = true;}' -A python312Packages.unsloth.tests.import-cuda
| 07:12:50 |
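The Python 3.13 torch in that build most likely comes from cudaPackages.writeGpuTestPython itself: it takes its interpreter from the default python3Packages set, which is Python 3.13 in this tree, rather than from the python312Packages set named in the -A attribute, so the unsupported 3.13 torch ends up in the test closure regardless. A rough sketch of pinning the test to the package's own interpreter, assuming writeGpuTestPython is instantiated with callPackage (so .override works) and exposes a python3Packages argument, and with python added to the arguments of default.nix; none of this is verified against nixpkgs:

  passthru.tests = {
    # Assumption: re-point writeGpuTestPython at this package's own interpreter
    # set (python.pkgs, i.e. Python 3.12 when built from python312Packages)
    # instead of the default Python 3.13 set.
    import-cuda =
      (cudaPackages.writeGpuTestPython.override { python3Packages = python.pkgs; })
        {
          libraries = ps: [ ps.torch ];
        }
        ''
          import unsloth
        '';
  };

If no such override is available, the fallback is to keep the test under the default interpreter and skip it until torch gains Python 3.13 support.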
matthewcroughan | adrian-gierakowski:
!!! Exception during processing !!! HIP error: invalid device function
HIP kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing AMD_SERIALIZE_KERNEL=3
Compile with `TORCH_USE_HIP_DSA` to enable device-side assertions.
Traceback (most recent call last):
File "/nix/store/dg5g3ypdsjvy0274156l74klx4wr0nbx-comfyui-unstable-2025-09-06/lib/python3.13/site-packages/execution.py", line 496, in execute
output_data, output_ui, has_subgraph, has_pending_tasks = await get_output_data(prompt_id, unique_id, obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, hidden_inputs=hidden_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/nix/store/dg5g3ypdsjvy0274156l74klx4wr0nbx-comfyui-unstable-2025-09-06/lib/python3.13/site-packages/execution.py", line 315, in get_output_data
return_values = await _async_map_node_over_list(prompt_id, unique_id, obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, hidden_inputs=hidden_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/nix/store/dg5g3ypdsjvy0274156l74klx4wr0nbx-comfyui-unstable-2025-09-06/lib/python3.13/site-packages/execution.py", line 289, in _async_map_node_over_list
await process_inputs(input_dict, i)
File "/nix/store/dg5g3ypdsjvy0274156l74klx4wr0nbx-comfyui-unstable-2025-09-06/lib/python3.13/site-packages/execution.py", line 277, in process_inputs
result = f(**inputs)
File "/nix/store/dg5g3ypdsjvy0274156l74klx4wr0nbx-comfyui-unstable-2025-09-06/lib/python3.13/site-packages/nodes.py", line 74, in encode
return (clip.encode_from_tokens_scheduled(tokens), )
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^
File "/nix/store/dg5g3ypdsjvy0274156l74klx4wr0nbx-comfyui-unstable-2025-09-06/lib/python3.13/site-packages/comfy/sd.py", line 170, in encode_from_tokens_scheduled
pooled_dict = self.encode_from_tokens(tokens, return_pooled=return_pooled, return_dict=True)
File "/nix/store/dg5g3ypdsjvy0274156l74klx4wr0nbx-comfyui-unstable-2025-09-06/lib/python3.13/site-packages/comfy/sd.py", line 232, in encode_from_tokens
o = self.cond_stage_model.encode_token_weights(tokens)
File "/nix/store/dg5g3ypdsjvy0274156l74klx4wr0nbx-comfyui-unstable-2025-09-06/lib/python3.13/site-packages/comfy/sd1_clip.py", line 689, in encode_token_weights
out = getattr(self, self.clip).encode_token_weights(token_weight_pairs)
File "/nix/store/dg5g3ypdsjvy0274156l74klx4wr0nbx-comfyui-unstable-2025-09-06/lib/python3.13/site-packages/comfy/sd1_clip.py", line 45, in encode_token_weights
o = self.encode(to_encode)
File "/nix/store/dg5g3ypdsjvy0274156l74klx4wr0nbx-comfyui-unstable-2025-09-06/lib/python3.13/site-packages/comfy/sd1_clip.py", line 291, in encode
return self(tokens)
File "/nix/store/jzm64j9dp50xs770h3w7n8h9pj6mpkjp-python3.13-torch-2.8.0/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1773, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^
File "/nix/store/jzm64j9dp50xs770h3w7n8h9pj6mpkjp-python3.13-torch-2.8.0/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1784, in _call_impl
return forward_call(*args, **kwargs)
File "/nix/store/dg5g3ypdsjvy0274156l74klx4wr0nbx-comfyui-unstable-2025-09-06/lib/python3.13/site-packages/comfy/sd1_clip.py", line 253, in forward
embeds, attention_mask, num_tokens, embeds_info = self.process_tokens(tokens, device)
~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^
File "/nix/store/dg5g3ypdsjvy0274156l74klx4wr0nbx-comfyui-unstable-2025-09-06/lib/python3.13/site-packages/comfy/sd1_clip.py", line 204, in process_tokens
tokens_embed = self.transformer.get_input_embeddings()(tokens_embed, out_dtype=torch.float32)
File "/nix/store/jzm64j9dp50xs770h3w7n8h9pj6mpkjp-python3.13-torch-2.8.0/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1773, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^
File "/nix/store/jzm64j9dp50xs770h3w7n8h9pj6mpkjp-python3.13-torch-2.8.0/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1784, in _call_impl
return forward_call(*args, **kwargs)
File "/nix/store/dg5g3ypdsjvy0274156l74klx4wr0nbx-comfyui-unstable-2025-09-06/lib/python3.13/site-packages/comfy/ops.py", line 270, in forward
return self.forward_comfy_cast_weights(*args, **kwargs)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^
File "/nix/store/dg5g3ypdsjvy0274156l74klx4wr0nbx-comfyui-unstable-2025-09-06/lib/python3.13/site-packages/comfy/ops.py", line 266, in forward_comfy_cast_weights
return torch.nn.functional.embedding(input, weight, self.padding_idx, self.max_norm, self.norm_type, self.scale_grad_by_freq, self.sparse).to(dtype=output_dtype)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/nix/store/jzm64j9dp50xs770h3w7n8h9pj6mpkjp-python3.13-torch-2.8.0/lib/python3.13/site-packages/torch/nn/functional.py", line 2546, in embedding
return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
torch.AcceleratorError: HIP error: invalid device function
HIP kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing AMD_SERIALIZE_KERNEL=3
Compile with `TORCH_USE_HIP_DSA` to enable device-side assertions.
| 19:53:26 |
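On the ROCm side, "HIP error: invalid device function" generally means the loaded libraries contain no kernels for the GPU's gfx architecture, i.e. this torch build was compiled for a different (or empty) set of ROCm targets than the installed card. The usual first steps are to check the architecture with rocminfo and make sure torch is built with that target included; on some consumer RDNA cards, exporting HSA_OVERRIDE_GFX_VERSION to a supported architecture is a known workaround. A rough sketch, assuming the nixpkgs torch expression accepts gpuTargets as an override argument (the attribute name is an assumption, not verified):

  # Sketch only: rebuild torch with the card's gfx ISA included.
  # Replace gfx1100 with the architecture reported by rocminfo (e.g. gfx1030, gfx90a).
  python3Packages.torch.override {
    rocmSupport = true;
    gpuTargets = [ "gfx1100" ];
  }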