
NixOS CUDA

211 Members
CUDA packages maintenance and support in nixpkgs | https://github.com/orgs/NixOS/projects/27/ | https://nixos.org/manual/nixpkgs/unstable/#cuda

42 Servers



17 Sep 2024
@connorbaker:matrix.orgconnor (he/him) (UTC-7) SomeoneSerge (utc+3): if you have a chance, would you take one last look at https://github.com/NixOS/nixpkgs/pull/339619? I added tests (several of which fail due to Torch requiring Magma be built with the same version of CUDA, which is something I'll handle in a follow-up PR) 15:39:56
18 Sep 2024
@evax:matrix.orgevaxWe have a flake based setup using the nixos cache, the cuda-maintainer cache and our own private cache. For some reason on our CI system cuda_nvcc always ends up being rebuilt from scratch while we don't have the problem when developing locally - does anybody have any idea regarding what could cause this? At the end of the CI build, we sign recursively anything linked to ./result and upload to our private cache.06:30:55
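A minimal sketch of the signing-and-upload step described above; the key path and cache URL are placeholders, not the actual setup:

    # Sign the whole closure of ./result, then push it to the private cache.
    nix store sign --recursive --key-file /etc/nix/private-signing-key ./result
    nix copy --to 'https://cache.example.com' ./result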
@keiichi:matrix.orgtetoyou can diff the two derivations. Was it nix-diff that showed a nice result?10:37:13
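A sketch of that diffing approach, assuming the package is exposed as cudaPackages.cuda_nvcc (the attribute path and store hashes are illustrative):

    # Print the .drv path on the local machine and again on CI:
    nix path-info --derivation .#cudaPackages.cuda_nvcc
    # -> /nix/store/<hash>-cuda_nvcc-<version>.drv

    # Compare the two derivations; nix-diff prints a readable tree of which
    # inputs or attributes differ, i.e. what is forcing the rebuild:
    nix-diff /nix/store/<local-hash>-cuda_nvcc.drv /nix/store/<ci-hash>-cuda_nvcc.drv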
@myrkskog:matrix.orgmyrkskogRedacted or Malformed Event12:52:27
@myrkskog:matrix.orgmyrkskogAnyone know which Linux kernel and driver is most stable and performant for a Quadro RTX 4000? Finding it hard to gather this information.13:06:17
@ss:someonex.netSomeoneSerge (utc+3)Mhm we should make a wiki page with a list of setups we run13:31:24
@myrkskog:matrix.orgmyrkskogGreat I’ll have a look. Thank you.13:32:08
@ss:someonex.netSomeoneSerge (utc+3)I mean there isn't one yet13:32:13
@ss:someonex.netSomeoneSerge (utc+3)Just acknowledging there is a visibility/discoverability issue here, and we could just do something like what nixos-mobile or postmarketos do: a table with contributors and their devices, and their caches, and the modules and packages they actively use13:34:23
@myrkskog:matrix.orgmyrkskogGot it. Well that would be fantastic 👍13:37:42
21 Sep 2024
@aidalgol:matrix.orgaidalgol Not CUDA-related, but Nvidia-specific: I have no idea where to even start troubleshooting this: https://github.com/NixOS/nixpkgs/pull/341219#issuecomment-2365253518 22:20:37
23 Sep 2024
@connorbaker:matrix.orgconnor (he/him) (UTC-7) Kevin Mittman: does NVIDIA happen to have JSON (or otherwise structured) versions of their dependency constraints for packages somewhere, or are the tables on the docs for each respective package the only source? I'm working on update scripts and I'd like to avoid the manual stage of "go look on the website, find the table (it may have moved), and encode the contents as a Nix expression" 18:39:25
24 Sep 2024
@hexa:lossy.networkhexa (UTC+1)
_______ TestKernelLinearOperatorLinOpReturn.test_solve_matrix_broadcast ________

self = <test.operators.test_kernel_linear_operator.TestKernelLinearOperatorLinOpReturn testMethod=test_solve_matrix_broadcast>

    def test_solve_matrix_broadcast(self):
        linear_op = self.create_linear_op()
    
        # Right hand size has one more batch dimension
        batch_shape = torch.Size((3, *linear_op.batch_shape))
        rhs = torch.randn(*batch_shape, linear_op.size(-1), 5)
        self._test_solve(rhs)
    
        if linear_op.ndimension() > 2:
            # Right hand size has one fewer batch dimension
            batch_shape = torch.Size(linear_op.batch_shape[1:])
            rhs = torch.randn(*batch_shape, linear_op.size(-1), 5)
>           self._test_solve(rhs)

linear_operator/test/linear_operator_test_case.py:1115: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
linear_operator/test/linear_operator_test_case.py:615: in _test_solve
    self.assertAllClose(arg.grad, arg_copy.grad, **self.tolerances["grad"])
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <test.operators.test_kernel_linear_operator.TestKernelLinearOperatorLinOpReturn testMethod=test_solve_matrix_broadcast>
tensor1 = tensor([[[[ 1.8514e+04,  7.1797e+03, -1.1073e+04, -6.6690e+03,  1.2985e+04,
            6.8468e+03],
          [ 1.685...  -3.0153e+04],
          [-9.0042e+04, -1.3429e+04, -3.1822e+04,  1.3839e+04,  5.9735e+04,
           -5.4315e+04]]]])
tensor2 = tensor([[[[ 1.8514e+04,  7.1797e+03, -1.1073e+04, -6.6690e+03,  1.2985e+04,
            6.8468e+03],
          [ 1.685...  -3.0153e+04],
          [-9.0042e+04, -1.3429e+04, -3.1822e+04,  1.3839e+04,  5.9735e+04,
           -5.4315e+04]]]])
rtol = 0.03, atol = 1e-05, equal_nan = False

    def assertAllClose(self, tensor1, tensor2, rtol=1e-4, atol=1e-5, equal_nan=False):
        if not tensor1.shape == tensor2.shape:
            raise ValueError(f"tensor1 ({tensor1.shape}) and tensor2 ({tensor2.shape}) do not have the same shape.")
    
        if torch.allclose(tensor1, tensor2, rtol=rtol, atol=atol, equal_nan=equal_nan):
            return True
    
        if not equal_nan:
            if not torch.equal(tensor1, tensor1):
                raise AssertionError(f"tensor1 ({tensor1.shape}) contains NaNs")
            if not torch.equal(tensor2, tensor2):
                raise AssertionError(f"tensor2 ({tensor2.shape}) contains NaNs")
    
        rtol_diff = (torch.abs(tensor1 - tensor2) / torch.abs(tensor2)).view(-1)
        rtol_diff = rtol_diff[torch.isfinite(rtol_diff)]
        rtol_max = rtol_diff.max().item()
    
        atol_diff = (torch.abs(tensor1 - tensor2) - torch.abs(tensor2).mul(rtol)).view(-1)
        atol_diff = atol_diff[torch.isfinite(atol_diff)]
        atol_max = atol_diff.max().item()
    
>       raise AssertionError(
            f"tensor1 ({tensor1.shape}) and tensor2 ({tensor2.shape}) are not close enough. \n"
            f"max rtol: {rtol_max:0.8f}\t\tmax atol: {atol_max:0.8f}"
        )
E       AssertionError: tensor1 (torch.Size([2, 3, 4, 6])) and tensor2 (torch.Size([2, 3, 4, 6])) are not close enough. 
E       max rtol: 0.03577567            max atol: 0.00741313

linear_operator/test/base_test_case.py:46: AssertionError
11:40:36
@hexa:lossy.networkhexa (UTC+1)I think this one has been failing for me on the linear-operator package11:41:02
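For context on the assertion: torch.allclose, which assertAllClose wraps, passes only if |tensor1 - tensor2| <= atol + rtol * |tensor2| holds elementwise, so the max rtol of ~0.0358 against rtol=0.03 above is a marginal numeric mismatch rather than a shape or logic error. A minimal illustration, with values chosen to mirror the failure:

    import torch

    # allclose passes iff |a - b| <= atol + rtol * |b| holds elementwise.
    a = torch.tensor([1.0000])
    b = torch.tensor([1.0358])
    print(torch.allclose(a, b, rtol=0.03, atol=1e-5))  # False: 0.0358 > 1e-5 + 0.03 * 1.0358
    print(torch.allclose(a, b, rtol=0.04, atol=1e-5))  # True once rtol is loosened slightly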
@connorbaker:matrix.orgconnor (he/him) (UTC-7) As a sanity check: has anyone been able to successfully use torch.compile to speed up model training, or do they also get a Python stack trace when torch tries to call into OpenAI’s Triton? 15:23:08
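For reference, a minimal sketch of the torch.compile pattern in question (PyTorch 2.x; the model and shapes are illustrative). On CUDA the default inductor backend generates Triton kernels, so a Triton version mismatch typically surfaces as a stack trace on the first compiled call:

    import torch
    import torch.nn as nn

    model = nn.Linear(128, 10).cuda()
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
    compiled = torch.compile(model)  # compilation is lazy; nothing runs yet

    x = torch.randn(32, 128, device="cuda")
    loss = compiled(x).sum()  # first call triggers inductor -> Triton codegen
    loss.backward()
    optimizer.step()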
25 Sep 2024
@ss:someonex.netSomeoneSerge (utc+3)It used to work but now our triton is lagging 1 major version behind19:36:58
@glepage:matrix.orgGaétan LepageBecause those geniuses are not able to tag a freaking release20:20:55
@glepage:matrix.orgGaétan Lepage https://github.com/triton-lang/triton/issues/3535 20:21:18
@ss:someonex.netSomeoneSerge (utc+3)unstable-yyyy-mm-dd is ok for us; there were some minor but unresolved issues with the PR that does the bump though20:23:04
26 Sep 2024
@connorbaker:matrix.orgconnor (he/him) (UTC-7)
In reply to @glepage:matrix.org
https://github.com/triton-lang/triton/issues/3535
Well that’s an infuriating read
16:33:18
@glepage:matrix.orgGaétan LepageIt's OK, OpenAI is just a small startup with only a few people. And deep learning is not even their main activity17:07:38
@connorbaker:matrix.orgconnor (he/him) (UTC-7) Yeah and they're definitely not a for-profit organization 17:20:14
@adam:robins.wtfadamcstephens"open" is in their name17:24:26


