| 6 Mar 2023 |
hexa | then they use scripts to install
- openssl
- libpng
- jni
- patchelf (yay)
- mkl
- conda (nay)
| 22:36:17 |
hexa | nice | 22:36:50 |
hexa | yeah, I'm tired. | 22:37:23 |
hexa | https://github.com/pytorch/builder/blob/3eb479e831d3d8bd80d6c71203a51a4d22f93c7f/libtorch/Dockerfile#L34 | 22:39:22 |
hexa | that is that docker file | 22:39:26 |
SomeoneSerge (back on matrix) | Halfway there, we just need to know what to run inside the container | 22:40:01 |
hexa | maybe https://github.com/pytorch/builder/blob/3eb479e831d3d8bd80d6c71203a51a4d22f93c7f/wheel/build_wheel.sh | 22:40:21 |
SomeoneSerge (back on matrix) | I'm beginning to wonder if we should go back to the pytorch repo | 22:40:30 |
SomeoneSerge (back on matrix) | In reply to @hexa:lossy.network ("maybe https://github.com/pytorch/builder/blob/3eb479e831d3d8bd80d6c71203a51a4d22f93c7f/wheel/build_wheel.sh"): This one is, if we trust pytorch discourse, for Darwin | 22:40:54 |
SomeoneSerge (back on matrix) | And there's manywheel for Linux | 22:41:02 |
SomeoneSerge (back on matrix) | But all of these scripts are full of conditional flags | 22:41:10 |
SomeoneSerge (back on matrix) | So, somewhere out there something must call them with certain flags switched on 🤔 | 22:41:31 |
hexa | https://github.com/pytorch/builder/blob/3eb479e831d3d8bd80d6c71203a51a4d22f93c7f/wheel/build_all.sh#L12 | 22:42:10 |
hexa | yep, darwin | 22:42:13 |
SomeoneSerge (back on matrix) | .github/workflows/_binary-build-linux.yml
203: docker exec -t "${container_name}" bash -c "source ${BINARY_ENV_FILE} && bash /builder/${{ inputs.PACKAGE_TYPE }}/build.sh"
| 22:44:35 |
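A minimal sketch of replaying that step by hand, assuming the pytorch/manylinux-builder image and the manywheel package type (the tag, mounts, and environment are assumptions; the real values come from the workflow matrix and BINARY_ENV_FILE):

    # start a throwaway container from an assumed builder image
    docker run -d -t --name builder \
        -v "$PWD/builder:/builder" -v "$PWD/pytorch:/pytorch" \
        pytorch/manylinux-builder:cpu
    # run the package-type build script, as the workflow does at line 203
    docker exec -t builder bash -c 'bash /builder/manywheel/build.sh'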
SomeoneSerge (back on matrix) | .github/workflows/_linux-build.yml
161: docker exec -t "${container_name}" sh -c '.jenkins/pytorch/build.sh'
| 22:45:01 |
SomeoneSerge (back on matrix) | Yes, I think this is the right one: https://github.com/pytorch/pytorch/blob/39e8311a29b5713c8858cab73a8f713a7f3d531c/.github/workflows/_binary-build-linux.yml#L205
...but they still take the flags from elsewhere and just propagate them | 22:51:02 |
SomeoneSerge (back on matrix) | aaaaaand 0 workflows run https://github.com/pytorch/pytorch/actions/workflows/_binary-build-linux.yml | 22:51:55 |
hexa | yeah, why would they run that 😄 | 22:52:07 |
SomeoneSerge (back on matrix) | https://github.com/pytorch/pytorch/actions/runs/4337823562 | 22:54:01 |
SomeoneSerge (back on matrix) | Here https://github.com/pytorch/pytorch/actions/runs/4337823562/jobs/7574087583#step:14:305 | 22:55:00 |
SomeoneSerge (back on matrix) | -DBUILD_LIBTORCH_CPU_WITH_DEBUG=0
Ok, how do we check that we don't have any debug symbols in our libs?
| 22:55:59 |
hexa | objdump --syms | 22:59:10 |
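A minimal sketch of that check, with libtorch_cpu.so standing in for whichever library was built (the path is illustrative):

    objdump --syms libtorch_cpu.so | head      # "no symbols" means the symbol table is stripped
    objdump -h libtorch_cpu.so | grep debug    # any .debug_* section means debug info survived
    file libtorch_cpu.so                       # reports "with debug_info" when symbols are kept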
SomeoneSerge (back on matrix) | -DUSE_NCCL=1 | 22:59:17 |
SomeoneSerge (back on matrix) | hmmm, I didn't even know it could be built without CUDA | 23:00:18 |
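A quick runtime probe for both, assuming an importable torch build (is_nccl_available reflects compile-time NCCL support):

    python -c 'import torch; print(torch.cuda.is_available(), torch.distributed.is_nccl_available())'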
SomeoneSerge (back on matrix) | In reply to @hexa:lossy.network ("objdump --syms"): Seems fine | 23:04:58 |
hexa | agreed | 23:05:05 |
SomeoneSerge (back on matrix) | -DUSE_FBGEMM? | 23:07:02 |
hexa |
FBGEMM (Facebook GEneral Matrix Multiplication) is a low-precision, high-performance matrix-matrix multiplication and convolution library
| 23:09:25 |
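One way to see whether a given torch build actually picked FBGEMM up, assuming quantization is enabled (FBGEMM is the usual x86 quantized backend):

    python -c 'import torch; print(torch.backends.quantized.supported_engines)'
    # a build with USE_FBGEMM=ON should list 'fbgemm' among the engines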
SomeoneSerge (back on matrix) | ❯ nix log nixpkgs#python3Packages.torch
...
-- USE_EIGEN_FOR_BLAS : ON
-- USE_FBGEMM : ON
-- USE_FAKELOWP : OFF
-- USE_KINETO : ON
...
| 23:09:47 |
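The same USE_* summary can be grepped straight out of the cached build log; a sketch:

    nix log nixpkgs#python3Packages.torch | grep 'USE_'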