Initializing libiomp5.dylib, but found libomp.dylib already initialized. #78490

@datumbox

Description

🐛 Describe the bug

The issue appears on macOS with Python 3.8. It started after updating to the latest nightly, 1.13.0.dev20220525-py3.8_0, from core (I was previously on 1.12.0.dev20220309-py3.8_0, so the issue could have been introduced earlier than May 25th). I get the following error when importing numpy and torch together:

$ python -c "import numpy;import torch"
OMP: Error #15: Initializing libiomp5.dylib, but found libomp.dylib already initialized.
OMP: Hint This means that multiple copies of the OpenMP runtime have been linked into the program. That is dangerous, since it can degrade performance or cause incorrect results. The best thing to do is to ensure that only a single OpenMP runtime is linked into the process, e.g. by avoiding static linking of the OpenMP runtime in any library. As an unsafe, unsupported, undocumented workaround you can set the environment variable KMP_DUPLICATE_LIB_OK=TRUE to allow the program to continue to execute, but that may cause crashes or silently produce incorrect results. For more information, please see http://www.intel.com/software/products/support/.
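One way to see where the two runtimes come from is to list the OpenMP dylibs in the active conda environment (libiomp5.dylib typically ships with conda-forge MKL, libomp.dylib with PyTorch's bundled runtime). The commands below are an illustrative diagnostic, not part of the repro; paths depend on the environment:

```shell
# Illustrative diagnostic: list OpenMP runtime dylibs in the active
# conda environment. Seeing both libomp.dylib and libiomp5.dylib
# means two runtimes can end up loaded into the same process.
find "${CONDA_PREFIX:-.}/lib" -name 'lib*omp*.dylib' 2>/dev/null || true

# On macOS, otool can additionally show which runtime a given binary
# links against, e.g.:
#   otool -L <path-to-extension>.so | grep -i omp
```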

$ python3 -m torch.utils.collect_env
Collecting environment information...
PyTorch version: 1.13.0.dev20220525
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A

OS: macOS 12.3.1 (x86_64)
GCC version: Could not collect
Clang version: 12.0.0 (clang-1200.0.32.21)
CMake version: version 3.18.4
Libc version: N/A

Python version: 3.8.12 | packaged by conda-forge | (default, Oct 12 2021, 21:50:38)  [Clang 11.1.0 ] (64-bit runtime)
Python platform: macOS-10.16-x86_64-i386-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

Versions of relevant libraries:
[pip3] efficientnet-pytorch==0.7.1
[pip3] mypy==0.931
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.22.4
[pip3] torch==1.13.0.dev20220525
[pip3] torchdata==0.4.0a0+652986b
[pip3] torchvision==0.14.0a0+9a72fd6
[pip3] torchviz==0.0.2
[conda] blas                      2.112                       mkl    conda-forge
[conda] blas-devel                3.9.0              12_osx64_mkl    conda-forge
[conda] efficientnet-pytorch      0.7.1                    pypi_0    pypi
[conda] libblas                   3.9.0              12_osx64_mkl    conda-forge
[conda] libcblas                  3.9.0              12_osx64_mkl    conda-forge
[conda] liblapack                 3.9.0              12_osx64_mkl    conda-forge
[conda] liblapacke                3.9.0              12_osx64_mkl    conda-forge
[conda] mkl                       2021.4.0           h89fa619_689    conda-forge
[conda] mkl-devel                 2021.4.0           h694c41f_690    conda-forge
[conda] mkl-include               2021.4.0           hf224eb6_689    conda-forge
[conda] numpy                     1.22.4           py38h3ad0702_0    conda-forge
[conda] pytorch                   1.13.0.dev20220525         py3.8_0    pytorch-nightly
[conda] torchdata                 0.4.0a0+652986b          pypi_0    pypi
[conda] torchvision               0.14.0a0+9a72fd6           dev_0    <develop>
[conda] torchviz                  0.0.2                    pypi_0    pypi

Strangely, importing torch first works:

$ python -c "import torch;import numpy;print('works')"
works

Setting the environment variable KMP_DUPLICATE_LIB_OK=TRUE works around the issue when invoking from the console:

$ KMP_DUPLICATE_LIB_OK=TRUE python -c "import numpy;import torch;print('works')"
works

Sometimes I get segfaults though, so this doesn't seem like a stable solution.
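For scripts where reordering imports isn't practical, the same (unsafe) workaround can be applied from Python, as long as it runs before numpy/torch are imported. The helper name below is hypothetical, and the caveat from the OMP hint still applies: this may crash or silently produce incorrect results.

```python
import os
import sys

def allow_duplicate_omp_runtimes():
    """Hypothetical helper: permit multiple OpenMP runtimes in one process.

    Must run before numpy/torch are imported. This is the same unsafe,
    unsupported workaround as the shell variant and may crash or
    silently produce incorrect results.
    """
    if sys.platform == "darwin":  # the conflict reported here is macOS-specific
        os.environ.setdefault("KMP_DUPLICATE_LIB_OK", "TRUE")

allow_duplicate_omp_runtimes()
# numpy and torch can now be imported in either order.
```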

Versions

Latest Core nightly (20220525).
