
Failed sanity test for GPU install v2.7.10+xpu (Windows/pip) with ModuleNotFoundError: No module named 'oneccl_bindings_for_pytorch' #820

Open
@rkilchmn

Description

Describe the bug

I installed the GPU build v2.7.10+xpu (Windows/pip) following the installation instructions.

Then I ran the sanity test:
python -c "import torch; import intel_extension_for_pytorch as ipex; print(torch.__version__); print(ipex.__version__); [print(f'[{i}]: {torch.xpu.get_device_properties(i)}') for i in range(torch.xpu.device_count())];"

and got this import error: ModuleNotFoundError: No module named 'oneccl_bindings_for_pytorch'

python -c "import torch; import intel_extension_for_pytorch as ipex; print(torch.__version__); print(ipex.__version__); [print(f'[{i}]: {torch.xpu.get_device_properties(i)}') for i in range(torch.xpu.device_count())];" [W430 12:16:51.000000000 OperatorEntry.cpp:161] Warning: Warning only once for all operators, other operators may also be overridden. Overriding a previously registered kernel for the same operator and the same dispatch key operator: aten::geometric_(Tensor(a!) self, float p, *, Generator? generator=None) -> Tensor(a!) registered at C:\actions-runner\_work\pytorch\pytorch\pytorch\build\aten\src\ATen\RegisterSchema.cpp:6 dispatch key: XPU previous kernel: registered at C:\actions-runner\_work\pytorch\pytorch\pytorch\aten\src\ATen\VmapModeRegistrations.cpp:37 new kernel: registered at H:\frameworks.ai.pytorch.ipex-gpu\build\Release\csrc\gpu\csrc\gpu\xpu\ATen\RegisterXPU_0.cpp:186 (function operator ()) Traceback (most recent call last): File "<string>", line 1, in <module> File "c:\Users\myuser\Documents\project\openedai-whisper-ipex-llm\.conda\Lib\site-packages\intel_extension_for_pytorch\__init__.py", line 127, in <module> from . import xpu File "c:\Users\myuser\Documents\project\openedai-whisper-ipex-llm\.conda\Lib\site-packages\intel_extension_for_pytorch\xpu\__init__.py", line 20, in <module> from .utils import * File "c:\Users\myuser\Documents\project\openedai-whisper-ipex-llm\.conda\Lib\site-packages\intel_extension_for_pytorch\xpu\utils.py", line 6, in <module> from .. import frontend File "c:\Users\myuser\Documents\project\openedai-whisper-ipex-llm\.conda\Lib\site-packages\intel_extension_for_pytorch\frontend.py", line 9, in <module> from .nn import utils File "c:\Users\myuser\Documents\project\openedai-whisper-ipex-llm\.conda\Lib\site-packages\intel_extension_for_pytorch\nn\utils\__init__.py", line 1, in <module> from intel_extension_for_pytorch.nn.utils import _weight_prepack File "c:\Users\myuser\Documents\project\openedai-whisper-ipex-llm\.conda\Lib\site-packages\intel_extension_for_pytorch\nn\utils\_weight_prepack.py", line 121, in <module> from deepspeed import comm File "c:\Users\myuser\Documents\project\openedai-whisper-ipex-llm\.conda\Lib\site-packages\deepspeed\__init__.py", line 25, in <module> from . import ops File "c:\Users\myuser\Documents\project\openedai-whisper-ipex-llm\.conda\Lib\site-packages\deepspeed\ops\__init__.py", line 6, in <module> from . 
import adam File "c:\Users\myuser\Documents\project\openedai-whisper-ipex-llm\.conda\Lib\site-packages\deepspeed\ops\adam\__init__.py", line 6, in <module> from .cpu_adam import DeepSpeedCPUAdam File "c:\Users\myuser\Documents\project\openedai-whisper-ipex-llm\.conda\Lib\site-packages\deepspeed\ops\adam\cpu_adam.py", line 8, in <module> from deepspeed.utils import logger File "c:\Users\myuser\Documents\project\openedai-whisper-ipex-llm\.conda\Lib\site-packages\deepspeed\utils\__init__.py", line 10, in <module> from .groups import * File "c:\Users\myuser\Documents\project\openedai-whisper-ipex-llm\.conda\Lib\site-packages\deepspeed\utils\groups.py", line 28, in <module> from deepspeed import comm as dist File "c:\Users\myuser\Documents\project\openedai-whisper-ipex-llm\.conda\Lib\site-packages\deepspeed\comm\__init__.py", line 7, in <module> from .comm import * File "c:\Users\myuser\Documents\project\openedai-whisper-ipex-llm\.conda\Lib\site-packages\deepspeed\comm\comm.py", line 31, in <module> from deepspeed.comm.ccl import CCLBackend File "c:\Users\myuser\Documents\project\openedai-whisper-ipex-llm\.conda\Lib\site-packages\deepspeed\comm\ccl.py", line 11, in <module> from deepspeed.ops.op_builder import NotImplementedBuilder File "c:\Users\myuser\Documents\project\openedai-whisper-ipex-llm\.conda\Lib\site-packages\deepspeed\ops\op_builder\__init__.py", line 53, in <module> this_module.__dict__[member_name] = builder_closure(member_name) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "c:\Users\myuser\Documents\project\openedai-whisper-ipex-llm\.conda\Lib\site-packages\deepspeed\ops\op_builder\__init__.py", line 41, in builder_closure builder = get_accelerator().get_op_builder(member_name) ^^^^^^^^^^^^^^^^^ File "c:\Users\myuser\Documents\project\openedai-whisper-ipex-llm\.conda\Lib\site-packages\deepspeed\accelerator\real_accelerator.py", line 186, in get_accelerator from .xpu_accelerator import XPU_Accelerator File "c:\Users\myuser\Documents\project\openedai-whisper-ipex-llm\.conda\Lib\site-packages\deepspeed\accelerator\xpu_accelerator.py", line 9, in <module> **import oneccl_bindings_for_pytorch** # noqa: F401 # type: ignore ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ **ModuleNotFoundError: No module named 'oneccl_bindings_for_pytorch'**
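
The traceback shows the import chain that fails: intel_extension_for_pytorch's _weight_prepack.py imports deepspeed.comm once DeepSpeed is installed, and DeepSpeed's XPU accelerator in turn imports oneccl_bindings_for_pytorch. A minimal diagnostic sketch, assuming the same conda environment, that checks each link of that chain without importing the heavy packages:

# Diagnostic sketch: check each link of the import chain from the traceback above.
# importlib.util.find_spec() only locates the modules, it does not import them.
import importlib.util

for name in (
    "intel_extension_for_pytorch",   # the package being sanity-tested
    "deepspeed",                     # pulls in the CCL backend for XPU
    "oneccl_bindings_for_pytorch",   # torch-ccl, the module reported as missing
):
    spec = importlib.util.find_spec(name)
    print(f"{name}: {'found' if spec is not None else 'MISSING'}")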

Versions

PyTorch version: 2.7.0+xpu
PyTorch CXX11 ABI: No
IPEX version: N/A
IPEX commit: N/A
Build type: N/A

OS: Microsoft Windows 11 Pro (10.0.26100 64-bit)
GCC version: N/A
Clang version: N/A
IGC version: N/A
CMake version: N/A
Libc version: N/A

Python version: 3.11.11 | packaged by Anaconda, Inc. | (main, Dec 11 2024, 16:34:19) [MSC v.1929 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.26100-SP0
Is XPU available: N/A
DPCPP runtime: N/A
MKL version: N/A

GPU models and configuration onboard:

  • Intel(R) Iris(R) Xe Graphics

GPU models and configuration detected:
N/A

Driver version:

  • 32.0.101.5972 (20240819000000.***+)

CPU:
Description: Intel64 Family 6 Model 140 Stepping 1
Manufacturer: GenuineIntel
Name: 11th Gen Intel(R) Core(TM) i5-1135G7 @ 2.40GHz
NumberOfCores: 4
NumberOfEnabledCore: 4
NumberOfLogicalProcessors: 8
ThreadCount: 8

Versions of relevant libraries:
[conda] deepspeed 0.15.0 pypi_0 pypi
[conda] dpcpp-cpp-rt 2025.0.5 pypi_0 pypi
[conda] intel-cmplr-lib-rt 2025.0.5 pypi_0 pypi
[conda] intel-cmplr-lib-ur 2025.0.5 pypi_0 pypi
[conda] intel-cmplr-lic-rt 2025.0.5 pypi_0 pypi
[conda] intel-extension-for-pytorch 2.7.10+xpu pypi_0 pypi
[conda] intel-extension-for-transformers 1.4.2 pypi_0 pypi
[conda] intel-opencl-rt 2025.0.5 pypi_0 pypi
[conda] intel-openmp 2025.0.5 pypi_0 pypi
[conda] intel-pti 0.10.1 pypi_0 pypi
[conda] intel-sycl-rt 2025.0.5 pypi_0 pypi
[conda] libuv 1.48.0 h827c3e9_0
[conda] mkl 2025.0.1 pypi_0 pypi
[conda] mkl-dpcpp 2025.0.1 pypi_0 pypi
[conda] numpy 1.26.4 pypi_0 pypi
[conda] onemkl-sycl-blas 2025.0.1 pypi_0 pypi
[conda] onemkl-sycl-datafitting 2025.0.1 pypi_0 pypi
[conda] onemkl-sycl-dft 2025.0.1 pypi_0 pypi
[conda] onemkl-sycl-lapack 2025.0.1 pypi_0 pypi
[conda] onemkl-sycl-rng 2025.0.1 pypi_0 pypi
[conda] onemkl-sycl-sparse 2025.0.1 pypi_0 pypi
[conda] onemkl-sycl-stats 2025.0.1 pypi_0 pypi
[conda] onemkl-sycl-vm 2025.0.1 pypi_0 pypi
[conda] pytorch-triton-xpu 3.3.0 pypi_0 pypi
[conda] torch 2.7.0+xpu pypi_0 pypi
[conda] torchaudio 2.7.0+xpu pypi_0 pypi
[conda] torchvision 0.22.0+xpu pypi_0 pypi
[conda] transformers 4.51.3 pypi_0 pypi

Activity

rkilchmn (Author) commented on Apr 30, 2025

In the code there is this additional linter directive:

import oneccl_bindings_for_pytorch  # noqa: F401  # type: ignore

Meaning: ignore F401, which is "module imported but unused".

Why is the module imported if it is not used?
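
For context, # noqa: F401 is normally attached to imports that are wanted only for their side effects; importing oneccl_bindings_for_pytorch registers the oneCCL communication backend with PyTorch, so the module is "unused" only from the linter's point of view. A hedged sketch of how such an optional side-effect import could be guarded (hypothetical wrapper, not DeepSpeed's actual code):

# Hypothetical guard for an optional side-effect import; not DeepSpeed's actual code.
# The module is imported only so its import-time registration runs, which is why
# flake8 would otherwise flag it as F401 ("module imported but unused").
try:
    import oneccl_bindings_for_pytorch  # noqa: F401  # registers the oneCCL backend
    HAS_ONECCL = True
except ModuleNotFoundError:
    HAS_ONECCL = False  # e.g. on Windows, where torch-ccl is not available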

ZhaoqiongZ (Contributor) commented on Apr 30, 2025

Hi @rkilchmn, thank you for trying this out. We don't support torch-ccl on Windows. It appears that DeepSpeed is in your environment, and DeepSpeed depends on torch-ccl.
You can either try on Ubuntu or uninstall DeepSpeed.
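
A quick way to apply the second suggestion and re-check, assuming the same conda environment (the sanity command is the one from the report above):

pip uninstall deepspeed
python -c "import torch; import intel_extension_for_pytorch as ipex; print(torch.__version__); print(ipex.__version__); [print(f'[{i}]: {torch.xpu.get_device_properties(i)}') for i in range(torch.xpu.device_count())];"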

ZhaoqiongZ self-assigned this on Apr 30, 2025