Describe the bug
While running a custom Whisper model on Lunar Lake (Windows) using IPEX XPU, I am getting the following error:
"INFO:main:Extracted Whisper features with shape (1, 80, 3000).
INFO:main:Running inference.
INFO:main:Passing tensor POSITIONALLY to model (last resort).
ERROR:main:Inference error: Input type (torch.FloatTensor) and weight type (XPUFloatType) should be the same or input should be a MKLDNN tensor and weight is a dense tensor
Traceback (most recent call last):
File "C:\Users\sdp\Desktop\latest\ipex_inference_test.py", line 191, in infer
outputs = self._model(input_tensor)
File "C:\Users\sdp\Desktop\latest\ipex_env\lib\site-packages\torch\nn\modules\module.py", line 1739, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "C:\Users\sdp\Desktop\latest\ipex_env\lib\site-packages\torch\nn\modules\module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
File "C:\Users\sdp\Desktop\latest\ipex_env\lib\site-packages\onnx2pytorch\convert\model.py", line 224, in forward
activations[out_op_id] = op(*in_activations)
File "C:\Users\sdp\Desktop\latest\ipex_env\lib\site-packages\torch\nn\modules\module.py", line 1739, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "C:\Users\sdp\Desktop\latest\ipex_env\lib\site-packages\torch\nn\modules\module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
File "C:\Users\sdp\Desktop\latest\ipex_env\lib\site-packages\torch\nn\modules\conv.py", line 375, in forward
return self._conv_forward(input, self.weight, self.bias)
File "C:\Users\sdp\Desktop\latest\ipex_env\lib\site-packages\torch\nn\modules\conv.py", line 370, in _conv_forward
return F.conv1d(
RuntimeError: Input type (torch.FloatTensor) and weight type (XPUFloatType) should be the same or input should be a MKLDNN tensor and weight is a dense tensor "
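For reference, a minimal stand-alone snippet that reproduces this class of error: a layer whose weights live on 'xpu' is fed a tensor that was left on the CPU. The layer and shapes below are placeholders, not the actual converted model.

```python
import torch
import intel_extension_for_pytorch as ipex  # noqa: F401  # enables the XPU backend

# Stand-in layer with weights on 'xpu' (hypothetical dimensions)
conv = torch.nn.Conv1d(80, 384, kernel_size=3, padding=1).to("xpu")

features = torch.randn(1, 80, 3000)  # stays on the CPU -> torch.FloatTensor

out = conv(features)
# RuntimeError: Input type (torch.FloatTensor) and weight type (XPUFloatType)
# should be the same ...
```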
Versions
Using intel_extension_for_pytorch==2.6.10+xpu and Python 3.10.
Activity
xiguiw commented on May 14, 2025
@varshasrighakollapu
What data type do you set for the input data and the model?
From the log, it seems the input data type is torch.FloatTensor while the weight type is XPUFloatType.
Could you set both the input data and the model weights to torch.float or torch.bfloat16?
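A minimal sketch of that suggestion, with a stand-in Conv1d in place of the converted Whisper model (names, dtype choice, and shapes are illustrative only): cast both sides to one dtype and one device before calling the model.

```python
import torch
import intel_extension_for_pytorch as ipex  # noqa: F401  # enables the XPU backend

dtype = torch.bfloat16  # or torch.float32; the point is to use the same dtype on both sides

# Stand-in for the onnx2pytorch-converted model
model = torch.nn.Conv1d(80, 384, kernel_size=3, padding=1).to("xpu", dtype=dtype)

# Move the extracted features to the same device and dtype
input_tensor = torch.randn(1, 80, 3000).to("xpu", dtype=dtype)

with torch.no_grad():
    outputs = model(input_tensor)  # dtype and device now match on both sides
```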
xiguiw commented on May 20, 2025
@varshasrighakollapu
Could you show the code to reproduce the issue?
It's necessary to move both the input data and the model to 'xpu'.
Please refer to the example:
intel-extension-for-pytorch/examples/gpu/llm/inference/run_generation.py, line 347 (commit 54509eb)
intel-extension-for-pytorch/examples/gpu/llm/inference/run_generation.py, line 205 (commit 54509eb)
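A hedged sketch of that pattern applied to an onnx2pytorch-converted model, moving both the model and the input features to 'xpu'. The ONNX file path, shapes, and variable names are assumptions for illustration, not excerpts from run_generation.py or the reporter's script.

```python
import onnx
import torch
import intel_extension_for_pytorch as ipex  # noqa: F401  # enables the XPU backend
from onnx2pytorch import ConvertModel

device = torch.device("xpu")

# "whisper_encoder.onnx" is a placeholder path, not the reporter's file
model = ConvertModel(onnx.load("whisper_encoder.onnx")).eval().to(device)

# The input features must live on the same device as the model weights
input_tensor = torch.zeros(1, 80, 3000, dtype=torch.float32, device=device)

with torch.inference_mode():
    outputs = model(input_tensor)
```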