
Commit a28fe4f

Authored by ZhaoqiongZ, jingxu101, pikachutye1, and zhuyuhua-v

Update release note and known issues (#4749)

* remove basekit activate & add OCL_ICD_VENDORS setting
* add python execution example
* update CCL_ROOT setting with multi-gpu usage
* update compile bundle bat branch to release tag
* update example in getting started
* correct version of intel-level-zero-gpu
* remove deprecated api
* set version of accelerate for finetune
* fix model name
* Update deepspeed in requirements.txt
* update release note and known issue
* update ipex.llm.optimize
* update IPEX_LOG

Co-authored-by: Jing Xu <[email protected]>
Co-authored-by: jundu <[email protected]>
Co-authored-by: Ye Ting <[email protected]>
Co-authored-by: zhuyuhua-v <[email protected]>

1 parent fb5fce5 · commit a28fe4f

23 files changed: +175 −135 lines changed

README.md — 1 addition, 1 deletion

@@ -23,7 +23,7 @@ The extension can be loaded as a Python module for Python programs or linked as

 In the current technological landscape, Generative AI (GenAI) workloads and models have gained widespread attention and popularity. Large Language Models (LLMs) have emerged as the dominant models driving these GenAI applications. Starting from 2.1.0, specific optimizations for certain LLM models are introduced in the Intel® Extension for PyTorch\*. Check [LLM optimizations CPU](./examples/cpu/inference/python/llm) and [LLM optimizations GPU](./examples/gpu/llm) for details.

-### Optimized Model List
+### Validated Model List

 #### LLM Inference

csrc/gpu/utils/LogImpl.cpp — 3 additions, 2 deletions

@@ -15,7 +15,7 @@ spdlog::level::level_enum get_log_level_from_int(int level) {
     return spdlog::level::critical;
   } else {
     throw std::runtime_error(
-        "USING error log level for IPEX_LOGGING, log level should be -1 to 5, but met " +
+        "USING error log level for IPEX_LOG, log level should be -1 to 5, but met " +
         std::string{level});
   }
 }

@@ -231,7 +231,7 @@ void EventLogger::print_result(int log_level) {
   if (this->message_queue.size() >= 2) {
     auto next_time = this->message_queue.front().timestamp;
     auto next_step = this->message_queue.front().step_id;
-    // inside IPEX_LOGGING we are using nanoseconds, 1ns = 0.001us, cast to us
+    // inside IPEX_LOG we are using nanoseconds, 1ns = 0.001us, cast to us
     // here
     auto time_step = static_cast<float>((next_time - this_time) / 1000);
     log_result_with_args(

@@ -283,3 +283,4 @@ void BasicLogger::update_logger() {
   logger->set_pattern("[%c %z] [%l] [thread %t] %v");
   spdlog::set_default_logger(logger);
 }
+
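The hunks above only rename `IPEX_LOGGING` to `IPEX_LOG`; the underlying checks are unchanged. As a rough illustration, a minimal Python sketch of that logic (the level names are assumptions based on spdlog's conventions, and the helper names are hypothetical):

```python
# Hypothetical sketch of the checks in LogImpl.cpp; level names assumed
# from spdlog's conventions, not taken verbatim from the extension.
LEVELS = {-1: "disabled", 0: "trace", 1: "debug", 2: "info",
          3: "warning", 4: "error", 5: "critical"}

def get_log_level_from_int(level: int) -> str:
    # Mirrors the validation in the C++ hunk: anything outside -1..5 is an error.
    if level not in LEVELS:
        raise RuntimeError(
            "USING error log level for IPEX_LOG, log level should be "
            f"-1 to 5, but met {level}")
    return LEVELS[level]

def ns_to_us(delta_ns: int) -> float:
    # EventLogger stores timestamps in nanoseconds; 1 ns = 0.001 us.
    return delta_ns / 1000
```

The cast-to-microseconds comment in the second hunk corresponds to the `ns_to_us` division by 1000.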

docs/tutorials/api_doc.rst — 13 additions, 59 deletions

@@ -6,66 +6,12 @@ General

 .. currentmodule:: intel_extension_for_pytorch
 .. autofunction:: optimize
-.. autofunction:: optimize_transformers
+.. currentmodule:: intel_extension_for_pytorch.llm
+.. autofunction:: optimize
+.. currentmodule:: intel_extension_for_pytorch
 .. autofunction:: get_fp32_math_mode
 .. autofunction:: set_fp32_math_mode

-
-Miscellaneous
-=============
-
-.. currentmodule:: intel_extension_for_pytorch.xpu
-.. StreamContext
-.. can_device_access_peer
-.. current_blas_handle
-.. autofunction:: current_device
-.. autofunction:: current_stream
-.. default_stream
-.. autoclass:: device
-.. autofunction:: device_count
-.. autoclass:: device_of
-.. autofunction:: get_device_name
-.. autofunction:: get_device_properties
-.. get_gencode_flags
-.. get_sync_debug_mode
-.. autofunction:: init
-.. ipc_collect
-.. autofunction:: is_available
-.. autofunction:: is_initialized
-.. memory_usage
-.. autofunction:: set_device
-.. set_stream
-.. autofunction:: stream
-.. autofunction:: synchronize
-
-.. currentmodule:: intel_extension_for_pytorch.xpu.fp8.fp8
-.. autofunction:: fp8_autocast
-
-
-Random Number Generator
-=======================
-
-.. currentmodule:: intel_extension_for_pytorch.xpu
-.. autofunction:: get_rng_state
-.. autofunction:: get_rng_state_all
-.. autofunction:: set_rng_state
-.. autofunction:: set_rng_state_all
-.. autofunction:: manual_seed
-.. autofunction:: manual_seed_all
-.. autofunction:: seed
-.. autofunction:: seed_all
-.. autofunction:: initial_seed
-
-Streams and events
-==================
-
-.. currentmodule:: intel_extension_for_pytorch.xpu
-.. autoclass:: Stream
-    :members:
-.. ExternalStream
-.. autoclass:: Event
-    :members:
-
 Memory management
 =================

@@ -92,9 +38,17 @@ Memory management

 .. autofunction:: memory_stats_as_nested_dict
 .. autofunction:: reset_accumulated_memory_stats

+
+Quantization
+============
+
+.. currentmodule:: intel_extension_for_pytorch.quantization.fp8
+.. autofunction:: fp8_autocast
+
+
 C++ API
 =======

-.. doxygenenum:: xpu::FP32_MATH_MODE
+.. doxygenenum:: torch_ipex::xpu::FP32_MATH_MODE

-.. doxygenfunction:: xpu::set_fp32_math_mode
+.. doxygenfunction:: torch_ipex::xpu::set_fp32_math_mode
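The rST change above documents the API move: the deprecated `optimize_transformers` entry point is replaced by `llm.optimize`. A hedged resolution sketch (the guard and helper are hypothetical; only the two dotted API names come from the diff):

```python
import importlib.util

# API names per the rST diff above: llm.optimize supersedes the removed
# optimize_transformers entry point.
OLD_API = "intel_extension_for_pytorch.optimize_transformers"  # removed
NEW_API = "intel_extension_for_pytorch.llm.optimize"           # added

def resolve_optimize():
    """Return the new llm.optimize entry point, or None if the extension
    is not installed in this environment (so the sketch stays runnable)."""
    if importlib.util.find_spec("intel_extension_for_pytorch") is None:
        return None
    import intel_extension_for_pytorch as ipex
    return ipex.llm.optimize
```

Callers that previously used `ipex.optimize_transformers(model, ...)` would switch to the function returned here.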

docs/tutorials/features.rst — 5 additions, 17 deletions

@@ -137,19 +137,6 @@ For more detailed information, check `torch.compile for GPU <features/torch_comp

    features/torch_compile_gpu

-Simple Trace Tool (Prototype)
------------------------------
-
-Simple Trace is a built-in debugging tool that lets you control printing out the call stack for a piece of code. Once enabled, it can automatically print out verbose messages of called operators in a stack format with indenting to distinguish the context.
-
-For more detailed information, check `Simple Trace Tool <features/simple_trace.md>`_.
-
-.. toctree::
-   :hidden:
-   :maxdepth: 1
-
-   features/simple_trace
-
 Kineto Supported Profiler Tool (Prototype)
 ------------------------------------------

@@ -178,13 +165,13 @@ For more detailed information, check `Compute Engine <features/compute_engine.md

    features/compute_engine


-``IPEX_LOGGING`` (Prototype feature for debug)
-----------------------------------------------
+``IPEX_LOG`` (Prototype feature for debug)
+------------------------------------------


-``IPEX_LOGGING`` provides the capability to log verbose information from Intel® Extension for PyTorch\*. Please use ``IPEX_LOGGING`` to get the log information or trace the execution from Intel® Extension for PyTorch\*. Please continue using PyTorch\* macros such as ``TORCH_CHECK``, ``TORCH_ERROR``, etc. to get the log information from PyTorch\*.
+``IPEX_LOG`` provides the capability to log verbose information from Intel® Extension for PyTorch\*. Please use ``IPEX_LOG`` to get the log information or trace the execution from Intel® Extension for PyTorch\*. Please continue using PyTorch\* macros such as ``TORCH_CHECK``, ``TORCH_ERROR``, etc. to get the log information from PyTorch\*.

-For more detailed information, check `IPEX_LOGGING <features/ipex_log.md>`_.
+For more detailed information, check `IPEX_LOG <features/ipex_log.md>`_.

 .. toctree::
    :hidden:

@@ -193,3 +180,4 @@ For more detailed information, check `IPEX_LOGGING <features/ipex_log.md>`_.

    features/ipex_log


+

docs/tutorials/features/ipex_log.md — 4 additions, 3 deletions

@@ -1,11 +1,11 @@
-`IPEX_LOGGING` (Prototype)
+`IPEX_LOG` (Prototype)
 ==========================

 ## Introduction

-`IPEX_LOGGING` provides the capability to log verbose information from Intel® Extension for PyTorch\*. Please use `IPEX_LOGGING` to get the log information or trace the execution from Intel® Extension for PyTorch\*. Please continue using PyTorch\* macros such as `TORCH_CHECK`, `TORCH_ERROR`, etc. to get the log information from PyTorch\*.
+`IPEX_LOG` provides the capability to log verbose information from Intel® Extension for PyTorch\*. Please use `IPEX_LOG` to get the log information or trace the execution from Intel® Extension for PyTorch\*. Please continue using PyTorch\* macros such as `TORCH_CHECK`, `TORCH_ERROR`, etc. to get the log information from PyTorch\*.

-## `IPEX_LOGGING` Definition
+## `IPEX_LOG` Definition
 ### Log Level
 The supported log levels are defined as follows, default log level is `DISABLED`:

@@ -81,3 +81,4 @@ Use `torch.xpu.set_log_level(0)` to get logs to replace the previous usage in `I

 ## Replace `IPEX_VERBOSE`
 Use `torch.xpu.set_log_level(1)` to get logs to replace the previous usage in `IPEX_VERBOSE`.
+

docs/tutorials/features/simple_trace.md — 2 additions, 2 deletions

@@ -1,5 +1,5 @@
-Simple Trace Tool (Prototype)
-=============================
+Simple Trace Tool (Deprecated)
+==============================

 ## Introduction
docs/tutorials/getting_started.md — 7 additions, 6 deletions

@@ -32,9 +32,9 @@ model = ipex.optimize(model, dtype=dtype)
 ########## FP32 ############
 with torch.no_grad():
 ####### BF16 on CPU ########
-with torch.no_grad(), with torch.cpu.amp.autocast():
+with torch.no_grad(), torch.cpu.amp.autocast():
 ##### BF16/FP16 on GPU #####
-with torch.no_grad(), with torch.xpu.amp.autocast(enabled=True, dtype=dtype, cache_enabled=False):
+with torch.no_grad(), torch.xpu.amp.autocast(enabled=True, dtype=dtype, cache_enabled=False):
 ############################
 ###### Torchscript #######
 model = torch.jit.trace(model, data)

@@ -49,13 +49,14 @@ More examples, including training and usage of low precision data types are avai

 ## Execution

-Execution requires an active Intel® oneAPI environment. Suppose you have the Intel® oneAPI Base Toolkit installed in `/opt/intel/oneapi` directory, activating the environment is as simple as sourcing its environment activation bash scripts.
-
 There are some environment variables in runtime that can be used to configure executions on GPU. Please check [Advanced Configuration](./features/advanced_configuration.html#runtime-configuration) for more detailed information.

+Set `OCL_ICD_VENDORS` with default path `/etc/OpenCL/vendors`.
+Set `CCL_ROOT` if you are using multi-GPU.
+
 ```bash
-source /opt/intel/oneapi/compiler/latest/env/vars.sh
-source /opt/intel/oneapi/mkl/latest/env/vars.sh
+export OCL_ICD_VENDORS=/etc/OpenCL/vendors
+export CCL_ROOT=${CONDA_PREFIX}
 python <script>
 ```
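The same runtime settings from the updated Execution section can also be applied from Python before launching a workload. A minimal sketch, assuming the documented default paths and a conda environment:

```python
import os

# Runtime settings per the updated getting_started.md Execution section.
# OCL_ICD_VENDORS uses the documented default path.
os.environ["OCL_ICD_VENDORS"] = "/etc/OpenCL/vendors"

# CCL_ROOT is only needed for multi-GPU runs; the docs point it at the
# active conda environment prefix.
conda_prefix = os.environ.get("CONDA_PREFIX")
if conda_prefix:
    os.environ["CCL_ROOT"] = conda_prefix
```

Note that variables set this way are visible to libraries loaded afterwards in the same process and to any subprocesses it spawns.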

docs/tutorials/known_issues.md — 31 additions, 19 deletions

@@ -4,44 +4,40 @@ Troubleshooting

 ## General Usage

 - **Problem**: FP64 data type is unsupported on current platform.
   - **Cause**: FP64 is not natively supported by the [Intel® Data Center GPU Flex Series](https://www.intel.com/content/www/us/en/products/docs/discrete-gpus/data-center-gpu/flex-series/overview.html) and [Intel® Arc™ A-Series Graphics](https://www.intel.com/content/www/us/en/products/details/discrete-gpus/arc.html) platforms.
     If you run any AI workload on that platform and receive this error message, it means a kernel requires FP64 instructions that are not supported and the execution is stopped.
-- **Problem**: Runtime error `invalid device pointer` if `import horovod.torch as hvd` before `import intel_extension_for_pytorch`
+- **Problem**: Runtime error `invalid device pointer` if `import horovod.torch as hvd` before `import intel_extension_for_pytorch`.
   - **Cause**: Intel® Optimization for Horovod\* uses utilities provided by Intel® Extension for PyTorch\*. The improper import order causes Intel® Extension for PyTorch\* to be unloaded before Intel® Optimization for Horovod\* at the end of the execution and triggers this error.
   - **Solution**: Do `import intel_extension_for_pytorch` before `import horovod.torch as hvd`.
 - **Problem**: Number of dpcpp devices should be greater than zero.
-  - **Cause**: If you use Intel® Extension for PyTorch* in a conda environment, you might encounter this error. Conda also ships the libstdc++.so dynamic library file that may conflict with the one shipped in the OS.
+  - **Cause**: If you use Intel® Extension for PyTorch\* in a conda environment, you might encounter this error. Conda also ships the libstdc++.so dynamic library file that may conflict with the one shipped in the OS.
   - **Solution**: Export the `libstdc++.so` file path in the OS to an environment variable `LD_PRELOAD`.
 - **Problem**: Symbol undefined caused by `_GLIBCXX_USE_CXX11_ABI`.
   ```bash
   ImportError: undefined symbol: _ZNK5torch8autograd4Node4nameB5cxx11Ev
   ```
   - **Cause**: DPC++ does not support `_GLIBCXX_USE_CXX11_ABI=0`, Intel® Extension for PyTorch\* is always compiled with `_GLIBCXX_USE_CXX11_ABI=1`. This symbol undefined issue appears when PyTorch\* is compiled with `_GLIBCXX_USE_CXX11_ABI=0`.
   - **Solution**: Pass `export GLIBCXX_USE_CXX11_ABI=1` and compile PyTorch\* with a particular compiler which supports `_GLIBCXX_USE_CXX11_ABI=1`. We recommend using prebuilt wheels in [download server](https:// developer.intel.com/ipex-whl-stable-xpu) to avoid this issue.
-- **Problem**: Bad termination after AI model execution finishes when using Intel MPI.
-  - **Cause**: This is a random issue when the AI model (e.g. RN50 training) execution finishes in an Intel MPI environment. It is not user-friendly as the model execution ends ungracefully. It has been fixed in PyTorch* 2.3 ([#116312](https://github.com/pytorch/pytorch/commit/f657b2b1f8f35aa6ee199c4690d38a2b460387ae)).
-  - **Solution**: Add `dist.destroy_process_group()` during the cleanup stage in the model script, as described in [Getting Started with Distributed Data Parallel](https://pytorch.org/tutorials/intermediate/ddp_tutorial.html), before Intel® Extension for PyTorch* supports PyTorch* 2.3.
 - **Problem**: `-997 runtime error` when running some AI models on Intel® Arc™ A-Series GPUs.
   - **Cause**: Some of the `-997 runtime error` are actually out-of-memory errors. As Intel® Arc™ A-Series GPUs have less device memory than Intel® Data Center GPU Flex Series 170 and Intel® Data Center GPU Max Series, running some AI models on them may trigger out-of-memory errors and cause them to report failure such as `-997 runtime error` most likely. This is expected. Memory usage optimization is a work in progress to allow Intel® Arc™ A-Series GPUs to support more AI models.
 - **Problem**: Building from source for Intel® Arc™ A-Series GPUs fails on WSL2 without any error thrown.
   - **Cause**: Your system probably does not have enough RAM, so Linux kernel's Out-of-memory killer was invoked. You can verify this by running `dmesg` on bash (WSL2 terminal).
   - **Solution**: If the OOM killer had indeed killed the build process, then you can try increasing the swap-size of WSL2, and/or decreasing the number of parallel build jobs with the environment variable `MAX_JOBS` (by default, it's equal to the number of logical CPU cores. So, setting `MAX_JOBS` to 1 is a very conservative approach that would slow things down a lot).
 - **Problem**: Some workloads terminate with an error `CL_DEVICE_NOT_FOUND` after some time on WSL2.
   - **Cause**: This issue is due to the [TDR feature](https://learn.microsoft.com/en-us/windows-hardware/drivers/display/tdr-registry-keys#tdrdelay) on Windows.
   - **Solution**: Try increasing TDRDelay in your Windows Registry to a large value, such as 20 (it is 2 seconds, by default), and reboot.
 - **Problem**: Random bad termination after AI model convergence test (>24 hours) finishes.
   - **Cause**: This is a random issue when some AI model convergence test execution finishes. It is not user-friendly as the model execution ends ungracefully.
   - **Solution**: Kill the process after the convergence test finished, or use checkpoints to divide the convergence test into several phases and execute separately.
-- **Problem**: Random instability issues such as page fault or atomic access violation when executing LLM inference workloads on Intel® Data Center GPU Max series cards.
-  - **Cause**: This issue is reported on LTS driver [803.29](https://dgpu-docs.intel.com/releases/LTS_803.29_20240131.html). The root cause is under investigation.
-  - **Solution**: Use active rolling stable release driver [775.20](https://dgpu-docs.intel.com/releases/stable_775_20_20231219.html) or latest driver version to workaround.
+- **Problem**: Runtime error `munmap_chunk(): invalid pointer` when executing some scaling LLM workloads on Intel® Data Center GPU Max Series platform.
+  - **Cause**: Users targeting GPU use must set the environment variable `FI_HMEM=system` to disable GPU support in the underlying libfabric, as Intel® MPI Library 2021.13.1 will offload the GPU support instead. This avoids a potential bug in libfabric GPU initialization.
+  - **Solution**: Set the environment variable `FI_HMEM=system` to work around this issue when it is encountered.

 ## Library Dependencies

@@ -54,7 +50,7 @@ Troubleshooting

   /usr/bin/ld: cannot find -lmkl_tbb_thread
   dpcpp: error: linker command failed with exit code 1 (use -v to see invocation)
   ```

   - **Cause**: When PyTorch\* is built with oneMKL library and Intel® Extension for PyTorch\* is built without MKL library, this linker issue may occur.
   - **Solution**: Resolve the issue by setting:

@@ -66,8 +62,8 @@ Troubleshooting

   Then clean build Intel® Extension for PyTorch\*.

 - **Problem**: Undefined symbol: `mkl_lapack_dspevd`. Intel MKL FATAL ERROR: cannot load `libmkl_vml_avx512.so.2` or `libmkl_vml_def.so.2`.
   - **Cause**: This issue may occur when Intel® Extension for PyTorch\* is built with oneMKL library and PyTorch\* is not built with any MKL library. The oneMKL kernel may run into CPU backend incorrectly and trigger this issue.
   - **Solution**: Resolve the issue by installing the oneMKL library from conda:

   ```bash

@@ -87,14 +83,30 @@ Troubleshooting

   If you continue seeing similar issues for other shared object files, add the corresponding files under `${MKL_DPCPP_ROOT}/lib/intel64/` by `LD_PRELOAD`. Note that the suffix of the libraries may change (e.g. from .1 to .2), if more than one oneMKL library is installed on the system.

+- **Problem**: RuntimeError: could not create an engine.
+  - **Cause**: `OCL_ICD_VENDORS` path is wrongly set when activating an existing conda environment.
+  - **Solution**: `export OCL_ICD_VENDORS=/etc/OpenCL/vendors` after `conda activate`.
+
+- **Problem**: Issues related to CCL environment variable configuration when running distributed tasks.
+  - **Cause**: `CCL_ROOT` path is wrongly set.
+  - **Solution**: `export CCL_ROOT=${CONDA_PREFIX}`

-## Performance Issue
+- **Problem**: Issues related to MPI environment variable configuration when running distributed tasks.
+  - **Cause**: MPI environment variable configuration is not correct.
+  - **Solution**: `conda deactivate` and then `conda activate` to restore the correct MPI environment variables automatically.
+
+  ```bash
+  conda deactivate
+  conda activate
+  export OCL_ICD_VENDORS=/etc/OpenCL/vendors
+  ```
+
+## Performance Issue

 - **Problem**: Extended durations for data transfers from the host system to the device (H2D) and from the device back to the host system (D2H).
   - **Cause**: Absence of certain Dynamic Kernel Module Support (DKMS) packages on Ubuntu 22.04 or earlier versions.
   - **Solution**: For those running Ubuntu 22.04 or below, it's crucial to follow all the recommended installation procedures, including those labeled as [optional](https://dgpu-docs.intel.com/driver/client/overview.html#optional-out-of-tree-kernel-mode-driver-install). These steps are likely necessary to install the missing DKMS packages and ensure your system is functioning optimally. The Kernel Mode Driver (KMD) package that addresses this issue has been integrated into the Linux kernel for Ubuntu 23.04 and subsequent releases.
-
 ## Unit Test

 - Unit test failures on Intel® Data Center GPU Flex Series 170
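Several of the known-issues entries above boil down to environment configuration applied before a job starts. A minimal Python sketch of those workarounds (the libstdc++ path is an assumption for a typical Ubuntu x86_64 layout, not taken from the docs):

```python
import os

# FI_HMEM=system disables libfabric's own GPU path, per the munmap_chunk()
# entry above; Intel MPI Library 2021.13.1 offloads GPU support itself.
os.environ["FI_HMEM"] = "system"

# For the conda libstdc++ conflict, preload the OS copy instead.
# NOTE: the path below is an assumed Ubuntu x86_64 location, and setting
# LD_PRELOAD here only affects subprocesses launched afterwards, not the
# already-running interpreter.
system_libstdcxx = "/usr/lib/x86_64-linux-gnu/libstdc++.so.6"
if os.path.exists(system_libstdcxx):
    os.environ["LD_PRELOAD"] = system_libstdcxx
```

For the current process, `LD_PRELOAD` must be exported in the shell before Python starts, as the troubleshooting entry describes.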
