Commit eb387c5

update docs for 1.13.100 (#1379)
* update docs for 1.13.100

  update blog list
  fine tune
  update version number in installation guide
  update getting started
  update dockerfiles
  update compilation instruction
  fine tune
  fine tune
  fine tune
  fine tune

* add release notes
1 parent ef12c70 commit eb387c5

17 files changed: 187 additions & 350 deletions

docker/Dockerfile.conda

Lines changed: 5 additions & 8 deletions
@@ -1,6 +1,3 @@
-# syntax = docker/dockerfile:1
-# based on https://github.com/pytorch/pytorch/blob/master/Dockerfile
-#
 # NOTE: To build this you will need a docker version >= 19.03 and DOCKER_BUILDKIT=1
 #
 # If you do not use buildkit you are not going to have a good time
@@ -41,13 +38,13 @@ RUN curl -fsSL -v -o ~/miniconda.sh -O https://repo.anaconda.com/miniconda/Mini
     /opt/conda/bin/conda clean -ya
 
 FROM dev-base AS build
-ARG IPEX_VERSION=v1.13.0
-ARG PYTORCH_VERSION=v1.13.0
-ARG TORCHVISION_VERSION=0.14.0+cpu
-ARG TORCHAUDIO_VERSION=0.13.0+cpu
+ARG IPEX_VERSION=v1.13.100+cpu
+ARG PYTORCH_VERSION=v1.13.1
+ARG TORCHVISION_VERSION=0.14.1
+ARG TORCHAUDIO_VERSION=0.13.1
 COPY --from=conda /opt/conda /opt/conda
 RUN --mount=type=cache,target=/opt/ccache \
-    python -m pip install --no-cache-dir torch==${PYTORCH_VERSION}+cpu torchvision==${TORCHVISION_VERSION} torchaudio==${TORCHAUDIO_VERSION} -f https://download.pytorch.org/whl/torch_stable.html && \
+    python -m pip install --no-cache-dir torch==${PYTORCH_VERSION}+cpu torchvision==${TORCHVISION_VERSION}+cpu torchaudio==${TORCHAUDIO_VERSION}+cpu -f https://download.pytorch.org/whl/torch_stable.html && \
     git clone https://github.com/intel/intel-extension-for-pytorch && \
     cd intel-extension-for-pytorch && \
     git checkout ${IPEX_VERSION} && \
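This image builds the extension from source at the `IPEX_VERSION` git tag. A minimal sanity check, assuming the image built successfully, is to compare the installed version against the tag inside a container:

```python
# Minimal sketch: confirm the source build matches the checked-out tag.
# IPEX_VERSION=v1.13.100+cpu -> __version__ should start with "1.13.100".
import intel_extension_for_pytorch as ipex

tag = "v1.13.100+cpu"  # the ARG above
assert ipex.__version__.startswith(tag.lstrip("v").split("+")[0])
print("ipex", ipex.__version__)
```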

docker/Dockerfile.pip

Lines changed: 6 additions & 10 deletions
@@ -1,6 +1,3 @@
-# syntax = docker/dockerfile:1
-# based on https://github.com/pytorch/pytorch/blob/master/Dockerfile
-#
 # NOTE: To build this you will need a docker version >= 19.03 and DOCKER_BUILDKIT=1
 #
 # If you do not use buildkit you are not going to have a good time
@@ -30,15 +27,14 @@ RUN ${PYTHON} -m pip --no-cache-dir install --upgrade \
 # Some TF tools expect a "python" binary
 RUN ln -s $(which ${PYTHON}) /usr/local/bin/python
 
-ARG IPEX_VERSION=1.13.0
-ARG PYTORCH_VERSION=1.13.0+cpu
-ARG TORCHAUDIO_VERSION=0.13.0
-ARG TORCHVISION_VERSION=0.14.0+cpu
+ARG IPEX_VERSION=1.13.100
+ARG PYTORCH_VERSION=1.13.1
+ARG TORCHAUDIO_VERSION=0.13.1
+ARG TORCHVISION_VERSION=0.14.1
 ARG TORCH_CPU_URL=https://download.pytorch.org/whl/cpu/torch_stable.html
-ARG IPEX_URL=https://software.intel.com/ipex-whl-stable
 
 RUN \
     python -m pip install --no-cache-dir \
-    torch==${PYTORCH_VERSION} torchvision==${TORCHVISION_VERSION} torchaudio==${TORCHAUDIO_VERSION} -f ${TORCH_CPU_URL} && \
+    torch==${PYTORCH_VERSION}+cpu torchvision==${TORCHVISION_VERSION}+cpu torchaudio==${TORCHAUDIO_VERSION}+cpu -f ${TORCH_CPU_URL} && \
     python -m pip install --no-cache-dir \
-    intel_extension_for_pytorch==${IPEX_VERSION} -f ${IPEX_URL}
+    intel_extension_for_pytorch==${IPEX_VERSION}
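With the updated pins, the pip image should end up with the same wheel set as the conda image. A quick check, a sketch assuming `python` is available inside the built container, prints the installed versions for comparison against the ARGs above:

```python
# Sketch: print installed versions to compare against the pinned ARGs
# (expected: torch 1.13.1+cpu, torchvision 0.14.1+cpu, torchaudio 0.13.1+cpu,
# intel_extension_for_pytorch 1.13.100).
import torch
import torchvision
import torchaudio
import intel_extension_for_pytorch as ipex

for name, mod in [("torch", torch), ("torchvision", torchvision),
                  ("torchaudio", torchaudio), ("ipex", ipex)]:
    print(f"{name:12s} {mod.__version__}")
```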

docs/index.rst

Lines changed: 9 additions & 20 deletions
@@ -11,44 +11,33 @@ Intel® Extension for PyTorch* provides optimizations for both eager mode and gr
 
 The extension can be loaded as a Python module for Python programs or linked as a C++ library for C++ programs. In Python scripts users can enable it dynamically by importing `intel_extension_for_pytorch`.
 
--------------------------------------
+Intel® Extension for PyTorch* is structured as shown in the following figure:
 
-Intel® Extension for PyTorch* for CPU is structured as shown in the following figure:
-
-.. figure:: ../images/intel_extension_for_pytorch_structure_cpu.png
+.. figure:: ../images/intel_extension_for_pytorch_structure.png
    :width: 800
    :align: center
-   :alt: Structure of Intel® Extension for PyTorch* for CPU
-
-
-PyTorch components are depicted with white boxes while Intel Extensions are with blue boxes. Extra performance of the extension is delivered via both custom addons and overriding existing PyTorch components. In eager mode, the PyTorch frontend is extended with custom Python modules (such as fusion modules), optimal optimizers and INT8 quantization API. Further performance boost is available by converting the eager-mode model into graph mode via the extended graph fusion passes. Intel® Extension for PyTorch* dispatches the operators into their underlying kernels automatically based on ISA that it detects and leverages vectorization and matrix acceleration units available in Intel hardware, as much as possible. oneDNN library is used for computation intensive operations. Intel Extension for PyTorch runtime extension brings better efficiency with finer-grained thread runtime control and weight sharing.
+   :alt: Architecture of Intel® Extension for PyTorch*
 
-Intel® Extension for PyTorch* for CPU has been released as an open–source project at `Github master branch <https://github.com/intel/intel-extension-for-pytorch/tree/master>`_. Check `CPU tutorial <https://intel.github.io/intel-extension-for-pytorch/cpu/latest/>`_ for detailed information of Intel® Extension for PyTorch* for Intel® CPUs.
+|
 
--------------------------------------
+Optimizations for both eager mode and graph mode contribute to extra performance accelerations with the extension. In eager mode, the PyTorch frontend is extended with custom Python modules (such as fusion modules), optimal optimizers, and INT8 quantization APIs. Further performance boost is available by converting the eager-mode model into graph mode via extended graph fusion passes. In the graph mode, the fusions reduce operator/kernel invocation overheads, and thus increase performance. On CPU, Intel® Extension for PyTorch* dispatches the operators into their underlying kernels automatically based on ISA that it detects and leverages vectorization and matrix acceleration units available on Intel hardware. Intel® Extension for PyTorch* runtime extension brings better efficiency with finer-grained thread runtime control and weight sharing. On GPU, optimized operators and kernels are implemented and registered through PyTorch dispatching mechanism. These operators and kernels are accelerated from native vectorization feature and matrix calculation feature of Intel GPU hardware. Intel® Extension for PyTorch* for GPU utilizes the `DPC++ <https://github.com/intel/llvm#oneapi-dpc-compiler>`_ compiler that supports the latest `SYCL* <https://registry.khronos.org/SYCL/specs/sycl-2020/html/sycl-2020.html>`_ standard and also a number of extensions to the SYCL* standard, which can be found in the `sycl/doc/extensions <https://github.com/intel/llvm/tree/sycl/sycl/doc/extensions>`_ directory.
 
-Intel® Extension for PyTorch* for GPU is structured as shown in the following figure:
+.. note:: GPU features are not included in CPU only packages.
 
-.. figure:: ../images/intel_extension_for_pytorch_structure_gpu.svg
-   :width: 800
-   :align: center
-   :alt: Architecture of Intel® Extension for PyTorch* for GPU
-
-Intel® Extension for PyTorch* for GPU utilizes the `DPC++ <https://github.com/intel/llvm#oneapi-dpc-compiler>`_ compiler that supports the latest `SYCL* <https://registry.khronos.org/SYCL/specs/sycl-2020/html/sycl-2020.html>`_ standard and also a number of extensions to the SYCL* standard, which can be found in the `sycl/doc/extensions <https://github.com/intel/llvm/tree/sycl/sycl/doc/extensions>`_ directory. Intel® Extension for PyTorch* also integrates `oneDNN <https://github.com/oneapi-src/oneDNN>`_ and `oneMKL <https://github.com/oneapi-src/oneMKL>`_ libraries and provides kernels based on that. The oneDNN library is used for computation intensive operations. The oneMKL library is used for fundamental mathematical operations.
-
-Intel® Extension for PyTorch* for GPU has been released as an open–source project on `GitHub xpu-master branch <https://github.com/intel/intel-extension-for-pytorch/tree/xpu-master>`_. Check `GPU tutorial <https://intel.github.io/intel-extension-for-pytorch/xpu/latest/>`_ for detailed information of Intel® Extension for PyTorch* for Intel® GPUs.
+Intel® Extension for PyTorch* has been released as an open–source project at `Github <https://github.com/intel/intel-extension-for-pytorch>`_. Source code is available at `xpu-master branch <https://github.com/intel/intel-extension-for-pytorch/tree/xpu-master>`_. Check `the tutorial <https://intel.github.io/intel-extension-for-pytorch/xpu/latest/>`_ for detailed information. Due to different development schedule, optimizations for CPU only might have a newer code base. Source code is available at `master branch <https://github.com/intel/intel-extension-for-pytorch/tree/master>`_. Check `the CPU tutorial <https://intel.github.io/intel-extension-for-pytorch/cpu/latest/>`_ for detailed information on the CPU side.
 
 .. toctree::
    :hidden:
    :maxdepth: 1
 
+   tutorials/getting_started
    tutorials/features
    tutorials/releases
    tutorials/installation
    tutorials/examples
-   tutorials/performance
    tutorials/api_doc
    tutorials/performance_tuning
+   tutorials/performance
    tutorials/blogs_publications
    tutorials/contribution
    tutorials/license
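The rewritten page describes enabling the extension by importing it, then optimizing in eager mode and optionally converting to graph mode. A minimal CPU sketch of that workflow (the model and input are illustrative placeholders, not from the docs):

```python
# Sketch of the eager-mode and graph-mode flow described above (CPU package).
import torch
import torchvision.models as models
import intel_extension_for_pytorch as ipex  # importing enables the extension

model = models.resnet50(weights=None).eval()
data = torch.rand(1, 3, 224, 224)

# Eager mode: apply the extension's operator optimizations.
model = ipex.optimize(model, dtype=torch.float32)

# Graph mode: TorchScript conversion lets the extended fusion passes apply.
with torch.no_grad():
    traced = torch.jit.trace(model, data)
    traced = torch.jit.freeze(traced)
    print(traced(data).shape)
```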

docs/tutorials/api_doc.rst

Lines changed: 6 additions & 1 deletion
@@ -6,9 +6,14 @@ General
 
 .. currentmodule:: intel_extension_for_pytorch
 .. autofunction:: optimize
-.. autofunction:: enable_onednn_fusion
 .. autoclass:: verbose
 
+Graph Optimization
+******************
+
+.. currentmodule:: intel_extension_for_pytorch
+.. autofunction:: enable_onednn_fusion
+
 Quantization
 ************
 
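For context, `enable_onednn_fusion`, now documented under the new Graph Optimization heading, is a global toggle applied before TorchScript conversion. A minimal sketch (the toy module is illustrative, not from the docs):

```python
# Sketch: toggle oneDNN graph fusion globally, then convert to graph mode.
import torch
import intel_extension_for_pytorch as ipex

ipex.enable_onednn_fusion(True)  # fusion is on by default; False disables it

model = torch.nn.Sequential(torch.nn.Linear(4, 4), torch.nn.ReLU()).eval()
with torch.no_grad():
    traced = torch.jit.trace(model, torch.rand(2, 4))
    traced = torch.jit.freeze(traced)
    print(traced(torch.rand(2, 4)).shape)
```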

docs/tutorials/blogs_publications.md

Lines changed: 31 additions & 6 deletions
@@ -1,9 +1,34 @@
 Blogs & Publications
 ====================
 
-* [Accelerating PyTorch with Intel® Extension for PyTorch\*](https://medium.com/pytorch/accelerating-pytorch-with-intel-extension-for-pytorch-3aef51ea3722)
-* [Intel and Facebook Accelerate PyTorch Performance with 3rd Gen Intel® Xeon® Processors and Intel® Deep Learning Boost’s new BFloat16 capability](https://www.intel.com/content/www/us/en/artificial-intelligence/posts/intel-facebook-boost-bfloat16.html)
-* [Accelerate PyTorch with the extension and oneDNN using Intel BF16 Technology](https://medium.com/pytorch/accelerate-pytorch-with-ipex-and-onednn-using-intel-bf16-technology-dca5b8e6b58f)
-  * *Note*: APIs mentioned in it are deprecated.
-* [Scaling up BERT-like model Inference on modern CPU - Part 1 by the launcher of the extension](https://huggingface.co/blog/bert-cpu-scaling-part-1)
-* [KT Optimizes Performance for Personalized Text-to-Speech](https://community.intel.com/t5/Blogs/Tech-Innovation/Artificial-Intelligence-AI/KT-Optimizes-Performance-for-Personalized-Text-to-Speech/post/1337757)
+* [Accelerating PyTorch Transformers with Intel Sapphire Rapids, Part 1, Jan 2023](https://huggingface.co/blog/intel-sapphire-rapids)
+* [Intel® Deep Learning Boost - Improve Inference Performance of BERT Base Model from Hugging Face for Network Security Technology Guide, Jan 2023](https://networkbuilders.intel.com/solutionslibrary/intel-deep-learning-boost-improve-inference-performance-of-bert-base-model-from-hugging-face-for-network-security-technology-guide)
+* [Scaling inference on CPUs with TorchServe, PyTorch Conference, Dec 2022](https://www.youtube.com/watch?v=066_Jd6cwZg)
+* [What is New in Intel Extension for PyTorch, PyTorch Conference, Dec 2022](https://www.youtube.com/watch?v=SE56wFXdvP4&t=1s)
+* [Accelerating PyG on Intel CPUs, Dec 2022](https://www.pyg.org/ns-newsarticle-accelerating-pyg-on-intel-cpus)
+* [Accelerating PyTorch Deep Learning Models on Intel XPUs, Dec, 2022](https://www.oneapi.io/event-sessions/accelerating-pytorch-deep-learning-models-on-intel-xpus-2-ai-hpc-2022/)
+* [Introducing the Intel® Extension for PyTorch\* for GPUs, Dec 2022](https://www.intel.com/content/www/us/en/developer/articles/technical/introducing-intel-extension-for-pytorch-for-gpus.html)
+* [PyTorch Stable Diffusion Using Hugging Face and Intel Arc, Nov 2022](https://towardsdatascience.com/pytorch-stable-diffusion-using-hugging-face-and-intel-arc-77010e9eead6)
+* [PyTorch 1.13: New Potential for AI Developers to Enhance Model Performance and Accuracy, Nov 2022](https://www.intel.com/content/www/us/en/developer/articles/technical/pytorch-1-13-new-potential-for-ai-developers.html)
+* [Easy Quantization in PyTorch Using Fine-Grained FX, Sep 2022](https://medium.com/intel-analytics-software/easy-quantization-in-pytorch-using-fine-grained-fx-80be2c4bc2d6)
+* [Empowering PyTorch on Intel® Xeon® Scalable processors with Bfloat16, Aug 2022](https://pytorch.org/blog/empowering-pytorch-on-intel-xeon-scalable-processors-with-bfloat16/)
+* [Accelerating PyTorch Vision Models with Channels Last on CPU, Aug 2022](https://pytorch.org/blog/accelerating-pytorch-vision-models-with-channels-last-on-cpu/)
+* [One-Click Enabling of Intel Neural Compressor Features in PyTorch Scripts, Aug 2022](https://medium.com/intel-analytics-software/one-click-enable-intel-neural-compressor-features-in-pytorch-scripts-5d4e31f5a22b)
+* [Increase PyTorch Inference Throughput by 4x, Jul 2022](https://www.intel.com/content/www/us/en/developer/articles/technical/increase-pytorch-inference-throughput-by-4x.html)
+* [PyTorch Inference Acceleration with Intel® Neural Compressor, Jun 2022](https://medium.com/pytorch/pytorch-inference-acceleration-with-intel-neural-compressor-842ef4210d7d)
+* [Accelerating PyTorch with Intel® Extension for PyTorch, May 2022](https://medium.com/pytorch/accelerating-pytorch-with-intel-extension-for-pytorch-3aef51ea3722)
+* [Grokking PyTorch Intel CPU performance from first principles (parts 1), Apr 2022](https://pytorch.org/tutorials/intermediate/torchserve_with_ipex.html)
+* [Grokking PyTorch Intel CPU performance from first principles (parts 2), Apr 2022](https://pytorch.org/tutorials/intermediate/torchserve_with_ipex_2.html)
+* [Grokking PyTorch Intel CPU performance from first principles, Apr 2022](https://medium.com/pytorch/grokking-pytorch-intel-cpu-performance-from-first-principles-7e39694412db)
+* [KT Optimizes Performance for Personalized Text-to-Speech, Nov 2021](https://community.intel.com/t5/Blogs/Tech-Innovation/Artificial-Intelligence-AI/KT-Optimizes-Performance-for-Personalized-Text-to-Speech/post/1337757)
+* [Accelerating PyTorch distributed fine-tuning with Intel technologies, Nov 2021](https://huggingface.co/blog/accelerating-pytorch)
+* [Scaling up BERT-like model Inference on modern CPU - parts 1, Apr 2021](https://huggingface.co/blog/bert-cpu-scaling-part-1)
+* [Scaling up BERT-like model Inference on modern CPU - parts 2, Nov 2021](https://huggingface.co/blog/bert-cpu-scaling-part-2)
+* [NAVER: Low-Latency Machine-Learning Inference](https://www.intel.com/content/www/us/en/customer-spotlight/stories/naver-ocr-customer-story.html)
+* [Intel® Extensions for PyTorch, Feb 2021](https://pytorch.org/tutorials/recipes/recipes/intel_extension_for_pytorch.html)
+* [Optimizing DLRM by using PyTorch with oneCCL Backend, Feb 2021](https://pytorch.medium.com/optimizing-dlrm-by-using-pytorch-with-oneccl-backend-9f85b8ef6929)
+* [Accelerate PyTorch with IPEX and oneDNN using Intel BF16 Technology, Feb 2021](https://medium.com/pytorch/accelerate-pytorch-with-ipex-and-onednn-using-intel-bf16-technology-dca5b8e6b58f)
+  *Note*: APIs mentioned in it are deprecated.
+* [Intel and Facebook Accelerate PyTorch Performance with 3rd Gen Intel® Xeon® Processors and Intel® Deep Learning Boost’s new BFloat16 capability, Jun 2020](https://community.intel.com/t5/Blogs/Tech-Innovation/Artificial-Intelligence-AI/Intel-and-Facebook-Accelerate-PyTorch-Performance-with-3rd-Gen/post/1335659)
+* [Intel and Facebook\* collaborate to boost PyTorch\* CPU performance, Apr 2019](https://www.intel.com/content/www/us/en/developer/articles/case-study/intel-and-facebook-collaborate-to-boost-pytorch-cpu-performance.html)
+* [Intel and Facebook\* Collaborate to Boost Caffe\*2 Performance on Intel CPU’s, Apr 2017](https://www.intel.com/content/www/us/en/developer/articles/technical/intel-and-facebook-collaborate-to-boost-caffe2-performance-on-intel-cpu-s.html)

docs/tutorials/examples.md

Lines changed: 2 additions & 0 deletions
@@ -3,6 +3,8 @@ Examples
 
 **_NOTE:_** Check individual feature page for examples of feature usage. All features are listed in the [feature page](./features.rst).
 
+**_NOTE:_** Feature examples and examples below are available at Github source tree, under `examples` directory.
+
 ## Training
 
 ### Single-instance Training
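The referenced `examples` directory includes single-instance training scripts; in spirit they follow this pattern (a sketch with toy data, not the repository's exact code):

```python
# Sketch of single-instance training with ipex.optimize (toy data).
import torch
import intel_extension_for_pytorch as ipex

model = torch.nn.Linear(8, 2)
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
model.train()

# For training, pass the optimizer; both come back optimized.
model, optimizer = ipex.optimize(model, optimizer=optimizer)

for _ in range(3):
    data, target = torch.rand(16, 8), torch.randint(0, 2, (16,))
    optimizer.zero_grad()
    loss = criterion(model(data), target)
    loss.backward()
    optimizer.step()
print("loss:", loss.item())
```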
