
Commit 2562a35

Jingxu10/readme 113 (#1275)

* update readme for gpu propagation
* remove comments from docs/index.rst

1 parent 8bf15a2 commit 2562a35

File tree

2 files changed: +10 -11 lines changed

README.md

Lines changed: 5 additions & 3 deletions
@@ -1,12 +1,14 @@
 # Intel® Extension for PyTorch\*
 
-Intel® Extension for PyTorch\* extends PyTorch with up-to-date features optimizations for an extra performance boost on Intel hardware. Example optimizations use AVX-512 Vector Neural Network Instructions (AVX512 VNNI) and Intel® Advanced Matrix Extensions (Intel® AMX). Over time, most of these optimizations will be included directly into stock PyTorch releases. More importantly, Intel® Extension for PyTorch\* provides easy GPU acceleration for Intel® discrete graphics cards with PyTorch\*.
+Intel® Extension for PyTorch\* extends PyTorch\* with up-to-date features optimizations for an extra performance boost on Intel hardware. Optimizations take advantage of AVX-512 Vector Neural Network Instructions (AVX512 VNNI) and Intel® Advanced Matrix Extensions (Intel® AMX) on Intel CPUs as well as Intel X<sup>e</sup> Matrix Extensions (XMX) AI engines on Intel discrete GPUs. Moreover, through PyTorch\* `xpu` device, Intel® Extension for PyTorch\* provides easy GPU acceleration for Intel discrete GPUs with PyTorch\*.
 
-Intel® Extension for PyTorch\* provides optimizations for both eager mode and graph mode, however, compared to eager mode, graph mode in PyTorch normally yields better performance from optimization techniques such as operation fusion, and Intel® Extension for PyTorch\* amplified them with more comprehensive graph optimizations. Therefore we recommended you to take advantage of Intel® Extension for PyTorch\* with [TorchScript](https://pytorch.org/docs/stable/jit.html) whenever your workload supports it. You could choose to run with `torch.jit.trace()` function or `torch.jit.script()` function, but based on our evaluation, `torch.jit.trace()` supports more workloads so we recommend you to use `torch.jit.trace()` as your first choice. On Intel® graphics cards, through registering feature implementations into PyTorch\* as torch.xpu, PyTorch\* scripts work on Intel® discrete graphics cards.
+Intel® Extension for PyTorch\* provides optimizations for both eager mode and graph mode, however, compared to eager mode, graph mode in PyTorch\* normally yields better performance from optimization techniques, such as operation fusion. Intel® Extension for PyTorch\* amplifies them with more comprehensive graph optimizations. Therefore we recommend you to take advantage of Intel® Extension for PyTorch\* with [TorchScript](https://pytorch.org/docs/stable/jit.html) whenever your workload supports it. You could choose to run with `torch.jit.trace()` function or `torch.jit.script()` function, but based on our evaluation, `torch.jit.trace()` supports more workloads so we recommend you to use `torch.jit.trace()` as your first choice.
 
 The extension can be loaded as a Python module for Python programs or linked as a C++ library for C++ programs. In Python scripts users can enable it dynamically by importing `intel_extension_for_pytorch`.
 
-More detailed tutorials are available at **Intel® Extension for PyTorch\* online document website**. Both [CPU version](https://intel.github.io/intel-extension-for-pytorch/cpu/latest/) and [XPU/GPU version](https://intel.github.io/intel-extension-for-pytorch/xpu/latest/) are available.
+More detailed tutorials are available at **Intel® Extension for PyTorch\* [online document website](https://intel.github.io/intel-extension-for-pytorch/cpu/latest/)**.
+
+**Note**: Check [here](https://intel.github.io/intel-extension-for-pytorch/xpu/latest/) for detailed tutorials of Intel® Extension for PyTorch\* for Intel® GPUs. Source code are available at the [xpu-master branch](https://github.com/intel/intel-extension-for-pytorch/tree/xpu-master).
 
 ## Installation
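The README text changed above points at the extension's documented CPU usage pattern: import `intel_extension_for_pytorch` to enable the optimizations, then capture the model with `torch.jit.trace()`. The following sketch is not part of this commit; it assumes the CPU package of the extension and torchvision are installed, and the ResNet-50 model and random input are illustrative placeholders.

```python
import torch
import torchvision.models as models

# Importing the module dynamically enables the extension, as the README states.
import intel_extension_for_pytorch as ipex

model = models.resnet50().eval()      # placeholder model
data = torch.rand(1, 3, 224, 224)     # placeholder input

# Apply the extension's operator and graph optimizations to the model.
model = ipex.optimize(model)

# The README recommends torch.jit.trace() over torch.jit.script()
# because it supports more workloads.
with torch.no_grad():
    traced = torch.jit.trace(model, data)
    traced = torch.jit.freeze(traced)
    output = traced(data)
```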

docs/index.rst

Lines changed: 5 additions & 8 deletions
@@ -1,22 +1,19 @@
-.. Sphinx Test documentation master file, created by
-   sphinx-quickstart on Sun Sep 19 17:30:45 2021.
-   You can adapt this file completely to your liking, but it should at least
-   contain the root `toctree` directive.
-
 .. meta::
    :description: This website introduces Intel® Extension for PyTorch*
    :keywords: Intel optimization, PyTorch, Intel® Extension for PyTorch*
 
 Welcome to Intel® Extension for PyTorch* Documentation
 ######################################################
 
-Intel® Extension for PyTorch* extends PyTorch with up-to-date features optimizations for an extra performance boost on Intel hardware. Example optimizations use AVX-512 Vector Neural Network Instructions (AVX512 VNNI) and Intel® Advanced Matrix Extensions (Intel® AMX). Over time, most of these optimizations will be included directly into stock PyTorch releases.
+Intel® Extension for PyTorch* extends PyTorch* with up-to-date features optimizations for an extra performance boost on Intel hardware. Optimizations take advantage of AVX-512 Vector Neural Network Instructions (AVX512 VNNI) and Intel® Advanced Matrix Extensions (Intel® AMX) on Intel CPUs as well as Intel X\ :sup:`e`\ Matrix Extensions (XMX) AI engines on Intel discrete GPUs. Moreover, through PyTorch* `xpu` device, Intel® Extension for PyTorch* provides easy GPU acceleration for Intel discrete GPUs with PyTorch*.
 
-Intel® Extension for PyTorch* provides optimizations for both eager mode and graph mode, however, compared to eager mode, graph mode in PyTorch normally yields better performance from optimization techniques such as operation fusion, and Intel® Extension for PyTorch* amplified them with more comprehensive graph optimizations. Therefore we recommended you to take advantage of Intel® Extension for PyTorch* with `TorchScript <https://pytorch.org/docs/stable/jit.html>`_ whenever your workload supports it. You could choose to run with `torch.jit.trace()` function or `torch.jit.script()` function, but based on our evaluation, `torch.jit.trace()` supports more workloads so we recommend you to use `torch.jit.trace()` as your first choice. More detailed information can be found at `pytorch.org website <https://pytorch.org/tutorials/beginner/Intro_to_TorchScript_tutorial.html#tracing-modules>`_.
+Intel® Extension for PyTorch* provides optimizations for both eager mode and graph mode, however, compared to eager mode, graph mode in PyTorch* normally yields better performance from optimization techniques, such as operation fusion. Intel® Extension for PyTorch* amplifies them with more comprehensive graph optimizations. Therefore we recommend you to take advantage of Intel® Extension for PyTorch* with `TorchScript <https://pytorch.org/docs/stable/jit.html>`_ whenever your workload supports it. You could choose to run with `torch.jit.trace()` function or `torch.jit.script()` function, but based on our evaluation, `torch.jit.trace()` supports more workloads so we recommend you to use `torch.jit.trace()` as your first choice.
 
 The extension can be loaded as a Python module for Python programs or linked as a C++ library for C++ programs. In Python scripts users can enable it dynamically by importing `intel_extension_for_pytorch`.
 
-Intel® Extension for PyTorch* is structured as shown in the following figure:
+**Note**: Check `here <https://intel.github.io/intel-extension-for-pytorch/xpu/latest/>`_ for detailed tutorials of Intel® Extension for PyTorch* for Intel® GPUs. Source code are available at the `xpu-master branch <https://github.com/intel/intel-extension-for-pytorch/tree/xpu-master>`_.
+
+Intel® Extension for PyTorch* for CPU is structured as shown in the following figure:
 
 .. figure:: ../images/intel_extension_for_pytorch_structure.png
    :width: 800
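Both changed files introduce the PyTorch* `xpu` device as the route to Intel discrete GPU acceleration. The sketch below is not part of the commit and assumes the GPU (xpu-master) build of the extension and an Intel discrete GPU are available; the two-layer model and random input are placeholders.

```python
import torch
import torch.nn as nn

# Importing the extension registers the "xpu" device with PyTorch*.
import intel_extension_for_pytorch as ipex

model = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 8)).eval()
data = torch.rand(16, 64)

# Offload the model and data to the Intel GPU, mirroring the usual "cuda" flow.
model = model.to("xpu")
data = data.to("xpu")

# Let the extension apply its device-side optimizations.
model = ipex.optimize(model)

with torch.no_grad():
    output = model(data)
```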
