
Commit 216d20b

r2.8 doc updates (#3757)
Co-authored-by: Chunyuan WU <[email protected]>
1 parent f050f1a commit 216d20b

10 files changed: +22 additions, −26 deletions

README.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -5,7 +5,7 @@ Intel® Extension for PyTorch\*
 
 </div>
 
-**CPU** [💻main branch](https://github.com/intel/intel-extension-for-pytorch/tree/main)&nbsp;&nbsp;&nbsp;|&nbsp;&nbsp;&nbsp;[🌱Quick Start](https://intel.github.io/intel-extension-for-pytorch/cpu/latest/tutorials/getting_started.html)&nbsp;&nbsp;&nbsp;|&nbsp;&nbsp;&nbsp;[📖Documentations](https://intel.github.io/intel-extension-for-pytorch/cpu/latest/)&nbsp;&nbsp;&nbsp;|&nbsp;&nbsp;&nbsp;[🏃Installation](https://intel.github.io/intel-extension-for-pytorch/index.html#installation?platform=cpu&version=v2.7.0%2Bcpu)&nbsp;&nbsp;&nbsp;|&nbsp;&nbsp;&nbsp;[💻LLM Example](https://github.com/intel/intel-extension-for-pytorch/tree/main/examples/cpu/llm) <br>
+**CPU** [💻main branch](https://github.com/intel/intel-extension-for-pytorch/tree/main)&nbsp;&nbsp;&nbsp;|&nbsp;&nbsp;&nbsp;[🌱Quick Start](https://intel.github.io/intel-extension-for-pytorch/cpu/latest/tutorials/getting_started.html)&nbsp;&nbsp;&nbsp;|&nbsp;&nbsp;&nbsp;[📖Documentations](https://intel.github.io/intel-extension-for-pytorch/cpu/latest/)&nbsp;&nbsp;&nbsp;|&nbsp;&nbsp;&nbsp;[🏃Installation](https://intel.github.io/intel-extension-for-pytorch/index.html#installation?platform=cpu&version=v2.8.0%2Bcpu)&nbsp;&nbsp;&nbsp;|&nbsp;&nbsp;&nbsp;[💻LLM Example](https://github.com/intel/intel-extension-for-pytorch/tree/release/2.8/examples/cpu/llm) <br>
 **GPU** [💻main branch](https://github.com/intel/intel-extension-for-pytorch/tree/xpu-main)&nbsp;&nbsp;&nbsp;|&nbsp;&nbsp;&nbsp;[🌱Quick Start](https://intel.github.io/intel-extension-for-pytorch/xpu/latest/tutorials/getting_started.html)&nbsp;&nbsp;&nbsp;|&nbsp;&nbsp;&nbsp;[📖Documentations](https://intel.github.io/intel-extension-for-pytorch/xpu/latest/)&nbsp;&nbsp;&nbsp;|&nbsp;&nbsp;&nbsp;[🏃Installation](https://intel.github.io/intel-extension-for-pytorch/index.html#installation?platform=gpu)&nbsp;&nbsp;&nbsp;|&nbsp;&nbsp;&nbsp;[💻LLM Example](https://github.com/intel/intel-extension-for-pytorch/tree/xpu-main/examples/gpu/llm)<br>
 
 Intel® Extension for PyTorch\* extends PyTorch\* with up-to-date features optimizations for an extra performance boost on Intel hardware. Optimizations take advantage of Intel® Advanced Vector Extensions 512 (Intel® AVX-512) Vector Neural Network Instructions (VNNI) and Intel® Advanced Matrix Extensions (Intel® AMX) on Intel CPUs as well as Intel X<sup>e</sup> Matrix Extensions (XMX) AI engines on Intel discrete GPUs. Moreover, Intel® Extension for PyTorch* provides easy GPU acceleration for Intel discrete GPUs through the PyTorch* xpu device.
```
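The README paragraph above names two integration points: CPU optimizations applied through the extension, and GPU acceleration through the PyTorch `xpu` device. As a hedged illustration (not code from this repository — `pick_backend` and its return labels are invented), a script could probe for them like this, with the imports guarded so the probe also runs where the packages are absent:

```python
def pick_backend():
    """Report which Intel-accelerated path is usable on this machine."""
    try:
        import torch
        import intel_extension_for_pytorch as ipex  # noqa: F401
    except ImportError:
        return "cpu"  # plain PyTorch eager mode, or no torch at all
    xpu = getattr(torch, "xpu", None)
    if xpu is not None and xpu.is_available():
        return "xpu"  # Intel discrete GPU via the PyTorch xpu device
    return "cpu-ipex"  # CPU path using AVX-512 VNNI / AMX kernels

print(pick_backend())
```

On a machine with an Intel discrete GPU and the extension installed this would report `xpu`; elsewhere it falls back to one of the CPU labels.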

docker/Dockerfile.prebuilt

Lines changed: 5 additions & 7 deletions
```diff
@@ -35,11 +35,10 @@ RUN update-alternatives --install /usr/bin/python python /usr/bin/python3 100
 
 WORKDIR /root
 
-ARG IPEX_VERSION=2.7.0
-ARG TORCHCCL_VERSION=2.7.0
-ARG PYTORCH_VERSION=2.7.0
-ARG TORCHAUDIO_VERSION=2.7.0
-ARG TORCHVISION_VERSION=0.22.0
+ARG IPEX_VERSION=2.8.0
+ARG PYTORCH_VERSION=2.8.0
+ARG TORCHAUDIO_VERSION=2.8.0
+ARG TORCHVISION_VERSION=0.23.0
 RUN python -m venv venv && \
     . ./venv/bin/activate && \
     python -m pip --no-cache-dir install --upgrade \
@@ -49,7 +48,7 @@ RUN python -m venv venv && \
     python -m pip install --no-cache-dir \
     torch==${PYTORCH_VERSION}+cpu torchvision==${TORCHVISION_VERSION}+cpu torchaudio==${TORCHAUDIO_VERSION}+cpu --index-url https://download.pytorch.org/whl/cpu && \
     python -m pip install --no-cache-dir \
-    intel_extension_for_pytorch==${IPEX_VERSION} oneccl_bind_pt==${TORCHCCL_VERSION} --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/cpu/us/ && \
+    intel_extension_for_pytorch==${IPEX_VERSION} --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/cpu/us/ && \
     python -m pip install intel-openmp && \
     python -m pip cache purge
 
@@ -67,7 +66,6 @@ RUN ENTRYPOINT=/usr/local/bin/entrypoint.sh && \
     echo "CMD=\"\"; for i in \${!CMDS[@]}; do CMD=\"\${CMD} \${CMDS[\$i]}\"; done;" >> ${ENTRYPOINT} && \
     echo ". ~/venv/bin/activate" >> ${ENTRYPOINT} && \
     echo "TMP=\$(python -c \"import torch; import os; print(os.path.abspath(os.path.dirname(torch.__file__)))\")" >> ${ENTRYPOINT} && \
-    echo ". \${TMP}/../oneccl_bindings_for_pytorch/env/setvars.sh" >> ${ENTRYPOINT} && \
     echo "echo \"**Note:** For better performance, please consider to launch workloads with command 'ipexrun'.\"" >> ${ENTRYPOINT} && \
     echo "exec \${CMD}" >> ${ENTRYPOINT} && \
     chmod +x ${ENTRYPOINT}
```
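The pins in the hunks above travel together: torch, torchaudio, and intel_extension_for_pytorch share version 2.8.0 while torchvision uses 0.23.0, and `oneccl_bind_pt` is dropped entirely. As a small sketch for installing the same stack outside Docker (the helper names are invented; the versions and URLs come straight from the Dockerfile), the pip lines the Dockerfile runs can be rendered like this:

```python
# Version pins copied from the Dockerfile.prebuilt hunk above.
PINS = {
    "IPEX_VERSION": "2.8.0",
    "PYTORCH_VERSION": "2.8.0",
    "TORCHAUDIO_VERSION": "2.8.0",
    "TORCHVISION_VERSION": "0.23.0",
}

def torch_install_cmd(pins):
    # Same CPU-wheel install line the Dockerfile executes inside its venv.
    return (
        "python -m pip install --no-cache-dir "
        f"torch=={pins['PYTORCH_VERSION']}+cpu "
        f"torchvision=={pins['TORCHVISION_VERSION']}+cpu "
        f"torchaudio=={pins['TORCHAUDIO_VERSION']}+cpu "
        "--index-url https://download.pytorch.org/whl/cpu"
    )

def ipex_install_cmd(pins):
    # The extension comes from Intel's extra wheel index.
    return (
        "python -m pip install --no-cache-dir "
        f"intel_extension_for_pytorch=={pins['IPEX_VERSION']} "
        "--extra-index-url https://pytorch-extension.intel.com/release-whl/stable/cpu/us/"
    )

print(torch_install_cmd(PINS))
print(ipex_install_cmd(PINS))
```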

docker/README.md

Lines changed: 3 additions & 3 deletions
````diff
@@ -10,7 +10,7 @@
 
 ```console
 $ cd $DOCKERFILE_DIR
-$ DOCKER_BUILDKIT=1 docker build -f Dockerfile.prebuilt -t intel-extension-for-pytorch:main .
+$ DOCKER_BUILDKIT=1 docker build -f Dockerfile.prebuilt -t intel-extension-for-pytorch:2.8.0 .
 ```
 
 Run the following commands to build a `conda` based container with Intel® Extension for PyTorch\* compiled from source:
@@ -20,14 +20,14 @@
 $ cd intel-extension-for-pytorch
 $ git submodule sync
 $ git submodule update --init --recursive
-$ DOCKER_BUILDKIT=1 docker build -f docker/Dockerfile.compile -t intel-extension-for-pytorch:main .
+$ DOCKER_BUILDKIT=1 docker build -f docker/Dockerfile.compile -t intel-extension-for-pytorch:2.8.0 .
 ```
 
 * Sanity Test
 
 When a docker image is built out, Run the command below to launch into a container:
 ```console
-$ docker run --rm -it intel-extension-for-pytorch:main bash
+$ docker run --rm -it intel-extension-for-pytorch:2.8.0 bash
 ```
 
 Then run the command below inside the container to verify correct installation.
````

docs/tutorials/getting_started.md

Lines changed: 2 additions & 2 deletions
````diff
@@ -1,6 +1,6 @@
 # Quick Start
 
-The following instructions assume you have installed the Intel® Extension for PyTorch\*. For installation instructions, refer to [Installation](../../../index.html#installation?platform=cpu&version=v2.7.0%2Bcpu).
+The following instructions assume you have installed the Intel® Extension for PyTorch\*. For installation instructions, refer to [Installation](../../../index.html#installation?platform=cpu&version=v2.8.0%2Bcpu).
 
 To start using the Intel® Extension for PyTorch\* in your code, you need to make the following changes:
 
@@ -157,4 +157,4 @@ with torch.inference_mode(), torch.cpu.amp.autocast(enabled=amp_enabled):
     print(gen_text, total_new_tokens, flush=True)
 ```
 
-More LLM examples, including usage of low precision data types are available in the [LLM Examples](https://github.com/intel/intel-extension-for-pytorch/tree/main/examples/cpu/llm) section.
+More LLM examples, including usage of low precision data types are available in the [LLM Examples](https://github.com/intel/intel-extension-for-pytorch/tree/release/2.8/examples/cpu/llm) section.
````
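The Quick Start file edited above tells readers to "make the following changes" — in essence, importing the extension and passing the model through `ipex.optimize`. A hedged sketch of that change (`prepare_model` is an invented wrapper, and the imports are guarded so the snippet degrades to plain eager mode when the extension is missing):

```python
def prepare_model(model=None):
    try:
        import torch
        import intel_extension_for_pytorch as ipex
    except ImportError:
        return model, "unoptimized"  # extension (or torch) not installed
    if model is None:
        model = torch.nn.Linear(8, 8)  # stand-in for a real model
    model.eval()
    # The one-line change Quick Start describes: hand the eval-mode
    # model to ipex.optimize to apply CPU operator optimizations.
    model = ipex.optimize(model)
    return model, "ipex-optimized"

_, status = prepare_model()
print(status)
```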

docs/tutorials/installation.md

Lines changed: 2 additions & 2 deletions
```diff
@@ -1,8 +1,8 @@
 Installation
 ============
 
-Select your preferences and follow the installation instructions provided on the [Installation page](../../../index.html#installation?platform=cpu&version=v2.7.0%2Bcpu).
+Select your preferences and follow the installation instructions provided on the [Installation page](../../../index.html#installation?platform=cpu&version=v2.8.0%2Bcpu).
 
 After successful installation, refer to the [Quick Start](getting_started.md) and [Examples](examples.md) sections to start using the extension in your code.
 
-**NOTE:** For detailed instructions on installing and setting up the environment for Large Language Models (LLM), as well as example scripts, refer to the [LLM best practices](https://github.com/intel/intel-extension-for-pytorch/tree/main/examples/cpu/llm).
+**NOTE:** For detailed instructions on installing and setting up the environment for Large Language Models (LLM), as well as example scripts, refer to the [LLM best practices](https://github.com/intel/intel-extension-for-pytorch/tree/release/2.8/examples/cpu/llm).
```

docs/tutorials/introduction.rst

Lines changed: 1 addition & 1 deletion
```diff
@@ -16,7 +16,7 @@ the `Large Language Models (LLM) <llm.html>`_ section.
 
 Get Started
 -----------
-- `Installation <../../../index.html#installation?platform=cpu&version=v2.7.0%2Bcpu>`_
+- `Installation <../../../index.html#installation?platform=cpu&version=v2.8.0%2Bcpu>`_
 - `Quick Start <getting_started.md>`_
 - `Examples <examples.md>`_
 
```
docs/tutorials/llm.rst

Lines changed: 1 addition & 1 deletion
```diff
@@ -30,7 +30,7 @@ Verified for distributed inference mode via DeepSpeed
 
 *Note*: The above verified models (including other models in the same model family, like "codellama/CodeLlama-7b-hf" from LLAMA family) are well supported with all optimizations like indirect access KV cache, fused ROPE, and customized linear kernels. We are working in progress to better support the models in the tables with various data types. In addition, more models will be optimized in the future.
 
-Please check `LLM best known practice <https://github.com/intel/intel-extension-for-pytorch/tree/main/examples/cpu/llm>`_ for instructions to install/setup environment and example scripts.
+Please check `LLM best known practice <https://github.com/intel/intel-extension-for-pytorch/tree/release/2.8/examples/cpu/llm>`_ for instructions to install/setup environment and example scripts.
 
 Module Level Optimization API for customized LLM (Prototype)
 ------------------------------------------------------------
```

docs/tutorials/llm/llm_optimize.md

Lines changed: 2 additions & 4 deletions
````diff
@@ -9,12 +9,10 @@ This API currently supports for inference workloads of certain models.
 API documentation is available at [API Docs page](https://intel.github.io/intel-extension-for-pytorch/cpu/latest/tutorials/api_doc.html#ipex.llm.optimize),
 and supported model list can be found at [this page](https://intel.github.io/intel-extension-for-pytorch/cpu/latest/tutorials/llm.html#ipexllm-optimized-model-list-for-inference).
 
-For LLM fine-tuning, please check the [LLM fine-tuning tutorial](https://github.com/intel/intel-extension-for-pytorch/tree/main/examples/cpu/llm/fine-tuning).
-
 ## Pseudocode of Common Usage Scenarios
 
 The following sections show pseudocode snippets to invoke Intel® Extension for PyTorch\* APIs to work with LLM models.
-Complete examples can be found at [the Example directory](https://github.com/intel/intel-extension-for-pytorch/tree/main/examples/cpu/llm/inference).
+Complete examples can be found at [the Example directory](https://github.com/intel/intel-extension-for-pytorch/tree/release/2.8/examples/cpu/llm/inference).
 
 ### FP32/BF16
 
@@ -59,7 +57,7 @@ model = ipex.llm.optimize(model, quantization_config=qconfig, low_precision_chec
 
 Distributed inference can be performed with `DeepSpeed`. Based on original Intel® Extension for PyTorch\* scripts, the following code changes are required.
 
-Check [LLM distributed inference examples](https://github.com/intel/intel-extension-for-pytorch/tree/main/examples/cpu/llm/inference/distributed) for complete codes.
+Check [LLM distributed inference examples](https://github.com/intel/intel-extension-for-pytorch/tree/release/2.8/examples/cpu/llm/inference/distributed) for complete codes.
 
 ``` python
 import torch
````
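The `llm_optimize.md` hunks above reference `ipex.llm.optimize`, which the surrounding context calls with a model plus dtype or `quantization_config` arguments. A hedged sketch of that call shape (`optimize_llm` is an invented wrapper with guarded imports; the caller is expected to supply a Hugging Face causal-LM, which this sketch deliberately does not download):

```python
def optimize_llm(model=None, use_bf16=True):
    """Wrap a causal-LM with ipex.llm.optimize, if the deps are present."""
    try:
        import torch
        import intel_extension_for_pytorch as ipex
    except ImportError:
        return model, "deps-missing"  # torch or the extension not installed
    if model is None:
        return model, "no-model"  # caller must supply a transformers model
    dtype = torch.bfloat16 if use_bf16 else torch.float32
    # Module-level LLM optimizations: indirect-access KV cache,
    # fused ROPE, customized linear kernels (per llm.rst above).
    model = ipex.llm.optimize(model, dtype=dtype, inplace=True)
    return model, "optimized"

_, status = optimize_llm()
print(status)
```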

examples/cpu/inference/cpp/README.md

Lines changed: 4 additions & 4 deletions
````diff
@@ -16,15 +16,15 @@ We can have `libtorch` and `libintel-ext-pt` installed via the following command
 Download zip file of `libtorch` and decompress it:
 
 ```bash
-wget https://download.pytorch.org/libtorch/cpu/libtorch-cxx11-abi-shared-with-deps-2.7.0%2Bcpu.zip
-unzip libtorch-cxx11-abi-shared-with-deps-2.7.0+cpu.zip
+wget https://download.pytorch.org/libtorch/cpu/libtorch-cxx11-abi-shared-with-deps-2.8.0%2Bcpu.zip
+unzip libtorch-cxx11-abi-shared-with-deps-2.8.0+cpu.zip
 ```
 
 Download and execute `libintel-ext-pt` installation script:
 
 ```bash
-wget https://intel-extension-for-pytorch.s3.amazonaws.com/libipex/cpu/libintel-ext-pt-cxx11-abi-2.7.0%2Bcpu.run
-bash libintel-ext-pt-cxx11-abi-2.7.0+cpu.run install ./libtorch
+wget https://intel-extension-for-pytorch.s3.amazonaws.com/libipex/cpu/libintel-ext-pt-cxx11-abi-2.8.0%2Bcpu.run
+bash libintel-ext-pt-cxx11-abi-2.8.0+cpu.run install ./libtorch
 ```
 
 Please view the `cppsdk` part in [the installation guide](https://intel.github.io/intel-extension-for-pytorch/index.html#installation?platform=cpu)
````

scripts/compile_bundle.py

Lines changed: 1 addition & 1 deletion
```diff
@@ -353,7 +353,7 @@ def process(*args):
     if BASEDIR != SCRIPTDIR:
         assert (
             args.ver_ipex == ""
-        ), "Argument --ver-ipex cannot be set if you run the script from a exisiting source code directory."
+        ), "Argument --ver-ipex cannot be set if you run the script from a existing source code directory."
     else:
         assert (
             args.ver_ipex != ""
```
