
Commit 6e23534

format update (#3764)
* format update
* add wheel package installation
* llm env setup BKC for release version
1 parent a6acfed commit 6e23534

File tree

6 files changed: 55 additions, 141 deletions


docs/design_doc/cpu/isa_dyndisp.md

Lines changed: 0 additions & 3 deletions
This file was deleted.

docs/tutorials/features/optimizer_fusion.md

Lines changed: 0 additions & 36 deletions
This file was deleted.

docs/tutorials/features/split_sgd.rst

Lines changed: 0 additions & 91 deletions
This file was deleted.

examples/cpu/inference/python/models/dlrm/dlrm_main.py

Lines changed: 2 additions & 1 deletion
@@ -1154,7 +1154,8 @@ def construct_model(args):
     # the optimizer update will be applied in the backward pass, in this case through a fused op.
     # TorchRec will use the FBGEMM implementation of EXACT_ADAGRAD.
     # For GPU devices, a fused CUDA kernel is invoked. For CPU, FBGEMM_GPU invokes CPU kernels
-    # https://github.com/pytorch/FBGEMM/blob/2cb8b0dff3e67f9a009c4299defbd6b99cc12b8f/fbgemm_gpu/fbgemm_gpu/split_table_batched_embeddings_ops.py#L676-L678
+    # https://github.com/pytorch/FBGEMM/blob/2cb8b0dff3e67f9a009c4299defbd6b99cc12b8f
+    # /fbgemm_gpu/fbgemm_gpu/split_table_batched_embeddings_ops.py#L676-L678
 
     # Note that lr_decay, weight_decay and initial_accumulator_value for Adagrad optimizer in FBGEMM v0.3.2
     # cannot be specified below. This equivalently means that all these parameters are hardcoded to zero.
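The comments in this hunk pin the described Adagrad behavior to FBGEMM v0.3.2 and to a specific FBGEMM revision. A hedged way to check which FBGEMM build is actually present, and to pin a wheel if one is needed, is sketched below; the version pin is illustrative and not taken from this commit, so compatibility with the installed torch/TorchRec versions should be verified first.

```bash
# Locate the installed fbgemm_gpu package so the file referenced in the comment
# (split_table_batched_embeddings_ops.py) can be inspected locally.
python -c "import fbgemm_gpu, os; print(os.path.dirname(fbgemm_gpu.__file__))"

# Illustrative only: pin a specific FBGEMM wheel if a matching version is required.
pip install fbgemm-gpu==0.3.2
```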

examples/cpu/llm/README.md

Lines changed: 52 additions & 9 deletions
@@ -7,40 +7,84 @@ And a set of data types are supported for various scenarios, including FP32, BF1
 
 # 2. Environment Setup
 
-**Note**: The instructions in this section will setup an environment with a recent PyTorch\* nightly build and **a latest source build of IPEX**.
-If you would like to use stable PyTorch\* and IPEX release versions, please refer to the instructions [in the release branch](https://github.com/intel/intel-extension-for-pytorch/blob/v2.7.0%2Bcpu/examples/cpu/llm/README.md#2-environment-setup),
-in which IPEX is installed via prebuilt wheels using `pip install` rather than source code building.
+## 2.1 [RECOMMENDED] Docker-based environment setup with pre-built wheels
 
-## 2.1 [Recommended] Docker-based environment setup with compilation from source
+```bash
+# Get the Intel® Extension for PyTorch\* source code
+git clone https://github.com/intel/intel-extension-for-pytorch.git
+cd intel-extension-for-pytorch
+git checkout v2.8.0+cpu
+git submodule sync
+git submodule update --init --recursive
+
+# Build an image with the provided Dockerfile by installing from Intel® Extension for PyTorch\* prebuilt wheel files
+# To have a custom ssh server port for multi-nodes run, please add --build-arg PORT_SSH=<CUSTOM_PORT> ex: 2345, otherwise use the default 22 SSH port
+DOCKER_BUILDKIT=1 docker build -f examples/cpu/llm/Dockerfile --build-arg PORT_SSH=2345 -t ipex-llm:2.8.0 .
+
+# Run the container with command below
+docker run --rm -it --privileged -v /dev/shm:/dev/shm ipex-llm:2.8.0 bash
+
+# When the command prompt shows inside the docker container, enter llm examples directory
+cd llm
+
+# Activate environment variables
+source ./tools/env_activate.sh inference
+```
+
+## 2.2 Conda-based environment setup with pre-built wheels
+
+```bash
+# Get the Intel® Extension for PyTorch\* source code
+git clone https://github.com/intel/intel-extension-for-pytorch.git
+cd intel-extension-for-pytorch
+git checkout v2.8.0+cpu
+git submodule sync
+git submodule update --init --recursive
+
+# GCC 12.3 is required. Installation can be taken care of by the environment configuration script.
+# Create a conda environment
+conda create -n llm python=3.10 -y
+conda activate llm
+
+# Setup the environment with the provided script
+cd examples/cpu/llm
+bash ./tools/env_setup.sh 7
+
+# Activate environment variables
+source ./tools/env_activate.sh inference
+```
+
+## 2.3 Docker-based environment setup with compilation from source
 
 ```bash
 # Get the Intel® Extension for PyTorch\* source code
 git clone https://github.com/intel/intel-extension-for-pytorch.git
 cd intel-extension-for-pytorch
+git checkout v2.8.0+cpu
 git submodule sync
 git submodule update --init --recursive
 
 # Build an image with the provided Dockerfile by compiling Intel® Extension for PyTorch\* from source
 # To have a custom ssh server port for multi-nodes run, please add --build-arg PORT_SSH=<CUSTOM_PORT> ex: 2345, otherwise use the default 22 SSH port
-docker build -f examples/cpu/llm/Dockerfile --build-arg COMPILE=ON --build-arg PORT_SSH=2345 -t ipex-llm:main .
+docker build -f examples/cpu/llm/Dockerfile --build-arg COMPILE=ON --build-arg PORT_SSH=2345 -t ipex-llm:2.8.0 .
 
 # Run the container with command below
-docker run --rm -it --net host --privileged -v /dev/shm:/dev/shm ipex-llm:main bash
+docker run --rm -it --net host --privileged -v /dev/shm:/dev/shm ipex-llm:2.8.0 bash
 
 # When the command prompt shows inside the docker container, enter llm examples directory
 cd llm
 
 # Activate environment variables
-# set bash script argument to "inference" or "fine-tuning" for different usages
 source ./tools/env_activate.sh inference
 ```
 
-## 2.2 Conda-based environment setup with compilation from source
+## 2.4 Conda-based environment setup with compilation from source
 
 ```bash
 # Get the Intel® Extension for PyTorch\* source code
 git clone https://github.com/intel/intel-extension-for-pytorch.git
 cd intel-extension-for-pytorch
+git checkout v2.8.0+cpu
 git submodule sync
 git submodule update --init --recursive
 
@@ -54,7 +98,6 @@ cd examples/cpu/llm
 bash ./tools/env_setup.sh 3
 
 # Activate environment variables
-# set bash script argument to "inference" or "fine-tuning" for different usages
 source ./tools/env_activate.sh inference
 ```
 
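Whichever setup path from the README hunks above is followed, a quick sanity check from the activated shell confirms the installed wheels are importable. A minimal sketch (the exact version strings printed depend on the wheels that were installed):

```bash
# Both imports should succeed and report versions consistent with the v2.8.0+cpu tag checked out above.
python -c "import torch; print('torch', torch.__version__)"
python -c "import intel_extension_for_pytorch as ipex; print('ipex', ipex.__version__)"
```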

examples/cpu/llm/tools/env_setup.sh

Lines changed: 1 addition & 1 deletion
@@ -69,7 +69,7 @@ if [ $((${MODE} & 0x02)) -ne 0 ]; then
     mkdir ${WHEELFOLDER}
 
     # Install deps
-    python -m pip install cmake==3.28.4 ninja
+    python -m pip install cmake==3.28.4 ninja wheel
 
     echo "#!/bin/bash" > ${AUX_INSTALL_SCRIPT}
     if [ $((${MODE} & 0x04)) -ne 0 ]; then
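For context, env_setup.sh selects its work with a numeric bit mask, which is why the README hunks above pass `7` (pre-built wheels) or `3` (source build) and why this hunk sits under the `MODE & 0x02` guard. The sketch below only illustrates that dispatch pattern with hypothetical step labels; the real meaning of each bit is defined in env_setup.sh itself.

```bash
#!/bin/bash
# Hypothetical illustration of the bit-mask dispatch used by env_setup.sh.
MODE=${1:-7}   # e.g. 7 enables bits 0x01, 0x02 and 0x04; 3 enables 0x01 and 0x02
if [ $((MODE & 0x01)) -ne 0 ]; then echo "run step guarded by bit 0x01"; fi
if [ $((MODE & 0x02)) -ne 0 ]; then echo "run step guarded by bit 0x02 (wheel folder and deps, as in the hunk above)"; fi
if [ $((MODE & 0x04)) -ne 0 ]; then echo "run step guarded by bit 0x04"; fi
```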
