Commit 9878e4f

update docs for FastBert (#1496)
1 parent 99a70a6 commit 9878e4f

3 files changed: +4, -6 lines


docs/tutorials/features.rst

Lines changed: 1 addition & 3 deletions

@@ -201,9 +201,7 @@ For more detailed information, check `HyperTune <features/hypertune.md>`_.
 Fast BERT Optimization (Experimental, *NEW feature from 2.0.0*)
 ---------------------------------------------------------------
 
-Intel proposed a technique, Tensor Processing Primitives (TPP), a programming abstraction striving for an efficient, portable implementation of DL workloads with high productivity. TPPs define a compact yet versatile set of 2D-tensor operators (or a virtual Tensor ISA), which can subsequently be utilized as building blocks to construct complex operators on high-dimensional tensors.
-
-Implementation of TPP is integrated into Intel® Extension for PyTorch\*. BERT could benefit from this new technique. An API `ipex.fast_bert` is provided for simple usage.
+Intel proposed a technique to speed up BERT workloads. The implementation is integrated into Intel® Extension for PyTorch\*. An API `ipex.fast_bert` is provided for simple usage.
 
 For more detailed information, check `Fast BERT <features/fast_bert.md>`_.
 

docs/tutorials/features/fast_bert.md

Lines changed: 2 additions & 2 deletions

@@ -3,9 +3,9 @@ Fast BERT (Experimental)
 
 ### Feature Description
 
-Intel proposed a technique, Tensor Processing Primitives (TPP), a programming abstraction striving for an efficient, portable implementation of DL workloads with high productivity. TPPs define a compact yet versatile set of 2D-tensor operators (or a virtual Tensor ISA), which can subsequently be utilized as building blocks to construct complex operators on high-dimensional tensors. Detailed contents are available at [*Tensor Processing Primitives: A Programming Abstraction for Efficiency and Portability in Deep Learning & HPC Workloads*](https://arxiv.org/pdf/2104.05755.pdf).
+Intel proposed a technique to speed up BERT workloads. The implementation leverages the idea from [*Tensor Processing Primitives: A Programming Abstraction for Efficiency and Portability in Deep Learning & HPC Workloads*](https://arxiv.org/pdf/2104.05755.pdf).
 
-Implementation of TPP is integrated into Intel® Extension for PyTorch\*. BERT could benefit from this new technique, for both training and inference.
+The implementation is integrated into Intel® Extension for PyTorch\*. BERT can benefit from this technique for both training and inference.
 
 ### Prerequisite
 
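The `ipex.fast_bert` API described in the hunk above can be sketched roughly as follows. This is a hypothetical usage example, not part of the commit: the Hugging Face model source, the `"bert-base-uncased"` checkpoint, and the `dtype` keyword are assumptions for illustration, and the imports are guarded so the sketch degrades gracefully where the extension is not installed.

```python
# Hypothetical sketch of applying ipex.fast_bert to a BERT model.
# "bert-base-uncased", the transformers dependency, and the dtype
# keyword are illustrative assumptions, not taken from this commit.
applied = False
try:
    import torch
    import intel_extension_for_pytorch as ipex
    from transformers import BertModel

    model = BertModel.from_pretrained("bert-base-uncased")
    model.eval()
    # Replace supported BERT submodules with the optimized implementation;
    # per the docs, this works for both training and inference paths.
    model = ipex.fast_bert(model, dtype=torch.bfloat16)
    applied = True
except Exception:
    pass  # torch / ipex / transformers unavailable in this environment

print("fast_bert applied:", applied)
```

When the extension is present, the returned model is used like the original module; otherwise the guard leaves `applied` as `False`.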

docs/tutorials/releases.md

Lines changed: 1 addition & 1 deletion

@@ -7,7 +7,7 @@ We are pleased to announce the release of Intel® Extension for PyTorch\* 2.0.0-
 
 ### Highlights
 
-- **Fast BERT optimization (Experimental)**: Intel introduced a new technique, Tensor Processing Primitives (TPP), a programming abstraction striving for an efficient, portable implementation of DL workloads with high productivity. Intel® Extension for PyTorch\* integrated this TPP implementation, which benefits the BERT model, especially training. A new API `ipex.fast_bert` is provided to try this new optimization. More detailed information can be found at [Fast Bert Feature](./features/fast_bert.md).
+- **Fast BERT optimization (Experimental)**: Intel introduced a new technique to speed up BERT workloads. Intel® Extension for PyTorch\* integrates this implementation, which benefits the BERT model, especially training. A new API `ipex.fast_bert` is provided to try this new optimization. More detailed information can be found at [Fast Bert Feature](./features/fast_bert.md).
 
 - **Work with torch.compile as a backend (Experimental)**: PyTorch 2.0 introduces a new feature, `torch.compile`, to speed up PyTorch execution. We've enabled Intel® Extension for PyTorch\* as a backend of `torch.compile`, which can leverage this new PyTorch API's power of graph capture and provide additional optimizations based on these graphs.
 The usage of this new feature is quite simple as below:
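The usage snippet the release note refers to is cut off in this excerpt. A minimal sketch, assuming that importing the extension registers an `"ipex"` backend for `torch.compile` and using a tiny `Linear` model as an illustrative stand-in:

```python
# Minimal sketch of using Intel Extension for PyTorch as a
# torch.compile backend. The Linear model is a stand-in; the "ipex"
# backend name is assumed to be registered by importing the extension.
compiled = False
try:
    import torch
    import intel_extension_for_pytorch as ipex  # noqa: F401  (registers "ipex")

    model = torch.nn.Linear(8, 8).eval()
    # Ask torch.compile to capture the graph and hand it to the
    # extension's backend for additional optimization.
    model = torch.compile(model, backend="ipex")
    compiled = True
except Exception:
    pass  # torch or the extension is unavailable in this environment

print("compiled with ipex backend:", compiled)
```

The compiled model is then called exactly like the eager one; the backend choice only affects how captured graphs are optimized.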
