Replace Executorch with ExecuTorch, Part 6/N (#471)
Summary:
Codemodding the rest. Adding a lintrunner to prevent further regressions
Pull Request resolved: #471
Test Plan: Run lintrunner, CI
Reviewed By: cccclai
Differential Revision: D49579923
Pulled By: mergennachin
fbshipit-source-id: 8ee5669080ea923d303ce959bf2c19925c5df6b0
`docs/website/docs/ir_spec/03_backend_dialect.md` (2 additions, 2 deletions)
@@ -23,7 +23,7 @@ To lower edge ops to backend ops, a pass will perform pattern matching to identi

* `transform()`. An API on `ExportProgram` that allows users to provide custom passes. Note that this is not guarded by any validator, so the soundness of the program is not guaranteed.
* [`ExecutorchBackendConfig.passes`](https://github.com/pytorch/executorch/blob/main/exir/capture/_config.py#L40). If added here, the pass will be part of the lowering process from backend dialect to `ExecutorchProgram`.

-Example: one such pass is `QuantFusion`. This pass takes a "canonical quantization pattern", i.e. "dequant - some_op - quant", and fuses it into a single backend-specific operator, i.e. `quantized_decomposed::some_op`. You can find more details [here](../tutorials/short_term_quantization_flow.md). Another, simpler example is [here](https://github.com/pytorch/executorch/blob/main/exir/passes/replace_edge_with_backend_pass.py#L20), where we replace sym_size operators with ones that are understood by Executorch.
+Example: one such pass is `QuantFusion`. This pass takes a "canonical quantization pattern", i.e. "dequant - some_op - quant", and fuses it into a single backend-specific operator, i.e. `quantized_decomposed::some_op`. You can find more details [here](../tutorials/short_term_quantization_flow.md). Another, simpler example is [here](https://github.com/pytorch/executorch/blob/main/exir/passes/replace_edge_with_backend_pass.py#L20), where we replace sym_size operators with ones that are understood by ExecuTorch.

## API
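As a point of reference, here is a minimal sketch of supplying a custom pass through the second route above. It assumes the exir API of this revision; `MyFusionPass` and `edge_program` are placeholders, not names taken from this diff:

```python
from executorch.exir import ExecutorchBackendConfig

# `edge_program` is assumed to come from an earlier capture/to_edge step;
# `MyFusionPass` stands in for a user-defined pass such as `QuantFusion`.
executorch_program = edge_program.to_executorch(
    ExecutorchBackendConfig(passes=[MyFusionPass()])
)
```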
@@ -38,7 +38,7 @@ Then the operator can be accessed/used from the passes. The `CompositeImplicitAu

2. Ensures the retraceability of `ExportProgram`. Once retraced, the backend operator will be decomposed into the ATen ops used in the pattern.

## Op Set
-Unlike the edge dialect, which has a well-defined op set, the backend dialect is target-aware, so we allow users to use our API to register target-aware ops, which are grouped by namespaces. Here are some examples: `executorch_prims` are ops used by the Executorch runtime to perform operations on `SymInt`s; `quantized_decomposed` are ops that fuse edge operators for quantization purposes and are meaningful to targets that support quantization.
+Unlike the edge dialect, which has a well-defined op set, the backend dialect is target-aware, so we allow users to use our API to register target-aware ops, which are grouped by namespaces. Here are some examples: `executorch_prims` are ops used by the ExecuTorch runtime to perform operations on `SymInt`s; `quantized_decomposed` are ops that fuse edge operators for quantization purposes and are meaningful to targets that support quantization.

* `executorch_prims::add.int(SymInt a, SymInt b) -> SymInt`
with their native functions (or kernels; we use these two terms interchangeably),
-either defined in the ATen library or in other user-defined libraries. The ATen-compliant operators supported by Executorch have these traits (the same actually holds for custom ops):
+either defined in the ATen library or in other user-defined libraries. The ATen-compliant operators supported by ExecuTorch have these traits (the same actually holds for custom ops):

1. Out variant: these ops take an `out` argument.
2. Functional except `out`: these ops shouldn't mutate input tensors other than `out`, and shouldn't create aliasing views.

To give an example, `aten::add_.Tensor` is not supported since it mutates an input tensor, while `aten::add.out` is supported.
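For reference, the out variant's schema in `native_functions.yaml` is `add.out(Tensor self, Tensor other, *, Scalar alpha=1, Tensor(a!) out) -> Tensor(a!)`; the `(a!)` annotation marks `out` as the only argument that is written to, which is exactly the pair of traits above.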
-ATen mode is a build-time option to link the ATen library into the Executorch runtime, so those registered ATen-compliant ops can use their original ATen kernels.
+ATen mode is a build-time option to link the ATen library into the ExecuTorch runtime, so those registered ATen-compliant ops can use their original ATen kernels.

On the other hand, we need to provide our custom kernels if ATen mode is off (a.k.a. lean mode).

-In the next section we will walk through the steps to register ATen-compliant ops into the Executorch runtime.
+In the next section we will walk through the steps to register ATen-compliant ops into the ExecuTorch runtime.

## Step by step guide
There are two branches for this use case:

* ATen mode. In this case we expect the exported model to be able to run with ATen kernels.
* Lean mode. This requires ATen-compliant op implementations using `ETensor`.

-In a nutshell, we need the following steps in order for an ATen-compliant op to work on Executorch:
+In a nutshell, we need the following steps in order for an ATen-compliant op to work on ExecuTorch:

#### ATen mode:
1. Define a target for selective build (`et_operator_library` macro).
2. Pass this target to codegen using the `executorch_generated_lib` macro.
-3. Hook up the generated lib into the Executorch runtime.
+3. Hook up the generated lib into the ExecuTorch runtime.

For more details on how to use selective build, check [Selective Build](https://www.internalfb.com/intern/staticdocs/executorch/docs/tutorials/custom_ops/#selective-build). A hedged sketch of these targets appears after the lean-mode list below.
#### Lean mode:
1. Declare the op name in `functions.yaml`. Detailed instructions can be found in [Declare the operator in a YAML file](https://www.internalfb.com/code/fbsource/xplat/executorch/kernels/portable/README.md).
-2. (not required if using ATen mode) Implement the kernel for your operator using `ETensor`. Executorch provides a portable library for frequently used ATen-compliant ops. Check if the op you need is already there, or you can write your own kernel.
+2. (not required if using ATen mode) Implement the kernel for your operator using `ETensor`. ExecuTorch provides a portable library for frequently used ATen-compliant ops. Check if the op you need is already there, or you can write your own kernel.
3. Specify the kernel namespace and function name in `functions.yaml` so codegen knows how to bind the operator to its kernel.
-4. Let the codegen machinery generate code for either ATen mode or lean mode, and hook up the generated lib into the Executorch runtime.
+4. Let the codegen machinery generate code for either ATen mode or lean mode, and hook up the generated lib into the ExecuTorch runtime.
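To make the wiring concrete, here is a minimal, hypothetical Buck sketch of the ATen-mode steps; the target names, the op list, and the `aten_mode` attribute are illustrative assumptions, not taken from this diff:

```python
# Step 1: a selective-build target listing the ops the model needs.
et_operator_library(
    name = "model_ops",
    ops = ["aten::add.out"],
)

# Step 2: pass the op list to codegen; the generated library is then
# linked into the runtime binary (step 3).
executorch_generated_lib(
    name = "add_lib",
    aten_mode = True,  # assumed attribute for building against ATen kernels
    deps = [":model_ops"],
)
```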
### Case Study
Let's say a model uses an ATen-compliant operator `aten::add.out`.

@@ -90,7 +90,7 @@ The corresponding `functions.yaml` for this operator looks like:

Notice that there are some caveats:
#### Caveats
-* `dispatch` and `CPU` are legacy fields; they don't mean anything in the Executorch context.
+* `dispatch` and `CPU` are legacy fields; they don't mean anything in the ExecuTorch context.
* The namespace `aten` is omitted.
* We don't need to write the `aten::add.out` function schema because we will use the schema definition in `native_functions.yaml` as our source of truth.
* The kernel namespace in the yaml file is `custom` instead of `custom::native`. This is because codegen appends a `native` namespace automatically. It also means the kernel always needs to be defined under `<name>::native`.
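Concretely, if `functions.yaml` names the kernel `custom::add_out` (a hypothetical kernel name), the C++ implementation must be a function `add_out` defined in the namespace `custom::native`.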
@@ -121,9 +121,9 @@ executorch_generated_lib(

```python
executorch_generated_lib(
    # ... (attributes elided in this hunk)
)
```

### Usage of generated lib
-In the case study above, we eventually have `add_lib`, a C++ library responsible for registering `aten::add.out` into the Executorch runtime.
+In the case study above, we eventually have `add_lib`, a C++ library responsible for registering `aten::add.out` into the ExecuTorch runtime.

-In our Executorch binary target, add `add_lib` as a dependency:
+In our ExecuTorch binary target, add `add_lib` as a dependency:

```python
cxx_binary(
    name = "executorch_bin",
    # ... (remaining attributes elided in this hunk)
)
```
@@ -138,15 +138,15 @@ cxx_binary(

To facilitate custom operator registration, we provide the following APIs:

- `functions.yaml`: ATen-compliant operator schemas and kernel metadata are defined in this file.
-- `executorch_generated_lib`: the Buck rule that calls the Executorch codegen system and encapsulates the generated C++ source files into libraries. If only ATen-compliant operators are included, only one library will be generated:
+- `executorch_generated_lib`: the Buck rule that calls the ExecuTorch codegen system and encapsulates the generated C++ source files into libraries. If only ATen-compliant operators are included, only one library will be generated:
-- `<name>`: contains C++ source files to register ATen-compliant operators. Required by the Executorch runtime.
+- `<name>`: contains C++ source files to register ATen-compliant operators. Required by the ExecuTorch runtime.
- Input: most of the input fields are self-explanatory.
- `deps`: kernel libraries - can be custom kernels or portable kernels (see the portable kernel library [README.md](https://fburl.com/code/zlgs6zzf) for how to add more kernels) - need to be provided. Selective-build-related targets should also be passed into the generated libraries through `deps`.
- `define_static_targets`: if true, we will generate a `<name>_static` library with static linkage. See the docstring for more information.
- `functions_yaml_target`: the target pointing to `functions.yaml`. See the `ATen-compliant Operator Registration` section for more details.

-We also provide a selective build system to allow users to select operators from both `functions.yaml` and `custom_ops.yaml` into the Executorch build. See the [Selective Build](https://www.internalfb.com/intern/staticdocs/executorch/docs/tutorials/custom_ops/#selective-build) section.
+We also provide a selective build system to allow users to select operators from both `functions.yaml` and `custom_ops.yaml` into the ExecuTorch build. See the [Selective Build](https://www.internalfb.com/intern/staticdocs/executorch/docs/tutorials/custom_ops/#selective-build) section.
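Putting those fields together, a hypothetical `executorch_generated_lib` invocation might look like the following; every target name and path here is an illustrative assumption:

```python
executorch_generated_lib(
    name = "add_lib",
    # Target pointing at the functions.yaml that declares the op schemas.
    functions_yaml_target = ":functions_yaml",
    # Kernel libraries (custom or portable) plus selective-build targets.
    deps = [
        ":add_kernel",
        ":model_ops",
    ],
    # Also emit an `add_lib_static` variant with static linkage.
    define_static_targets = True,
)
```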
@@ -162,7 +162,7 @@ Nov 14 16:48:07 devvm11149.prn0.facebook.com bento[1985271]: [354870826409]Execu

```
Nov 14 16:48:07 devvm11149.prn0.facebook.com bento[1985271]: [354870830000]Executor.cpp:267 In function init(), assert failed (num_missing_ops == 0): There are 1 operators missing from registration to Executor. See logs for details
```

-This error message indicates that the operators are not registered into the Executorch runtime.
+This error message indicates that the operators are not registered into the ExecuTorch runtime.

For lean mode, please make sure the ATen-compliant operator schema is added to your `functions.yaml`. For more guidance on how to write a `functions.yaml` file, please refer to [Declare the operator in a YAML file](https://www.internalfb.com/code/fbsource/xplat/executorch/kernels/portable/README.md).