
Commit a89017d

mergennachin authored and facebook-github-bot committed
Replace Executorch with ExecuTorch, Part 6/N (#471)
Summary: Codemodding the rest. Adding a lintrunner to prevent further regressions.

Pull Request resolved: #471

Test Plan: Run lintrunner, CI

Reviewed By: cccclai

Differential Revision: D49579923

Pulled By: mergennachin

fbshipit-source-id: 8ee5669080ea923d303ce959bf2c19925c5df6b0
1 parent c857a54 commit a89017d

File tree

24 files changed: +134 −103 lines

.ci/docker/README.md

Lines changed: 2 additions & 2 deletions
@@ -1,7 +1,7 @@
-# Docker images for Executorch CI
+# Docker images for ExecuTorch CI

 This directory contains everything needed to build the Docker images
-that are used in Executorch CI. The content of this directory are copied
+that are used in ExecuTorch CI. The content of this directory are copied
 from PyTorch CI https://github.com/pytorch/pytorch/tree/main/.ci/docker.
 It also uses the same directory structure as PyTorch.

.lintrunner.toml

Lines changed: 31 additions & 0 deletions
@@ -122,3 +122,34 @@ init_command = [
     '--dry-run={{DRYRUN}}',
     '--requirement=requirements-lintrunner.txt',
 ]
+
+[[linter]]
+code = 'ETCAPITAL'
+include_patterns = [
+    '**/*.py',
+    '**/*.pyi',
+    '**/*.h',
+    '**/*.cpp',
+    '**/*.md',
+    '**/*.rst',
+]
+exclude_patterns = [
+    'third-party/**',
+    '**/third-party/**',
+]
+command = [
+    'python',
+    '-m',
+    'lintrunner_adapters',
+    'run',
+    'grep_linter',
+    '--pattern= Executorch\W+',
+    '--linter-name=ExecuTorchCapitalization',
+    '--error-name=Incorrect capitalization for ExecuTorch',
+    """--error-description=
+    Please use ExecuTorch with capital T for consistency.
+    https://fburl.com/workplace/nsx6hib2
+    """,
+    '--',
+    '@{{PATHSFILE}}',
+]
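The heart of the new linter is the `--pattern= Executorch\W+` regex handed to `grep_linter`. A quick sketch of what that pattern does and does not flag (plain Python `re`, purely illustrative):

```python
import re

# Same regex the grep_linter invocation above passes via --pattern:
# a leading space, the miscapitalized word, then one or more non-word chars.
PATTERN = re.compile(r" Executorch\W+")

flagged = "that are used in Executorch CI"
clean = "that are used in ExecuTorch CI"

print(PATTERN.search(flagged) is not None)  # True: lowercase 't' is caught
print(PATTERN.search(clean) is not None)    # False: capital 'T' passes
```

Note that the pattern deliberately requires a leading space and a trailing non-word character, so identifiers like `executorch_generated_lib` or `ExecutorchBackendConfig` are not flagged.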

docs/website/docs/ir_spec/03_backend_dialect.md

Lines changed: 2 additions & 2 deletions
@@ -23,7 +23,7 @@ To lower edge ops to backend ops, a pass will perform pattern matching to identi
 * `transform()`. An API on `ExportProgram` that allows users to provide custom passes. Note that this is not guarded by any validator so the soundness of the program is not guaranteed.
 * [`ExecutorchBackendConfig.passes`](https://github.com/pytorch/executorch/blob/main/exir/capture/_config.py#L40). If added here, the pass will be part of the lowering process from backend dialect to `ExecutorchProgram`.

-Example: one of such passes is `QuantFusion`. This pass takes a "canonical quantization pattern", ie. "dequant - some_op - quant" and fuse this pattern into a single operator that is backend specific, i.e. `quantized_decomposed::some_op`. You can find more details [here](../tutorials/short_term_quantization_flow.md). Another simpler example is [here](https://github.com/pytorch/executorch/blob/main/exir/passes/replace_edge_with_backend_pass.py#L20) where we replace sym_size operators to the ones that are understood by Executorch.
+Example: one of such passes is `QuantFusion`. This pass takes a "canonical quantization pattern", ie. "dequant - some_op - quant" and fuse this pattern into a single operator that is backend specific, i.e. `quantized_decomposed::some_op`. You can find more details [here](../tutorials/short_term_quantization_flow.md). Another simpler example is [here](https://github.com/pytorch/executorch/blob/main/exir/passes/replace_edge_with_backend_pass.py#L20) where we replace sym_size operators to the ones that are understood by ExecuTorch.

 ## API

@@ -38,7 +38,7 @@ Then the operator can be accessed/used from the passes. The `CompositeImplicitAu
 2. Ensures the retracability of `ExportProgram`. Once retraced, the backend operator will be decomposed into the ATen ops used in the pattern.

 ## Op Set
-Unlike edge dialect where we have a well defined op set, for backend dialect, since it is target-aware we will be allowing user to use our API to register target-aware ops and they will be grouped by namespaces. Here are some examples: `executorch_prims` are ops that are used by Executorch runtime to perform operation on `SymInt`s. `quantized_decomposed` are ops that fuses edge operators for quantization purpose and are meaningful to targets that support quantization.
+Unlike edge dialect where we have a well defined op set, for backend dialect, since it is target-aware we will be allowing user to use our API to register target-aware ops and they will be grouped by namespaces. Here are some examples: `executorch_prims` are ops that are used by ExecuTorch runtime to perform operation on `SymInt`s. `quantized_decomposed` are ops that fuses edge operators for quantization purpose and are meaningful to targets that support quantization.

 * `executorch_prims::add.int(SymInt a, SymInt b) -> SymInt`
   * pattern: builtin.add
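The `QuantFusion` behavior described in this diff's context, matching a canonical `dequant → some_op → quant` chain and collapsing it into one backend-specific `quantized_decomposed::some_op`, can be sketched over a toy list-of-nodes IR (the real pass operates on an EXIR graph; all names here are illustrative):

```python
# Toy IR: each node is (op_name, args). The real pass walks an EXIR graph,
# but the matching logic has the same shape.
def fuse_quant_pattern(nodes):
    fused, i = [], 0
    while i < len(nodes):
        # Look for the canonical "dequant -> some_op -> quant" window.
        if (i + 2 < len(nodes)
                and nodes[i][0] == "dequantize"
                and nodes[i + 2][0] == "quantize"):
            op = nodes[i + 1][0]
            # Collapse the three nodes into one backend-specific op.
            fused.append((f"quantized_decomposed::{op}", nodes[i][1]))
            i += 3
        else:
            fused.append(nodes[i])
            i += 1
    return fused

nodes = [("dequantize", ("x",)), ("add", ()), ("quantize", ())]
print(fuse_quant_pattern(nodes))  # [('quantized_decomposed::add', ('x',))]
```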

docs/website/docs/tutorials/00_setting_up_executorch.md

Lines changed: 3 additions & 3 deletions
@@ -1,6 +1,6 @@
-# Setting up Executorch
+# Setting up ExecuTorch

-This is a tutorial for building and installing Executorch from the GitHub repository.
+This is a tutorial for building and installing ExecuTorch from the GitHub repository.

 ## AOT Setup [(Open on Google Colab)](https://colab.research.google.com/drive/1m8iU4y7CRVelnnolK3ThS2l2gBo7QnAP#scrollTo=1o2t3LlYJQY5)

@@ -125,4 +125,4 @@ or execute the binary directly from the `--show-output` path shown when building
 ## More Examples

 The [`executorch/examples`](https://github.com/pytorch/executorch/blob/main/examples) directory contains useful examples with a guide to lower and run
-popular models like MobileNet V3, Torchvision ViT, Wav2Letter, etc. on Executorch.
+popular models like MobileNet V3, Torchvision ViT, Wav2Letter, etc. on ExecuTorch.

docs/website/docs/tutorials/aten_ops_and_aten_mode.md

Lines changed: 15 additions & 15 deletions
@@ -3,40 +3,40 @@

 ## Introduction

-Executorch supports a subset of ATen-compliant operators.
+ExecuTorch supports a subset of ATen-compliant operators.
 ATen-compliant operators are those defined in
 [`native_functions.yaml`](https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/native/native_functions.yaml),
 with their native functions (or kernels, we use these two terms interchangeably)
-either defined in ATen library or other user defined libraries. The ATen-compliant operators supported by Executorch have these traits (actually same for custom ops):
+either defined in ATen library or other user defined libraries. The ATen-compliant operators supported by ExecuTorch have these traits (actually same for custom ops):
 1. Out variant, means these ops take an `out` argument
 2. Functional except `out`. These ops shouldn't mutate input tensors other than `out`, shouldn't create aliasing views.

 To give an example, `aten::add_.Tensor` is not supported since it mutates an input tensor, `aten::add.out` is supported.
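The two traits above can be illustrated with plain Python (lists standing in for tensors; a sketch of the calling conventions, not ExecuTorch code):

```python
def add_out(a, b, out):
    """Out variant (like aten::add.out): writes into `out`, mutates nothing else."""
    for i in range(len(out)):
        out[i] = a[i] + b[i]
    return out

def add_(a, b):
    """In-place variant (like aten::add_.Tensor): mutates its first input."""
    for i in range(len(a)):
        a[i] += b[i]
    return a

a, b = [1.0, 1.0], [1.0, 1.0]
out = [0.0, 0.0]
add_out(a, b, out)
print(out, a)   # [2.0, 2.0] [1.0, 1.0] -- inputs untouched, supported shape
add_(a, b)
print(a)        # [2.0, 2.0] -- input mutated, the unsupported shape
```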

-ATen mode is a build-time option to link ATen library into Executorch runtime, so those registered ATen-compliant ops can use their original ATen kernels.
+ATen mode is a build-time option to link ATen library into ExecuTorch runtime, so those registered ATen-compliant ops can use their original ATen kernels.

 On the other hand we need to provide our custom kernels if ATen mode is off (a.k.a. lean mode).

-In the next section we will walk through the steps to register ATen-compliant ops into Executorch runtime.
+In the next section we will walk through the steps to register ATen-compliant ops into ExecuTorch runtime.

 ## Step by step guide
 There are two branches for this use case:
 * ATen mode. In this case we expect the exported model to be able to run with ATen kernels .
 * Lean mode. This requires ATen-compliant op implementations using `ETensor`.

-In a nutshell, we need the following steps in order for a ATen-compliant op to work on Executorch:
+In a nutshell, we need the following steps in order for a ATen-compliant op to work on ExecuTorch:

 #### ATen mode:
 1. Define a target for selective build (`et_operator_library` macro)
 2. Pass this target to codegen using `executorch_generated_lib` macro
-3. Hookup the generated lib into Executorch runtime.
+3. Hookup the generated lib into ExecuTorch runtime.

 For more details on how to use selective build, check [Selective Build](https://www.internalfb.com/intern/staticdocs/executorch/docs/tutorials/custom_ops/#selective-build).
 #### Lean mode:
 1. Declare the op name in `functions.yaml`. Detail instruction can be found in [Declare the operator in a YAML file](https://www.internalfb.com/code/fbsource/xplat/executorch/kernels/portable/README.md).
-2. (not required if using ATen mode) Implement the kernel for your operator using `ETensor`. Executorch provides a portable library for frequently used ATen-compliant ops. Check if the op you need is already there, or you can write your own kernel.
+2. (not required if using ATen mode) Implement the kernel for your operator using `ETensor`. ExecuTorch provides a portable library for frequently used ATen-compliant ops. Check if the op you need is already there, or you can write your own kernel.
 3. Specify the kernel namespace and function name in `functions.yaml` so codegen knows how to bind operator to its kernel.
-4. Let codegen machinery generate code for either ATen mode or lean mode, and hookup the generated lib into Executorch runtime.
+4. Let codegen machinery generate code for either ATen mode or lean mode, and hookup the generated lib into ExecuTorch runtime.

 ### Case Study
 Let's say a model uses an ATen-compliant operator `aten::add.out`.
@@ -90,7 +90,7 @@ The corresponding `functions.yaml` for this operator looks like:
 ```
 Notice that there are some caveats:
 #### Caveats
-* `dispatch` and `CPU` are legacy fields and they don't mean anything in Executorch context.
+* `dispatch` and `CPU` are legacy fields and they don't mean anything in ExecuTorch context.
 * Namespace `aten` is omitted.
 * We don't need to write `aten::add.out` function schema because we will use the schema definition in `native_functions.yaml` as our source of truth.
 * Kernel namespace in the yaml file is `custom` instead of `custom::native`. This is because codegen will append a `native` namespace automatically. It also means the kernel always needs to be defined under `<name>::native`.
@@ -121,9 +121,9 @@ executorch_generated_lib(
 )
 ```
 ### Usage of generated lib
-In the case study above, eventually we have `add_lib` which is a C++ library responsible to register `aten::add.out` into Executorch runtime.
+In the case study above, eventually we have `add_lib` which is a C++ library responsible to register `aten::add.out` into ExecuTorch runtime.

-In our Executorch binary target, add `add_lib` as a dependency:
+In our ExecuTorch binary target, add `add_lib` as a dependency:
 ```python
 cxx_binary(
     name = "executorch_bin",
@@ -138,15 +138,15 @@ cxx_binary(
 To facilitate custom operator registration, we provide the following APIs:

 - `functions.yaml`: ATen-compliant operator schema and kernel metadata are defined in this file.
-- `executorch_generated_lib`: the Buck rule to call Executorch codegen system and encapsulate generated C++ source files into libraries. If only include ATen-compliant operators, only one library will be generated:
-  - `<name>`: contains C++ source files to register ATen-compliant operators. Required by Executorch runtime.
+- `executorch_generated_lib`: the Buck rule to call ExecuTorch codegen system and encapsulate generated C++ source files into libraries. If only include ATen-compliant operators, only one library will be generated:
+  - `<name>`: contains C++ source files to register ATen-compliant operators. Required by ExecuTorch runtime.
   - Input: most of the input fields are self-explainatory.
     - `deps`: kernel libraries - can be custom kernels or portable kernels (see portable kernel library [README.md](https://fburl.com/code/zlgs6zzf) on how to add more kernels) - needs to be provided. Selective build related targets should also be passed into the generated libraries through `deps`.
     - `define_static_targets`: if true we will generate a `<name>_static` library with static linkage. See docstring for more information.
   - `functions_yaml_target`: the target pointing to `functions.yaml`. See `ATen-compliant Operator Registration` section for more details.


-We also provide selective build system to allow user to select operators from both `functions.yaml` and `custom_ops.yaml` into Executorch build. See [Selective Build](https://www.internalfb.com/intern/staticdocs/executorch/docs/tutorials/custom_ops/#selective-build) section.
+We also provide selective build system to allow user to select operators from both `functions.yaml` and `custom_ops.yaml` into ExecuTorch build. See [Selective Build](https://www.internalfb.com/intern/staticdocs/executorch/docs/tutorials/custom_ops/#selective-build) section.

@@ -162,7 +162,7 @@ Nov 14 16:48:07 devvm11149.prn0.facebook.com bento[1985271]: [354870826409]Execu
 Nov 14 16:48:07 devvm11149.prn0.facebook.com bento[1985271]: [354870830000]Executor.cpp:267 In function init(), assert failed (num_missing_ops == 0): There are 1 operators missing from registration to Executor. See logs for details
 ```

-This error message indicates that the operators are not registered into the Executorch runtime.
+This error message indicates that the operators are not registered into the ExecuTorch runtime.

 For lean mode mode, please make sure the ATen-compliant operator schema is being added to your `functions.yaml`. For more guidance of how to write a `functions.yaml` file, please refer to [Declare the operator in a YAML file](https://www.internalfb.com/code/fbsource/xplat/executorch/kernels/portable/README.md).
docs/website/docs/tutorials/backend_delegate.md

Lines changed: 5 additions & 5 deletions
@@ -76,16 +76,16 @@ __ET_NODISCARD Error register_backend(const Backend& backend);
 ```

-# How to delegate a PyTorch module to a different backend in Executorch for Model Authors
+# How to delegate a PyTorch module to a different backend in ExecuTorch for Model Authors

 This note is to demonstrate the basic end-to-end flow of backend delegation in
-the Executorch runtime.
+the ExecuTorch runtime.

 At a high level, here are the steps needed for delegation:

-1. Add your backend to Executorch.
+1. Add your backend to ExecuTorch.
 2. Frontend: lower the PyTorch module or part of the module to a backend.
-3. Deployment: load and run the lowered module through Executorch runtime
+3. Deployment: load and run the lowered module through ExecuTorch runtime
    interface.

@@ -247,7 +247,7 @@ with open(save_path, "wb") as f:

 ## Runtime

-The serialized flatbuffer model is loaded by the Executorch runtime. The
+The serialized flatbuffer model is loaded by the ExecuTorch runtime. The
 preprocessed blob is directly stored in the flatbuffer, which is loaded into a
 call to the backend's `init()` function during model initialization stage. At
 the model execution stage, the initialized handled can be executed through the
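The lifecycle this diff describes, where a backend is registered once, its `init()` consumes the preprocessed blob at model load, and execution goes through the initialized handle, can be sketched as a toy registry in Python (illustrative only; the real API is the C++ `register_backend` shown in this file's first hunk):

```python
# Toy backend registry mirroring the init()/execute() contract described above.
class ToyBackend:
    def init(self, preprocessed_blob):
        # Real backends parse the AOT-produced blob into an executable handle.
        self.handle = f"compiled({preprocessed_blob})"

    def execute(self, args):
        return f"{self.handle} ran with {args}"

_registry = {}

def register_backend(name, backend):
    # One-time registration, analogous to the C++ register_backend().
    _registry[name] = backend

def run_delegated(name, blob, args):
    backend = _registry[name]
    backend.init(blob)            # model initialization stage
    return backend.execute(args)  # model execution stage

register_backend("toy", ToyBackend())
print(run_delegated("toy", "blob", [1, 2]))  # compiled(blob) ran with [1, 2]
```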

docs/website/docs/tutorials/bundled_program.md

Lines changed: 3 additions & 3 deletions
@@ -19,16 +19,16 @@ We need the pointer to executorch program to do the execution. To unify the proc
 ```c++

 /**
- * Finds the serialized Executorch program data in the provided file data.
+ * Finds the serialized ExecuTorch program data in the provided file data.
  *
  * The returned buffer is appropriate for constructing a
  * torch::executor::Program.
  *
  * Calling this is only necessary if the file could be a bundled program. If the
- * file will only contain an unwrapped Executorch program, callers can construct
+ * file will only contain an unwrapped ExecuTorch program, callers can construct
  * torch::executor::Program with file_data directly.
  *
- * @param[in] file_data The contents of an Executorch program or bundled program
+ * @param[in] file_data The contents of an ExecuTorch program or bundled program
  * file.
  * @param[in] file_data_len The length of file_data, in bytes.
  * @param[out] out_program_data The serialized Program data, if found.
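The contract this doc comment describes, peel the bundle wrapper if present and otherwise treat the bytes as a bare program, follows a common unwrap-or-passthrough pattern that can be sketched generically (the `BUNDLE_MAGIC` marker below is hypothetical, not the real flatbuffer layout):

```python
BUNDLE_MAGIC = b"BP01"  # hypothetical wrapper marker, not the real format

def get_program_data(file_data: bytes) -> bytes:
    """Return serialized program data, unwrapping a bundle if one is present."""
    if file_data.startswith(BUNDLE_MAGIC):
        # Bundled program: program bytes follow the wrapper header.
        return file_data[len(BUNDLE_MAGIC):]
    # Unwrapped program: usable as-is, mirroring the fallback in the comment.
    return file_data

print(get_program_data(b"BP01<pte>"))  # b'<pte>'
print(get_program_data(b"<pte>"))      # b'<pte>'
```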

docs/website/docs/tutorials/cmake_build_system.md

Lines changed: 3 additions & 3 deletions
@@ -30,7 +30,7 @@ useful to embedded systems users.
 ## One-time setup

 1. Clone the repo and install buck2 as described in the "Runtime Setup" section
-   of [Setting up Executorch](00_setting_up_executorch.md#runtime-setup)
+   of [Setting up ExecuTorch](00_setting_up_executorch.md#runtime-setup)
    - `buck2` is necessary because the CMake build system runs `buck2` commands
      to extract source lists from the primary build system. It will be possible
      to configure the CMake system to avoid calling `buck2`, though.
@@ -40,7 +40,7 @@ useful to embedded systems users.
    calls to extract source lists from `buck2`. Consider doing this `pip
    install` inside your conda environment if you created one during AOT Setup
    (see [Setting up
-   Executorch](00_setting_up_executorch.md#aot-setup-open-on-google-colab)).
+   ExecuTorch](00_setting_up_executorch.md#aot-setup-open-on-google-colab)).
 1. Install CMake version 3.19 or later

 ## Configure the CMake build
@@ -84,7 +84,7 @@ cmake --build cmake-out -j9

 First, generate an `add.pte` or other ExecuTorch program file using the
 instructions in the "AOT Setup" section of
-[Setting up Executorch](00_setting_up_executorch.md#aot-setup-open-on-google-colab).
+[Setting up ExecuTorch](00_setting_up_executorch.md#aot-setup-open-on-google-colab).

 Then, pass it to the commandline tool: