backends/xnnpack/README.md (+3 -3)
@@ -92,7 +92,7 @@ After lowering to the XNNPACK Program, we can then prepare it for executorch and
 ### Running the XNNPACK Model with CMake
-After exporting the XNNPACK Delegated model, we can now try running it with example inputs using CMake. We can build and use the xnn_executor_runner, which is a sample wrapper for the ExecuTorch Runtime and XNNPACK Backend. We first begin by configuring the CMake build like such:
+After exporting the XNNPACK Delegated model, we can now try running it with example inputs using CMake. We can build and use the `executor_runner`, which is a sample wrapper for the ExecuTorch Runtime and XNNPACK Backend. We first begin by configuring the CMake build like such:
 ```bash
 # cd to the root of executorch repo
 cd executorch
@@ -119,9 +119,9 @@ Then you can build the runtime components with
-Now you should be able to find the executable built at `./cmake-out/backends/xnnpack/xnn_executor_runner` you can run the executable with the model you generated as such
+Now you should be able to find the executable built at `./cmake-out/executor_runner` you can run the executable with the model you generated as such
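The configure and build commands themselves sit outside the context shown in these hunks. As a rough, non-authoritative sketch of the sequence the README describes (the `EXECUTORCH_BUILD_XNNPACK` flag and the `executor_runner` target name are assumptions, not quoted from the file):

```bash
# cd to the root of executorch repo
cd executorch

# Configure the CMake build with the XNNPACK backend enabled
# (assumed flag; the README itself lists the authoritative options).
cmake -Bcmake-out -DEXECUTORCH_BUILD_XNNPACK=ON .

# Build the runtime components, including the renamed runner
# (assumed target name).
cmake --build cmake-out --target executor_runner -j9

# The executable should now be at ./cmake-out/executor_runner.
```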
docs/source/backend-delegates-xnnpack-reference.md (+1 -1)
@@ -70,7 +70,7 @@ Since weight packing creates an extra copy of the weights inside XNNPACK, We fre

 When executing the XNNPACK subgraphs, we prepare the tensor inputs and outputs and feed them to the XNNPACK runtime graph. After executing the runtime graph, the output pointers are filled with the computed tensors.
 #### **Profiling**
-We have enabled basic profiling for the XNNPACK delegate that can be enabled with the compiler flag `-DEXECUTORCH_ENABLE_EVENT_TRACER` (add `-DENABLE_XNNPACK_PROFILING` for additional details). With ExecuTorch's Developer Tools integration, you can also now use the Developer Tools to profile the model. You can follow the steps in [Using the ExecuTorch Developer Tools to Profile a Model](./tutorials/devtools-integration-tutorial) on how to profile ExecuTorch models and use Developer Tools' Inspector API to view XNNPACK's internal profiling information. An example implementation is available in the `xnn_executor_runner` (see [tutorial here](tutorial-xnnpack-delegate-lowering.md#profiling)).
+We have enabled basic profiling for the XNNPACK delegate that can be enabled with the compiler flag `-DEXECUTORCH_ENABLE_EVENT_TRACER` (add `-DENABLE_XNNPACK_PROFILING` for additional details). With ExecuTorch's Developer Tools integration, you can also now use the Developer Tools to profile the model. You can follow the steps in [Using the ExecuTorch Developer Tools to Profile a Model](./tutorials/devtools-integration-tutorial) on how to profile ExecuTorch models and use Developer Tools' Inspector API to view XNNPACK's internal profiling information. An example implementation is available in the `executor_runner` (see [tutorial here](tutorial-xnnpack-delegate-lowering.md#profiling)).

 [comment]: <>(TODO: Refactor quantizer to a more official quantization doc)
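As a concrete illustration of those compiler flags, a profiling-enabled build might look like the following sketch. The `EXECUTORCH_ENABLE_EVENT_TRACER` and `ENABLE_XNNPACK_PROFILING` flags come from the text above, `EXECUTORCH_BUILD_DEVTOOLS` from the tutorial change further down; the XNNPACK flag is an assumption:

```bash
# Configure with event tracing and XNNPACK profiling enabled
# (EXECUTORCH_BUILD_XNNPACK is assumed, not quoted from the docs).
cmake -Bcmake-out \
  -DEXECUTORCH_BUILD_XNNPACK=ON \
  -DEXECUTORCH_BUILD_DEVTOOLS=ON \
  -DEXECUTORCH_ENABLE_EVENT_TRACER=ON \
  -DENABLE_XNNPACK_PROFILING=ON \
  .
cmake --build cmake-out -j9
```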
docs/source/tutorial-xnnpack-delegate-lowering.md (+5 -5)
@@ -141,7 +141,7 @@ Note in the example above,

 The generated model file will be named `[model_name]_xnnpack_[qs8/fp32].pte` depending on the arguments supplied.
 ## Running the XNNPACK Model with CMake
-After exporting the XNNPACK Delegated model, we can now try running it with example inputs using CMake. We can build and use the xnn_executor_runner, which is a sample wrapper for the ExecuTorch Runtime and XNNPACK Backend. We first begin by configuring the CMake build like such:
+After exporting the XNNPACK Delegated model, we can now try running it with example inputs using CMake. We can build and use the `executor_runner`, which is a sample wrapper for the ExecuTorch Runtime and XNNPACK Backend. We first begin by configuring the CMake build like such:
 ```bash
 # cd to the root of executorch repo
 cd executorch
@@ -168,15 +168,15 @@ Then you can build the runtime components with
-Now you should be able to find the executable built at `./cmake-out/backends/xnnpack/xnn_executor_runner` you can run the executable with the model you generated as such
+Now you should be able to find the executable built at `./cmake-out/executor_runner` you can run the executable with the model you generated as such

 You can build the XNNPACK backend [CMake target](https://github.com/pytorch/executorch/blob/main/backends/xnnpack/CMakeLists.txt#L83), and link it with your application binary such as an Android or iOS application. For more information on this you may take a look at this [resource](demo-apps-android.md) next.

 ## Profiling
-To enable profiling in the `xnn_executor_runner` pass the flags `-DEXECUTORCH_ENABLE_EVENT_TRACER=ON` and `-DEXECUTORCH_BUILD_DEVTOOLS=ON` to the build command (add `-DENABLE_XNNPACK_PROFILING=ON` for additional details). This will enable ETDump generation when running the inference and enables command line flags for profiling (see `xnn_executor_runner --help` for details).
+To enable profiling in the `executor_runner` pass the flags `-DEXECUTORCH_ENABLE_EVENT_TRACER=ON` and `-DEXECUTORCH_BUILD_DEVTOOLS=ON` to the build command (add `-DENABLE_XNNPACK_PROFILING=ON` for additional details). This will enable ETDump generation when running the inference and enables command line flags for profiling (see `executor_runner --help` for details).
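Assuming a build with the flags above, a profiling run might look like the following sketch. Only `--help` is taken from the text; the `--etdump_path` flag name and the model filename are assumptions for illustration:

```bash
# List the profiling-related command line flags the text mentions.
./cmake-out/executor_runner --help

# Run inference and emit an ETDump for the Inspector API
# (flag name and model path are assumed for illustration).
./cmake-out/executor_runner \
  --model_path=./mv2_xnnpack_qs8.pte \
  --etdump_path=./model.etdump
```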

-Once we have the model binary (pte) file, then let's run it with ExecuTorch runtime using the `xnn_executor_runner`. With cmake, you first configure your cmake with the following:
+Once we have the model binary (pte) file, then let's run it with ExecuTorch runtime using the `executor_runner`. With cmake, you first configure your cmake with the following:

-After exporting the XNNPACK Delegated model, we can now try running it with example inputs using CMake. We can build and use the xnn_executor_runner, which is a sample wrapper for the ExecuTorch Runtime and XNNPACK Backend. We first begin by configuring the CMake build like such:
+After exporting the XNNPACK Delegated model, we can now try running it with example inputs using CMake. We can build and use the `executor_runner`, which is a sample wrapper for the ExecuTorch Runtime and XNNPACK Backend. We first begin by configuring the CMake build like such:
 ```bash
 # cd to the root of executorch repo
 cd executorch
@@ -107,9 +107,9 @@ Then you can build the runtime components with
-Now you should be able to find the executable built at `./cmake-out/backends/xnnpack/xnn_executor_runner` you can run the executable with the model you generated as such
+Now you should be able to find the executable built at `./cmake-out/executor_runner` you can run the executable with the model you generated as such
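The run command that "as such" points to is elided from this diff. A plausible invocation, with a model filename assumed from the tutorial's `[model_name]_xnnpack_[qs8/fp32].pte` naming scheme, would be:

```bash
# Run the sample runner against the exported model
# (filename assumed; yours depends on the export arguments).
./cmake-out/executor_runner --model_path=./mv2_xnnpack_fp32.pte
```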