
Commit ce74f8e, authored on Mar 28, 2025 by ynimmaga, cavusmustafa, alexsu52, daniil-lyakhov, and kimishpatel
Openvino backend for Executorch to enable inference on Intel CPUs, GPUs, NPUs (#8573)
### Summary

This PR introduces support for the OpenVINO backend in ExecuTorch, enabling accelerated inference on Intel hardware, including CPU, GPU, and NPU devices. OpenVINO optimizes deep learning model performance by leveraging hardware-specific enhancements. The PR also introduces the OpenVINO quantizer, built on NNCF (Neural Network Compression Framework), for model optimization. The functionality has been tested on several torchvision and timm models, with plans to test and enable support for additional model types in the future. Below is a description of the features:

- **OpenVINO Backend Integration**: The `backends/openvino` directory includes build scripts, AOT components (partitioner, preprocessor), the OpenVINO quantizer, and runtime backend files that register the OpenVINO backend and manage interactions with OpenVINO's inference engine, including model execution, device-specific optimizations, and backend initialization. It also contains tests for layers and models. See `backends/openvino/README.md` for usage.
- **OpenVINO Examples**: The `examples/openvino` directory provides scripts for AOT optimization, quantization, and a C++ executor example. It includes instructions for optimizing models, quantizing them, and exporting ExecuTorch programs with OpenVINO optimizations. Refer to `examples/openvino/README.md` for details.
- **E2E Tutorial**: Added an end-to-end tutorial in `docs/source/build-run-openvino.md`.

### Test plan

This PR is tested with the OpenVINO backend on Intel Core Ultra 7 processors for CPU, GPU, and NPU devices. To run the layer tests and model tests, please refer to `backends/openvino/tests/README.md`.

cc: @yury-gorbachev @alexsu52 @cavusmustafa @daniil-lyakhov @suryasidd @AlexKoff88 @MaximProshin @AlexanderDokuchaev

---------

Co-authored-by: Cavus Mustafa <mustafa.cavus@intel.com>
Co-authored-by: Aleksandr Suslov <alexander.suslov@intel.com>
Co-authored-by: dlyakhov <daniil.lyakhov@intel.com>
Co-authored-by: Kimish Patel <kimishpatel@fb.com>
Co-authored-by: suryasidd <surya.siddharth.pemmaraju@intel.com>
1 parent 6bc3e34 · commit ce74f8e

37 files changed: +2906 −1 lines
 

`.ci/scripts/setup-openvino.sh` (new file, +28)

```bash
#!/bin/bash
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the BSD-style license found in the
# LICENSE file in the root directory of this source tree.

set -ex

# shellcheck source=/dev/null
source "$(dirname "${BASH_SOURCE[0]}")/utils.sh"

git clone https://github.com/openvinotoolkit/openvino.git
cd openvino && git checkout releases/2025/1
git submodule update --init --recursive
sudo ./install_build_dependencies.sh
mkdir build && cd build
cmake .. -DCMAKE_BUILD_TYPE=Release -DENABLE_PYTHON=ON
make -j$(nproc)

cd ..
cmake --install build --prefix dist

source dist/setupvars.sh
cd ../backends/openvino
pip install -r requirements.txt
cd scripts
./openvino_build.sh --enable_python
```
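The `source "$(dirname "${BASH_SOURCE[0]}")/utils.sh"` line uses a common bash idiom: `BASH_SOURCE[0]` is the path of the current script (even when it is sourced rather than executed), so `dirname` yields the directory containing it, letting sibling files be referenced regardless of the caller's working directory. A minimal self-contained sketch of the idiom:

```shell
#!/bin/bash
# Resolve the directory that contains this script, so sibling files
# (like utils.sh above) can be sourced no matter where the caller's cwd is.
# ${BASH_SOURCE[0]:-$0} falls back to $0 if BASH_SOURCE is unset.
script_dir="$(dirname "${BASH_SOURCE[0]:-$0}")"
echo "script directory: ${script_dir}"
```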

`.ci/scripts/test_openvino.sh` (new file, +16)

```bash
#!/bin/bash
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the BSD-style license found in the
# LICENSE file in the root directory of this source tree.

set -ex

# shellcheck source=/dev/null
source "$(dirname "${BASH_SOURCE[0]}")/utils.sh"

source openvino/dist/setupvars.sh
cd backends/openvino/tests
python test_runner.py --test_type ops
python test_runner.py --test_type models
```
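`test_runner.py` itself is not part of this diff; based only on the two invocations above, its command-line surface presumably resembles the following hypothetical argparse sketch (the `--test_type` choices `ops` and `models` come from the calls above; everything else is assumed and the real script likely takes more options):

```python
import argparse


def make_parser() -> argparse.ArgumentParser:
    # Hypothetical reconstruction of test_runner.py's CLI, inferred only from
    # the two invocations in the CI script above.
    parser = argparse.ArgumentParser(description="Run OpenVINO backend tests")
    parser.add_argument(
        "--test_type",
        choices=["ops", "models"],
        required=True,
        help="Which suite to run: per-op layer tests or full-model tests",
    )
    return parser


args = make_parser().parse_args(["--test_type", "ops"])
print(f"running {args.test_type} tests")  # running ops tests
```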

`.github/workflows/pull.yml` (+22)

```diff
@@ -736,3 +736,25 @@ jobs:
         conda activate "${CONDA_ENV}"

         # placeholder for mediatek to add more tests
+
+  test-openvino-linux:
+    name: test-openvino-linux
+    uses: pytorch/test-infra/.github/workflows/linux_job_v2.yml@main
+    permissions:
+      id-token: write
+      contents: read
+    strategy:
+      fail-fast: false
+    with:
+      runner: linux.2xlarge
+      docker-image: executorch-ubuntu-22.04-gcc9
+      submodules: 'true'
+      ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
+      timeout: 90
+      script: |
+        # The generic Linux job chooses to use base env, not the one setup by the image
+        CONDA_ENV=$(conda env list --json | jq -r ".envs | .[-1]")
+        conda activate "${CONDA_ENV}"
+
+        PYTHON_EXECUTABLE=python bash .ci/scripts/setup-openvino.sh
+        PYTHON_EXECUTABLE=python bash .ci/scripts/test_openvino.sh
```
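The `jq -r ".envs | .[-1]"` step in the workflow script selects the last environment from the JSON that `conda env list --json` emits. A small Python equivalent, using a canned JSON payload in place of the real conda output (the paths below are illustrative), shows what that selection does:

```python
import json

# Canned stand-in for `conda env list --json` output; real paths will differ.
conda_json = '{"envs": ["/opt/conda", "/opt/conda/envs/ci-env"]}'

# Equivalent of `jq -r ".envs | .[-1]"`: take the last entry of the "envs" array.
last_env = json.loads(conda_json)["envs"][-1]
print(last_env)  # /opt/conda/envs/ci-env
```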

`.lintrunner.toml` (+2)

```diff
@@ -299,12 +299,14 @@ include_patterns = [
     # TODO(https://github.com/pytorch/executorch/issues/7441): Gradually start enabling all folders.
     # 'backends/**/*.py',
     'backends/arm/**/*.py',
+    'backends/openvino/**/*.py',
     'build/**/*.py',
     'codegen/**/*.py',
     # 'devtools/**/*.py',
     'devtools/visualization/**/*.py',
     'docs/**/*.py',
     # 'examples/**/*.py',
+    'examples/openvino/**/*.py',
     # 'exir/**/*.py',
     # 'extension/**/*.py',
     'kernels/**/*.py',
```
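The two added patterns opt the new directories into linting using glob syntax, where `**` spans directory levels. Python's stdlib `fnmatch` is close enough to illustrate the intent (note that `fnmatch` is not fully path-aware, so this is an approximation of lintrunner's matcher, not its actual implementation):

```python
import fnmatch

pattern = "backends/openvino/**/*.py"

# A nested source file matches: ** covers the intermediate directories.
print(fnmatch.fnmatch("backends/openvino/tests/test_runner.py", pattern))  # True

# A file under a different backend does not match.
print(fnmatch.fnmatch("backends/xnnpack/partition/config.py", pattern))  # False
```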

`CMakeLists.txt` (+10)

```diff
@@ -204,6 +204,8 @@ option(EXECUTORCH_BUILD_MPS "Build the MPS backend" OFF)

 option(EXECUTORCH_BUILD_NEURON "Build the backends/mediatek directory" OFF)

+option(EXECUTORCH_BUILD_OPENVINO "Build the Openvino backend" OFF)
+
 option(EXECUTORCH_BUILD_PYBIND "Build the Python Bindings" OFF)

 option(EXECUTORCH_BUILD_QNN "Build the Qualcomm backend" OFF)
@@ -715,6 +717,10 @@ if(EXECUTORCH_BUILD_NEURON)
   add_subdirectory(${CMAKE_CURRENT_SOURCE_DIR}/backends/mediatek)
 endif()

+if(EXECUTORCH_BUILD_OPENVINO)
+  add_subdirectory(${CMAKE_CURRENT_SOURCE_DIR}/backends/openvino)
+endif()
+
 if(EXECUTORCH_BUILD_QNN)
   add_subdirectory(${CMAKE_CURRENT_SOURCE_DIR}/backends/qualcomm)
 endif()
@@ -817,6 +823,10 @@ if(EXECUTORCH_BUILD_PYBIND)
     list(APPEND _dep_libs mpsdelegate)
   endif()

+  if(EXECUTORCH_BUILD_OPENVINO)
+    list(APPEND _dep_libs openvino_backend)
+  endif()
+
   if(EXECUTORCH_BUILD_XNNPACK)
     # need to explicitly specify XNNPACK and microkernels-prod
     # here otherwise uses XNNPACK and microkernel-prod symbols from libtorch_cpu
```
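With the option wired into the top-level build, a configure that enables the backend might look like the following configuration fragment (a sketch: the build directory name and the other flags are illustrative; only `EXECUTORCH_BUILD_OPENVINO` comes from this diff):

```bash
# Hypothetical top-level configure enabling the new backend option.
cmake -S . -B cmake-out \
  -DCMAKE_BUILD_TYPE=Release \
  -DEXECUTORCH_BUILD_OPENVINO=ON
cmake --build cmake-out -j"$(nproc)"
```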

`README.md` (+1)

```diff
@@ -29,6 +29,7 @@ Platform Support:
 - Arm
 - Cadence
 - MediaTek
+- OpenVINO
 - Qualcomm
 - Vulkan
 - XNNPACK
```

`backends/openvino/CMakeLists.txt` (new file, +75)

```cmake
# Copyright (c) Intel Corporation
#
# Licensed under the BSD License (the "License"); you may not use this file
# except in compliance with the License. See the license file found in the
# LICENSE file in the root directory of this source tree.

# Set minimum required CMake version
cmake_minimum_required(VERSION 3.19)

# Set project name
project(openvino_backend_project)

# Set C++ standard
set(CMAKE_CXX_STANDARD 17)
set(CMAKE_CXX_STANDARD_REQUIRED ON)

# Ensure compile_commands.json is generated
set(CMAKE_EXPORT_COMPILE_COMMANDS ON)

# Set up EXECUTORCH_ROOT if not already set
if(NOT EXECUTORCH_ROOT)
  set(EXECUTORCH_ROOT ${CMAKE_CURRENT_SOURCE_DIR}/../..)
endif()

# Define common include directories
set(COMMON_INCLUDE_DIRS ${EXECUTORCH_ROOT}/..)

# Include utility CMake scripts from ExecuTorch
include(${EXECUTORCH_ROOT}/tools/cmake/Utils.cmake)

# Find OpenVINO libraries
find_package(OpenVINO REQUIRED)

# Define OpenVINO backend as a static library
add_library(openvino_backend STATIC .)

# Enable exceptions and RTTI for OpenVINO backend
target_compile_options(openvino_backend PRIVATE -frtti -fexceptions)

# Include ExecuTorch directories
target_include_directories(openvino_backend PUBLIC ${COMMON_INCLUDE_DIRS})

# Link OpenVINO and ExecuTorch core libraries
target_link_libraries(openvino_backend PRIVATE openvino::runtime executorch_core)

# Add source files for OpenVINO backend
target_sources(openvino_backend PRIVATE ${CMAKE_CURRENT_LIST_DIR}/runtime/OpenvinoBackend.cpp)

target_link_options_shared_lib(openvino_backend)

if(EXECUTORCH_BUILD_OPENVINO_EXECUTOR_RUNNER)
  # Build executor runner binary for openvino backend
  list(APPEND openvino_executor_runner_libs openvino_backend executorch)

  set(_openvino_executor_runner__srcs
      ${EXECUTORCH_ROOT}/examples/portable/executor_runner/executor_runner.cpp
      ${EXECUTORCH_ROOT}/extension/data_loader/file_data_loader.cpp
      ${EXECUTORCH_ROOT}/extension/evalue_util/print_evalue.cpp
      ${EXECUTORCH_ROOT}/extension/runner_util/inputs.cpp
      ${EXECUTORCH_ROOT}/extension/runner_util/inputs_portable.cpp
  )
  add_executable(openvino_executor_runner ${_openvino_executor_runner__srcs})

  target_link_libraries(
    openvino_executor_runner gflags portable_ops_lib ${openvino_executor_runner_libs}
  )
  target_compile_options(openvino_executor_runner PUBLIC ${_common_compile_options})
endif()

# Install OpenVINO backend library to the lib directory
install(TARGETS openvino_backend DESTINATION lib)
```

`backends/openvino/README.md` (new file, +89)

````markdown
# OpenVINO Backend for ExecuTorch

The OpenVINO backend enables optimized execution of deep learning models on Intel hardware, leveraging Intel's [OpenVINO toolkit](https://www.intel.com/content/www/us/en/developer/tools/openvino-toolkit/overview.html) for inference acceleration.

## Supported Hardware

The OpenVINO backend supports the following hardware:

- Intel CPUs
- Intel integrated GPUs
- Intel discrete GPUs
- Intel NPUs

For more information on supported hardware, please refer to the [OpenVINO System Requirements](https://docs.openvino.ai/2025/about-openvino/release-notes-openvino/system-requirements.html) page.

## Directory Structure

```
executorch
├── backends
│   └── openvino
│       ├── runtime
│       │   ├── OpenvinoBackend.cpp
│       │   └── OpenvinoBackend.h
│       ├── scripts
│       │   └── openvino_build.sh
│       ├── tests
│       ├── CMakeLists.txt
│       ├── README.md
│       ├── __init__.py
│       ├── partitioner.py
│       ├── preprocess.py
│       └── requirements.txt
└── examples
    └── openvino
        ├── aot_optimize_and_infer.py
        └── README.md
```

## Build Instructions

### Prerequisites

Before you begin, ensure you have OpenVINO installed and configured on your system:

```bash
git clone https://github.com/openvinotoolkit/openvino.git
cd openvino && git checkout releases/2025/1
git submodule update --init --recursive
sudo ./install_build_dependencies.sh
mkdir build && cd build
cmake .. -DCMAKE_BUILD_TYPE=Release -DENABLE_PYTHON=ON
make -j$(nproc)

cd ..
cmake --install build --prefix <your_preferred_install_location>
cd <your_preferred_install_location>
source setupvars.sh
```

Note: The OpenVINO backend is not yet supported with the current OpenVINO release packages, so building from source is recommended. Instructions for using OpenVINO release packages will be added soon.

For more information about the OpenVINO build, refer to the [OpenVINO Build Instructions](https://github.com/openvinotoolkit/openvino/blob/master/docs/dev/build_linux.md).

### Setup

Follow the steps below to set up your build environment:

1. **Set up the ExecuTorch environment**: Refer to the [Environment Setup](https://pytorch.org/executorch/stable/getting-started-setup#environment-setup) guide for detailed instructions on setting up the ExecuTorch environment.

2. **Set up the OpenVINO backend environment**: Install the dependent libraries. Ensure that you are inside the `executorch/backends/openvino/` directory:

   ```bash
   pip install -r requirements.txt
   ```

   Note: To achieve optimal performance with NNCF quantization, you should install the latest development version of NNCF (version 2.16.0.dev0+191b53d9 or higher).

3. Navigate to the `scripts/` directory.

4. **Build the OpenVINO backend C++ libraries and executor runner**: Once the prerequisites are in place, run the `openvino_build.sh` script to start the build process. By default, the OpenVINO backend is built under `cmake-out/backends/openvino/` as `libopenvino_backend.a`:

   ```bash
   ./openvino_build.sh
   ```

   **Build the OpenVINO backend Python package with pybindings**: To build and install the OpenVINO backend Python package with Python bindings, run the `openvino_build.sh` script with the `--enable_python` argument. This compiles and installs the ExecuTorch Python package with the OpenVINO backend into your Python environment. It also enables the Python bindings required to run the OpenVINO backend tests and the `export_and_infer_openvino.py` script inside the `executorch/examples/openvino` folder:

   ```bash
   ./openvino_build.sh --enable_python
   ```

### Run

Please refer to [README.md](../../examples/openvino/README.md) for instructions on running examples of various models with the OpenVINO backend.
````

`backends/openvino/__init__.py` (new file, +5)

```python
from .partitioner import OpenvinoPartitioner
from .preprocess import OpenvinoBackend
from .quantizer.quantizer import OpenVINOQuantizer

__all__ = ["OpenvinoBackend", "OpenvinoPartitioner", "OpenVINOQuantizer"]
```
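The package's `__init__.py` re-exports the three public entry points and pins `__all__`, which controls what `from executorch.backends.openvino import *` exposes. The effect of `__all__` on star-imports can be shown with a self-contained stand-in module (the module below is synthetic; only the exported names mirror this file):

```python
import sys
import types

# Synthetic stand-in for a package like backends/openvino: two public names
# listed in __all__, plus one private helper that star-import must not expose.
demo = types.ModuleType("openvino_backend_demo")
demo.OpenvinoBackend = type("OpenvinoBackend", (), {})
demo.OpenvinoPartitioner = type("OpenvinoPartitioner", (), {})
demo._internal_helper = object()
demo.__all__ = ["OpenvinoBackend", "OpenvinoPartitioner"]
sys.modules["openvino_backend_demo"] = demo

# Star-import copies exactly the names listed in __all__.
ns = {}
exec("from openvino_backend_demo import *", ns)
print(sorted(n for n in ns if not n.startswith("__")))
# ['OpenvinoBackend', 'OpenvinoPartitioner']
```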
