
Commit 23bdde3

Update to Version 1.0.0
1 parent 646cb69 commit 23bdde3


44 files changed: +3658 −363 lines changed

README.md

Lines changed: 138 additions & 149 deletions
@@ -1,149 +1,138 @@
1-
# Extension package of TVM Deep Learning Compiler for Renesas DRP-AI accelerators powered by EdgeCortix MERA™
2-
3-
[TVM Documentation](https://tvm.apache.org/docs) |
4-
[TVM Community](https://tvm.apache.org/community) |
5-
[TVM github](https://github.com/apache/tvm) |
6-
7-
8-
DRP-AI TVM[^1] is a Machine Learning Compiler plugin for [Apache TVM](https://github.com/apache/tvm/) provided by Renesas Electronics Corporation.
9-
10-
## License
11-
(C) Copyright EdgeCortix, Inc. 2022
12-
(C) Copyright Renesas Electronics Corporation 2022
13-
Contributors licensed under an Apache-2.0 license.
14-
15-
## Supported Embedded Platforms
16-
- Renesas RZ/V2M Evaluation Board Kit
17-
18-
## Introduction
19-
### Overview
20-
This compiler stack is an extension of the DRP-AI Translator to the TVM backend. The CPU and DRP-AI can work together on the inference processing of AI models.
21-
22-
<img src=./img/tool_stack.png width=350>
23-
24-
25-
### File Configuration
26-
| Directory | Details |
27-
|:---|:---|
28-
|tutorials |Sample compile script|
29-
|apps |Sample inference application on the target board|
30-
|setup | Setup scripts for building a TVM environment |
31-
|obj |Pre-built runtime binaries|
32-
|tvm | TVM repository from GitHub |
33-
|3rd party | 3rd party tools |
34-
|how-to |Samples for solving specific problems, e.g., how to run validation between x86 and DRP-AI|
35-
36-
37-
## Installation
38-
### Requirements
39-
Required software is listed below.
40-
41-
- Ubuntu 18.04
42-
- Python 3.6
43-
- git
44-
- [DRP-AI Translator v1.60](#drp-ai-translator)
45-
- [RZ/V2M Linux Package v1.2.0](#rzv-software)
46-
- [RZ/V2M DRP-AI Support Package v6.00](#rzv-software)
47-
48-
##### DRP-AI Translator
49-
Download the DRP-AI Translator v1.60 from the Software section in [RZ/V2M Software](https://www.renesas.com/rzv2m) and install it by following the *User's Manual*.
50-
51-
##### RZ/V Software
52-
Download the *RZ/V2M Linux Package* and *DRP-AI Support Package* from the Software section in [RZ/V2M Software](https://www.renesas.com/rzv2m) and **build image/SDK** according to the *DRP-AI Support Package Release Note* *1.
53-
54-
*1 The OpenCV library is required to run the application example provided in this repository ([Application Example](./apps)).
55-
To install OpenCV, please see [How to install OpenCV](./apps#how-to-install-opencv-to-linux-package) on the Application Example page.
56-
57-
### Installing DRP-AI TVM[^1]
58-
Before installing DRP-AI TVM[^1], please install the software listed in [Requirements](#requirements) and build the image/SDK with the RZ/V2M Linux Package and DRP-AI Support Package.
59-
60-
#### 1. Clone the repository.
61-
```sh
62-
git clone --recursive -b v0.1 <git url.> drp-ai_tvm
63-
```
64-
65-
#### 2. Set environment variables.
66-
Run the following commands to set environment variables.
67-
Note that these environment variables must be set every time a new terminal is opened.
68-
```sh
69-
export TVM_HOME=<.../drp-ai_tvm>/tvm # Your own path to the cloned repository.
70-
export PYTHONPATH=$TVM_HOME/python:${PYTHONPATH}
71-
export SDK=</opt/poky/2.4.3> # Your own RZ/V2M Linux SDK path.
72-
export TRANSLATOR=<.../drp-ai_translator/> # Your own DRP-AI Translator path.
73-
```
74-
#### 3. Install the minimal pre-requisites.
75-
```sh
76-
# Install packages
77-
apt update
78-
DEBIAN_FRONTEND=noninteractive apt install -y software-properties-common
79-
add-apt-repository ppa:ubuntu-toolchain-r/test
80-
apt update
81-
DEBIAN_FRONTEND=noninteractive apt install -y build-essential cmake \
82-
libomp-dev libgtest-dev libgoogle-glog-dev libtinfo-dev zlib1g-dev libedit-dev \
83-
libxml2-dev llvm-8-dev g++-9 gcc-9 wget
84-
85-
apt-get install -y python3-pip
86-
pip3 install --upgrade pip
87-
apt-get -y install unzip vim
88-
pip3 install decorator attrs scipy numpy pytest onnx==1.9.0
89-
pip3 install torch==1.8.0 torchvision==0.9.0
90-
```
91-
If a sufficiently recent gcc/g++ compiler is already the system default, the next step is not necessary; a default version of **9 or higher** is ready to use. Otherwise, make gcc/g++-9 the default compiler by running:
92-
```sh
93-
# Update gcc to 9.4
94-
update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-7 10
95-
update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-9 20
96-
update-alternatives --install /usr/bin/g++ g++ /usr/bin/g++-7 10
97-
update-alternatives --install /usr/bin/g++ g++ /usr/bin/g++-9 20
98-
update-alternatives --install /usr/bin/cc cc /usr/bin/gcc 30
99-
update-alternatives --set cc /usr/bin/gcc
100-
update-alternatives --install /usr/bin/c++ c++ /usr/bin/g++ 30
101-
update-alternatives --set c++ /usr/bin/g++
102-
update-alternatives --set gcc "/usr/bin/gcc-9"
103-
update-alternatives --set g++ "/usr/bin/g++-9"
104-
```
105-
Install the ONNX Runtime library from the precompiled release package.
106-
```sh
107-
# Install onnx runtime
108-
wget https://github.com/microsoft/onnxruntime/releases/download/v1.8.1/onnxruntime-linux-x64-1.8.1.tgz -O /tmp/onnxruntime.tar.gz
109-
tar -xvzf /tmp/onnxruntime.tar.gz -C /tmp/
110-
mv /tmp/onnxruntime-linux-x64-1.8.1/ /opt/
111-
```
112-
Set up the DRP-AI TVM[^1] environment.
113-
```sh
114-
cd <.../drp-ai_tvm>
115-
bash setup/make_drp_env.sh
116-
```
117-
118-
----
119-
120-
## Deploy AI models on DRP-AI
121-
![drawing](./img/deploy_flow.png)
122-
123-
To deploy the AI model to DRP-AI on the target board, you need to compile the model with DRP-AI TVM[^1] to generate Runtime Model Data (Compile).
124-
The SDK generated from the RZ/V Linux Package and DRP-AI Support Package is required to compile the model.
125-
126-
After compiling the model, you need to copy the file to the target board (Deploy).
127-
You also need to copy the C++ inference application and DRP-AI TVM[^1] Runtime Library to run the AI model inference.
128-
Moreover, since DRP-AI TVM[^1] does not support the pre/post-processing of AI inference, the OpenCV library is essential.
129-
130-
The following pages show an example of compiling the ResNet18 model and running it on the target board.
131-
132-
### Compile model with DRP-AI TVM[^1]
133-
Please see [Tutorial](./tutorials).
134-
135-
### Run inference on board
136-
Please see [Application Example](./apps) page.
137-
138-
### How-to
139-
The pages above only show the example for ResNet.
140-
To find more examples, please see [How-to](./how-to) page.
141-
It includes samples that solve specific problems, e.g.:
142-
- how to run an application with a camera;
143-
- validation between x86 and DRP-AI, etc.
144-
145-
----
146-
For any enquiries, please contact Renesas.
147-
148-
[^1]: DRP-AI TVM is powered by EdgeCortix MERA™ Compiler Framework.
149-
1+
# Extension package of TVM Deep Learning Compiler for Renesas DRP-AI accelerators powered by EdgeCortix MERA&trade;
2+
3+
[TVM Documentation](https://tvm.apache.org/docs) |
4+
[TVM Community](https://tvm.apache.org/community) |
5+
[TVM github](https://github.com/apache/tvm) |
6+
7+
8+
DRP-AI TVM[^1] is a Machine Learning Compiler plugin for [Apache TVM](https://github.com/apache/tvm/) provided by Renesas Electronics Corporation.
9+
10+
## License
11+
(C) Copyright EdgeCortix, Inc. 2022
12+
(C) Copyright Renesas Electronics Corporation 2022
13+
Contributors licensed under an Apache-2.0 license.
14+
15+
## Supported Embedded Platforms
16+
- Renesas RZ/V2MA Evaluation Board Kit
17+
18+
## Introduction
19+
### Overview
20+
This compiler stack is an extension of the DRP-AI Translator to the TVM backend. The CPU and DRP-AI can work together on the inference processing of AI models.
21+
22+
<img src=./img/tool_stack.png width=350>
23+
24+
25+
### File Configuration
26+
| Directory | Details |
27+
|:---|:---|
28+
|tutorials |Sample compile script|
29+
|apps |Sample inference application on the target board|
30+
|setup | Setup scripts for building a TVM environment |
31+
|obj |Pre-built runtime binaries|
32+
|tvm | TVM repository from GitHub |
33+
|3rd party | 3rd party tools |
34+
|how-to |Samples for solving specific problems, e.g., how to run validation between x86 and DRP-AI|
35+
36+
37+
## Installation
38+
### Requirements
39+
Requirements are listed below.
40+
- OS : Ubuntu 20.04
41+
- Python : 3.8
42+
- Package : git
43+
- Evaluation Board: RZ/V2MA EVK
44+
- Related Software Version:
45+
- [DRP-AI Translator v1.80](#drp-ai-translator)
46+
- [RZ/V2MA Linux Package v1.0.0](#rzv-software)
47+
- [RZ/V2MA DRP-AI Support Package v7.20](#rzv-software)
48+
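A quick way to confirm the host environment matches the versions above (purely an optional check, assuming a standard Ubuntu host):

```sh
# Check host OS release, Python version, and git availability
lsb_release -d
python3 --version
git --version
```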
49+
##### DRP-AI Translator
50+
Download the DRP-AI Translator v1.80 from the Software section in [DRP-AI](https://www.renesas.com/application/key-technology/artificial-intelligence/ai-accelerator-drp-ai#software) and install it by following the *User's Manual*.
51+
52+
##### RZ/V Software
53+
Download the *RZ/V2MA Linux Package* and *DRP-AI Support Package* from [Renesas Web Page](https://www.renesas.com/application/key-technology/artificial-intelligence/ai-accelerator-drp-ai) and **build image/SDK** according to the *DRP-AI Support Package Release Note* *1.
54+
55+
### Installing DRP-AI TVM[^1]
56+
Before installing DRP-AI TVM[^1], please install the software listed in [Requirements](#requirements) and build the image/SDK with the RZ/V2MA Linux Package and DRP-AI Support Package.
57+
58+
#### 1. Clone the repository.
59+
```sh
60+
git clone --recursive https://github.com/renesas-rz/rzv_drp-ai_tvm.git drp-ai_tvm
61+
```
62+
63+
#### 2. Set environment variables.
64+
Run the following commands to set environment variables.
65+
Note that these environment variables must be set every time a new terminal is opened.
66+
```sh
67+
export TVM_HOME=<.../drp-ai_tvm>/tvm # Your own path to the cloned repository.
68+
export PYTHONPATH=$TVM_HOME/python:${PYTHONPATH}
69+
export SDK=</opt/poky/3.1.14> # Your own RZ/V2MA Linux SDK path.
70+
export TRANSLATOR=<.../drp-ai_translator/> # Your own DRP-AI Translator path.
71+
```
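If you prefer not to re-export these variables in every new terminal, one option (assuming a bash shell; the paths below are placeholders for your own) is to append the same exports to `~/.bashrc`:

```sh
# Optional: persist the variables for future shells (replace the paths with your own)
cat <<'EOF' >> ~/.bashrc
export TVM_HOME=/path/to/drp-ai_tvm/tvm
export PYTHONPATH=$TVM_HOME/python:${PYTHONPATH}
export SDK=/opt/poky/3.1.14
export TRANSLATOR=/path/to/drp-ai_translator/
EOF
```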
72+
#### 3. Install the minimal pre-requisites.
73+
```sh
74+
# Install packages
75+
apt update
76+
DEBIAN_FRONTEND=noninteractive apt install -y software-properties-common
77+
add-apt-repository ppa:ubuntu-toolchain-r/test
78+
apt update
79+
DEBIAN_FRONTEND=noninteractive apt install -y build-essential cmake \
80+
libomp-dev libgtest-dev libgoogle-glog-dev libtinfo-dev zlib1g-dev libedit-dev \
81+
libxml2-dev llvm-8-dev g++-9 gcc-9 wget
82+
83+
apt-get install -y python3-pip
84+
pip3 install --upgrade pip
85+
apt-get -y install unzip vim
86+
pip3 install decorator attrs scipy numpy pytest onnx==1.9.0
87+
pip3 install torch==1.8.0 torchvision==0.9.0
88+
```
89+
90+
Install the ONNX Runtime library from the precompiled release package.
91+
```sh
92+
# Install onnx runtime
93+
wget https://github.com/microsoft/onnxruntime/releases/download/v1.8.1/onnxruntime-linux-x64-1.8.1.tgz -O /tmp/onnxruntime.tar.gz
94+
tar -xvzf /tmp/onnxruntime.tar.gz -C /tmp/
95+
mv /tmp/onnxruntime-linux-x64-1.8.1/ /opt/
96+
```
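To confirm the package was extracted where later steps expect it, a quick check (the path is simply where the commands above moved it):

```sh
# Verify the ONNX Runtime headers and shared library are in place
ls /opt/onnxruntime-linux-x64-1.8.1/include
ls /opt/onnxruntime-linux-x64-1.8.1/lib/libonnxruntime.so*
```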
97+
Set up the DRP-AI TVM[^1] environment.
98+
```sh
99+
cd <.../drp-ai_tvm>
100+
bash setup/make_drp_env.sh
101+
```
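As a rough sanity check that the setup completed, you can look for the runtime library that the sample application links against (apps/CMakeLists.txt references `build_runtime/libtvm_runtime.so`; this assumes the setup script builds it under `$TVM_HOME`):

```sh
# Hedged check: the TVM runtime library used by the sample apps should exist after setup
ls ${TVM_HOME}/build_runtime/libtvm_runtime.so
```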
102+
103+
----
104+
105+
## Deploy AI models on DRP-AI
106+
![drawing](./img/deploy_flow.png)
107+
108+
To deploy the AI model to DRP-AI on the target board, you need to compile the model with DRP-AI TVM[^1] to generate Runtime Model Data (Compile).
109+
The SDK generated from the RZ/V Linux Package and DRP-AI Support Package is required to compile the model.
110+
111+
After compiling the model, you need to copy the file to the target board (Deploy).
112+
You also need to copy the C++ inference application and DRP-AI TVM[^1] Runtime Library to run the AI model inference.
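As an illustration only (the user name, board address, output directory, and destination path below are placeholders, not names defined by this repository), the Deploy step typically amounts to copying the compiled model data and the application binary to the board over the network:

```sh
# Hypothetical example: copy the compiled model output and the inference application to the board
scp -r ./<compiled_model_dir> root@<board-ip>:/home/root/
scp ./tutorial_app root@<board-ip>:/home/root/
```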
113+
114+
The following pages show an example of compiling the ResNet18 model and running it on the target board.
115+
116+
### Compile model with DRP-AI TVM[^1]
117+
Please see [Tutorial](./tutorials).
118+
119+
### Run inference on board
120+
Please see [Application Example](./apps) page.
121+
122+
### How-to
123+
The pages above only show the example for ResNet.
124+
To find more examples, please see [How-to](./how-to) page.
125+
It includes samples that solve specific problems, e.g.:
126+
- how to run an application with a camera;
127+
- validation between x86 and DRP-AI, etc.
128+
129+
### Error List
130+
If an error occurs during compilation or at runtime, please refer to the [error list](./docs/Error_List.md).
131+
132+
## Support
133+
If you have any questions, please contact [Renesas Technical Support](https://www.renesas.com/support).
134+
135+
----
136+
For any enquiries, please contact Renesas.
137+
138+
[^1]: DRP-AI TVM is powered by EdgeCortix MERA™ Compiler Framework.

apps/CMakeLists.txt

Lines changed: 1 addition & 7 deletions
@@ -8,14 +8,8 @@ include_directories(${TVM_ROOT}/3rdparty/dmlc-core/include)
 include_directories(${TVM_ROOT}/3rdparty/compiler-rt)

 set(TVM_RUNTIME_LIB ${TVM_ROOT}/build_runtime/libtvm_runtime.so)
-set(SRC tutorial_app.cpp MeraDrpRuntimeWrapper.cpp)
+set(SRC tutorial_app.cpp MeraDrpRuntimeWrapper.cpp PreRuntime.cpp)
 set(EXE_NAME tutorial_app)

 add_executable(${EXE_NAME} ${SRC})
 target_link_libraries(${EXE_NAME} ${TVM_RUNTIME_LIB})
-
-find_package(OpenCV REQUIRED)
-if(OpenCV_FOUND)
-  target_include_directories(${EXE_NAME} PUBLIC ${OpenCV_INCLUDE_DIRS})
-  target_link_libraries(${EXE_NAME} ${OpenCV_LIBS})
-endif()
