Updated to version 2.2.0.
hiroyuki-sakamoto committed Feb 29, 2024
1 parent 2c8a6ca · commit 5c9e0de
Showing 170 changed files with 33,554 additions and 1,771 deletions.
Dockerfile (6 changes: 3 additions & 3 deletions)
@@ -15,12 +15,12 @@ RUN DEBIAN_FRONTEND=noninteractive apt-get install -y python3-pip
 RUN locale-gen en_US.UTF-8
 RUN pip3 install --upgrade pip
 RUN pip3 install decorator attrs scipy numpy==1.23.5 pytest
-RUN pip3 install torch==1.8.0 torchvision==0.9.0 tensorflow tflite psutil
+RUN pip3 install torch==1.8.0 torchvision==0.9.0 tensorflow tflite psutil typing-extensions==4.5.0

 # Install onnxruntime
-RUN wget https://github.com/microsoft/onnxruntime/releases/download/v1.8.1/onnxruntime-linux-x64-1.8.1.tgz -O /tmp/onnxruntime.tar.gz \
+RUN wget https://github.com/microsoft/onnxruntime/releases/download/v1.16.1/onnxruntime-linux-x64-1.16.1.tgz -O /tmp/onnxruntime.tar.gz \
  && tar -xvzf /tmp/onnxruntime.tar.gz -C /tmp/ \
- && mv /tmp/onnxruntime-linux-x64-1.8.1/ /opt/
+ && mv /tmp/onnxruntime-linux-x64-1.16.1/ /opt/

 # Install SDK
 COPY ./poky*.sh /opt
DockerfileV2H (10 changes: 5 additions & 5 deletions)
@@ -35,14 +35,14 @@ RUN cd /opt && yes "" | ./poky*.sh
 RUN rm /opt/poky*.sh

 # Install DRP-AI Translator
-COPY ./DRP-AI_Translator_i8-v*-Linux-x86_64-Install /opt
-RUN chmod a+x /opt/DRP-AI_Translator_i8-v*-Linux-x86_64-Install
-RUN cd /opt && yes | ./DRP-AI_Translator_i8-v*-Linux-x86_64-Install
-RUN rm /opt/DRP-AI_Translator_i8-v*-Linux-x86_64-Install
+COPY ./DRP-AI_Translator_i8-*-Linux-x86_64-Install /opt
+RUN chmod a+x /opt/DRP-AI_Translator_i8-*-Linux-x86_64-Install
+RUN cd /opt && yes | ./DRP-AI_Translator_i8-*-Linux-x86_64-Install
+RUN rm /opt/DRP-AI_Translator_i8-*-Linux-x86_64-Install

 # Clone repository
 ENV TVM_ROOT="/drp-ai_tvm"
-RUN git clone --recursive -b v2.1.0 https://github.com/renesas-rz/rzv_drp-ai_tvm.git ${TVM_ROOT}
+RUN git clone --recursive https://github.com/renesas-rz/rzv_drp-ai_tvm.git ${TVM_ROOT}

 # Set environment variables
 ENV TVM_ROOT=${TVM_ROOT}
README.md (37 changes: 25 additions & 12 deletions)
@@ -15,15 +15,16 @@ Contributors Licensed under an Apache-2.0 license.
 - Renesas RZ/V2L Evaluation Board Kit ([How to get](https://www.renesas.com/document/gde/rzv2l-contents-guide))
 - Renesas RZ/V2M Evaluation Board Kit ([How to get](https://www.renesas.com/document/gde/rzv2m-contents-guide))
 - Renesas RZ/V2MA Evaluation Board Kit ([How to get](https://www.renesas.com/document/gde/rzv2ma-contents-guide))
+- Renesas RZ/V2H Evaluation Board Kit

 ## Introduction
 ### Overview
 This compiler stack is an extension of the DRP-AI Translator to the TVM backend. CPU and DRP-AI can work together for the inference processing of the AI models.

-<img src=./img/tool_stack.png width=350>
+<img src=./img/tool_stack.png width=350>

 ### File Configuration

 | Directory | Details |
 |:---|:---|
 |tutorials |Sample compile script|
@@ -32,15 +33,15 @@ This compiler stack is an extension of the DRP-AI Translator to the TVM backend.
 |obj |Pre-build runtime binaries|
 |docs |Documents, i.e., Model list and API list|
 |img |Image files used in this document|
-|tvm | TVM repository from github |
-|3rdparty | 3rd party tools |
+|tvm | TVM repository from GitHub |
+|3rdparty | 3rd party tools |
 |how-to |Sample to solve specific problems, i.e., How to run validation between x86 and DRP-AI|
+|pruning | Sample scripts to prune the models with DRP-AI Extension Package |


 ## Installation
-- [Installing DRP-AI TVM](./setup/README.md#installing-drp-ai-tvm1)
-- [Installing DRP-AI TVM for RZ/V2H](./setup/SetupV2H.md#hogehoge)
-- [Installing DRP-AI TVM with Docker](./setup/README.md#installing-drp-ai-tvm1-with-docker)
+- [Installing DRP-AI TVM (RZ/V2L, RZ/V2M, RZ/V2MA)](./setup/README.md)
+- [Installing DRP-AI TVM (RZ/V2H)](./setup/SetupV2H.md)

 ## Deploy AI models on DRP-AI
 ### Video
@@ -56,19 +57,31 @@ SDK generated from RZ/V Linux Package and DRP-AI Support Package is required to
 After compiled the model, you need to copy the file to the target board (Deploy).
 You also need to copy the C++ inference application and DRP-AI TVM[^1] Runtime Library to run the AI model inference.

-<img src=./img/deploy_flow.png width=500>
+| RZ/V2L, RZ/V2M, RZ/V2MA |
+|:---|
+|<img src=./img/deploy_flow.png width=500>|
+
+| RZ/V2H |
+|:---|
+|<img src=./img/deploy_flow_V2H.png width=500>|


 Following pages show the example to compile the ResNet18 model and run it on the target board.

+### [Option] Prune the model with DRP-AI Extension Package
+- [What is Pruning and Quantization?](./pruning/README.md)
+- [Installing DRP-AI Extention Package](./pruning/setup/README.md)
+- [Pruning and retraining by DRP-AI Extension Package](./pruning/how-to/torchvision_resnet50/README.md)
+
 ### Compile model with DRP-AI TVM[^1]

-- [Compiling model for DRP-AI TVM](./tutorials)
-- [Compiling model for DRP-AI TVM(RZ/V2H)](./tutorials/tutorial_RZV2H.md)
+- [Compiling model for DRP-AI TVM (RZ/V2L, RZ/V2M, RZ/V2MA)](./tutorials)
+- [Compiling model for DRP-AI TVM (RZ/V2H)](./tutorials/tutorial_RZV2H.md)

 ### Run inference on board

-- [Application Example for DRP-AI TVM](./apps)
-- [Application Example for DRP-AI TVM(RZ/V2H)](./apps/build_appV2H.md)
+- [Application Example for DRP-AI TVM (RZ/V2L, RZ/V2M, RZ/V2MA)](./apps)
+- [Application Example for DRP-AI TVM (RZ/V2H)](./apps/build_appV2H.md)

 ## Sample Application
 To find more AI examples, please see [How-to](./how-to) page.
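The README section above describes compiling a model and then running it on the target board with the C++ application. As a rough, hedged illustration of that board-side flow, the sketch below uses the MeraDrpRuntimeWrapper class whose diff appears later in this commit. Only SetInput(), Run(), and GetNumOutput() are visible in this diff; the LoadModel() call, the model directory name, and the input shape are assumptions made for the example.

```cpp
// Minimal board-side inference sketch (illustrative only).
// Assumptions not confirmed by this diff: a LoadModel(model_dir) method,
// the "resnet18_onnx" deploy directory, and a 1x3x224x224 float32 input.
#include <iostream>
#include <string>
#include <vector>

#include "MeraDrpRuntimeWrapper.h"

int main() {
    MeraDrpRuntimeWrapper runtime;

    const std::string model_dir = "resnet18_onnx";   // hypothetical output of the compile step
    runtime.LoadModel(model_dir);                     // assumed API for loading the compiled model

    // Pre-processed input tensor; real applications fill this from an image.
    std::vector<float> input(1 * 3 * 224 * 224, 0.0f);
    runtime.SetInput(0, input.data());                // declared in MeraDrpRuntimeWrapper.h

    runtime.Run();                                    // inference runs on CPU + DRP-AI

    std::cout << "number of outputs: " << runtime.GetNumOutput() << std::endl;
    return 0;
}
```

The application examples linked above (./apps) show the full, supported version of this flow, including pre/post-processing and output retrieval.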
apps/CMakeLists.txt (9 changes: 3 additions & 6 deletions)
@@ -7,16 +7,13 @@ include_directories(${TVM_ROOT}/include)
 include_directories(${TVM_ROOT}/3rdparty/dlpack/include)
 include_directories(${TVM_ROOT}/3rdparty/dmlc-core/include)
 include_directories(${TVM_ROOT}/3rdparty/compiler-rt)
-include_directories($ENV{SDK}/sysroots/aarch64-poky-linux/usr/include/opencv4)
+#include_directories($ENV{SDK}/sysroots/aarch64-poky-linux/usr/include/opencv4)

 set(TVM_RUNTIME_LIB ${TVM_ROOT}/build_runtime/libtvm_runtime.so)
-if(OCV)
-set(SRC tutorial_app.cpp MeraDrpRuntimeWrapper.cpp PreRuntimeOcv.cpp)
-find_package(OpenCV REQUIRED)
-elseif(V2H)
+if(V2H)
 set(SRC tutorial_app.cpp MeraDrpRuntimeWrapper.cpp PreRuntimeV2H.cpp)
 else()
-set(SRC tutorial_app.cpp MeraDrpRuntimeWrapper.cpp PreRuntime.cpp)
+set(SRC tutorial_app_v2ml.cpp MeraDrpRuntimeWrapper.cpp PreRuntime.cpp)
 endif()


apps/MeraDrpRuntimeWrapper.cpp (20 changes: 20 additions & 0 deletions)
@@ -136,6 +136,10 @@ void MeraDrpRuntimeWrapper::Run() {
   mod.GetFunction("run")();
 }

+void MeraDrpRuntimeWrapper::Run(int freq_index) {
+  mod.GetFunction("run")(freq_index);
+}
+
 void MeraDrpRuntimeWrapper::ProfileRun(const std::string& profile_table, const std::string& profile_csv) {
   tvm::runtime::PackedFunc profile = mod.GetFunction("profile");
   tvm::runtime::Array<tvm::runtime::profiling::MetricCollector> collectors;
@@ -152,6 +156,22 @@ void MeraDrpRuntimeWrapper::ProfileRun(const std::string& profile_table, const s
   ofs_csv.close();
 }

+void MeraDrpRuntimeWrapper::ProfileRun(const std::string& profile_table, const std::string& profile_csv, int freq_index) {
+  tvm::runtime::PackedFunc profile = mod.GetFunction("profile");
+  tvm::runtime::Array<tvm::runtime::profiling::MetricCollector> collectors;
+  tvm::runtime::profiling::Report report = profile(collectors, freq_index);
+
+  std::string rep_table = report->AsTable();
+  std::ofstream ofs_table (profile_table, std::ofstream::out);
+  ofs_table << rep_table << std::endl;
+  ofs_table.close();
+
+  std::string rep_csv = report->AsCSV();
+  std::ofstream ofs_csv (profile_csv, std::ofstream::out);
+  ofs_csv << rep_csv << std::endl;
+  ofs_csv.close();
+}
+
 int MeraDrpRuntimeWrapper::GetNumInput(std::string model_dir) {
   // TVM does not support api to get number input of model.
   // This function calculate input number base on convention
apps/MeraDrpRuntimeWrapper.h (2 changes: 2 additions & 0 deletions)
@@ -38,7 +38,9 @@ class MeraDrpRuntimeWrapper {
   template <typename T>
   void SetInput(int input_index, const T* data_ptr);
   void Run();
+  void Run(int freq_index);
   void ProfileRun(const std::string& profile_table, const std::string& profile_csv);
+  void ProfileRun(const std::string& profile_table, const std::string& profile_csv, int freq_index);
   int GetNumInput(std::string model_dir);
   InOutDataType GetInputDataType(int index);
   int GetNumOutput();
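The header diff above declares two new overloads that accept a frequency index. A short sketch of how a caller might use them follows; the freq_index value and the report file names are placeholders, and the surrounding setup (model loading, input binding) is assumed to have happened already.

```cpp
// Sketch: exercising the freq_index overloads added in this commit.
// Only Run(int) and ProfileRun(const std::string&, const std::string&, int)
// are confirmed by the header diff; everything else is assumed context.
#include "MeraDrpRuntimeWrapper.h"

void run_with_freq(MeraDrpRuntimeWrapper& runtime) {
    const int freq_index = 2;  // hypothetical DRP-AI frequency setting

    // Plain inference; the index is forwarded to the TVM "run" PackedFunc.
    runtime.Run(freq_index);

    // Profiled inference: writes a human-readable table and a CSV report.
    runtime.ProfileRun("profile_table.txt", "profile.csv", freq_index);
}
```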
(Diffs for the remaining changed files are not shown here.)
