Update to Version 1.0.3. Add AI Sample Applications (Hand Landmark Localization, Face Expression Recognition, Classification) in how-to.
1 parent c8c6a04 · commit c500ac6 · 104 changed files with 3,519 additions and 4,889 deletions.
Removed Git LFS attribute entries (`@@ -1,3 +0,0 @@`):
how-to/sample_app/exe/hrnet_onnx/deploy.so filter=lfs diff=lfs merge=lfs -text
how-to/sample_app/exe/yolov2_onnx/deploy.so filter=lfs diff=lfs merge=lfs -text
how-to/sample_app/exe/yolov3_onnx/deploy.so filter=lfs diff=lfs merge=lfs -text
# Classification

### Model: [GoogleNet](#model-information)
Sample application code and its execution environment are provided **[here](../../../../sample_app)**.

## Overview
This page explains Classification in the [sample application](../../../../sample_app) for DRP-AI TVM[^1].

<img src=./img/googlenet.jpg width=500>
## Model Information
- GoogleNet: [ONNX Model Zoo](https://github.com/onnx/models/tree/main/vision/classification/inception_and_googlenet/googlenet) googlenet-9.onnx
  Dataset: [ILSVRC2014](https://image-net.org/challenges/LSVRC/2014/)
  Input size: 1x3x224x224
  Output size: 1x1000
### How to compile the model
To run the Classification, the `googlenet_onnx` Model Object is required.
Follow the instructions below to prepare the Model Object.

1. Set the environment variables, i.e. `$TVM_HOME` etc., according to [Installation](../../../../../setup/).
2. Download the onnx file from [ONNX Model Zoo](https://github.com/onnx/models/tree/main/vision/classification/inception_and_googlenet/googlenet).
3. Place the onnx file in `$TVM_HOME/../tutorials`.
4. Change the `addr_map_start` setting in `compile_onnx_model.py` provided in [Compile Tutorial](../../../../../tutorials) to `0x438E0000`.
5. Run the script with the command below.
```sh
$ python3 compile_onnx_model.py \
-i data_0 \
-s 1,3,224,224 \
-o googlenet_onnx \
googlenet-9.onnx
```
6. Confirm that the `googlenet_onnx` directory is generated and that it contains the `deploy.json`, `deploy.so` and `deploy.params` files.
7. Before running the application, make sure to copy the `googlenet_onnx` directory into the execution environment directory `exe` where the compiled sample application `sample_app_drpai_tvm_usbcam_http` is located.
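The completeness check in step 6 can also be scripted; a minimal sketch in Python (the `missing_model_files` helper is hypothetical, not part of the sample application):

```python
from pathlib import Path

def missing_model_files(model_dir):
    """Return the deploy files missing from a compiled Model Object directory.

    An empty list means the Model Object is complete.
    """
    required = {"deploy.json", "deploy.so", "deploy.params"}
    root = Path(model_dir)
    present = {p.name for p in root.iterdir()} if root.is_dir() else set()
    return sorted(required - present)
```

For example, `missing_model_files("googlenet_onnx")` returns `[]` when compilation produced all three files.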

## Processing Details
### DRP-AI mode
- Source Code: [tvm_drpai_googlenet.cpp](../../../src/recognize/googlenet/tvm_drpai_googlenet.cpp)

The following are the processing details when "GoogleNet (DRP-AI)" is selected.

#### Pre-processing
Pre-processing is done by the DRP-AI Pre-processing Runtime and the CPU.

| Function | Details |
|:---|:---|
|conv_yuv2rgb |Convert YUY2 to RGB. Processed by DRP-AI Pre-processing Runtime.|
|resize |Resize to 224x224. Processed by DRP-AI Pre-processing Runtime.|
|cast_to_fp16 |Cast data to FP16 for DRP-AI. Processed by DRP-AI Pre-processing Runtime.|
|normalize |Normalize pixel values with mean values of {123.68, 116.779, 103.939}.</br>Processed by DRP-AI Pre-processing Runtime.|
|transpose |Transpose HWC to CHW order. Processed by DRP-AI Pre-processing Runtime.|
|cast_fp16_fp32 |Cast FP16 data to FP32 for DRP-AI TVM[^1] input.</br>Processed by DRP-AI Pre-processing Runtime.|
|rgb2bgr |Convert RGB to BGR. Processed by CPU.|
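The chain in the table above can be approximated on the CPU for reference. A minimal NumPy sketch, assuming the YUY2-to-RGB conversion has already happened and using a nearest-neighbour resize as an illustrative stand-in (this is not the DRP-AI Pre-processing Runtime implementation):

```python
import numpy as np

def preprocess(rgb_frame, size=224, mean=(123.68, 116.779, 103.939)):
    """Approximate the pre-processing chain: resize, cast, normalize,
    transpose, cast back, and RGB->BGR swap.

    rgb_frame: HxWx3 uint8 RGB image.
    Returns a 1x3x224x224 float32 tensor in BGR channel order.
    """
    h, w, _ = rgb_frame.shape
    # resize: nearest-neighbour stand-in for the runtime's resize
    ys = np.arange(size) * h // size
    xs = np.arange(size) * w // size
    img = rgb_frame[ys][:, xs].astype(np.float16)    # cast_to_fp16
    img = img - np.asarray(mean, dtype=np.float16)   # normalize (mean subtraction)
    img = img.transpose(2, 0, 1)                     # transpose HWC -> CHW
    img = img.astype(np.float32)                     # cast_fp16_fp32
    img = img[::-1]                                  # rgb2bgr (reverse channel axis)
    return img[np.newaxis, ...]                      # add batch dim
```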

#### Inference
The Model Object `googlenet_onnx` is generated from the ONNX Model Zoo GoogleNet pre-trained model as described in [Model Information](#model-information).

#### Post-processing
Post-processing is processed by the CPU.
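For a 1x1000 classification output, post-processing typically amounts to a softmax followed by a top-k lookup; a minimal sketch (the `postprocess` helper is illustrative, not the sample application's actual code):

```python
import numpy as np

def postprocess(logits, top_k=5):
    """Softmax over the 1x1000 output, then return the top-k
    (class_index, probability) pairs, best first."""
    scores = logits.reshape(-1)
    e = np.exp(scores - scores.max())   # numerically stable softmax
    probs = e / e.sum()
    top = np.argsort(probs)[::-1][:top_k]
    return [(int(i), float(probs[i])) for i in top]
```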

---
[^1]: DRP-AI TVM is powered by EdgeCortix MERA™ Compiler Framework.
how-to/sample_app/docs/emotion_recognition/emotion_ferplus/README.md (85 additions, 0 deletions)
# Emotion Recognition

### Model: [Emotion FERPlus](#model-information)
Sample application code and its execution environment are provided **[here](../../../../sample_app)**.

## Overview
This page explains Emotion Recognition in the [sample application](../../../../sample_app) for DRP-AI TVM[^1].

<img src=./img/emotionfp.jpg width=500>

## Model Information
- Emotion FERPlus: [ONNX Model Zoo](https://github.com/onnx/models/tree/main/vision/body_analysis/emotion_ferplus) emotion-ferplus-8.onnx
  Dataset: See [ONNX Model Zoo](https://github.com/onnx/models/tree/main/vision/body_analysis/emotion_ferplus#dataset).
  Input size: 1x1x64x64
  Output size: 1x8

Emotion FERPlus can only classify the facial expression of a single person.
To enable emotion recognition for multiple faces, this application uses [UltraFace](../../../docs/face_detection/ultraface/) as a pre-processing step.
For more details on UltraFace, please see [Face Detection](../../../docs/face_detection/ultraface/).

### How to compile the model
To run the Emotion Recognition, the `emotion_fp_onnx` Model Object and the `ultraface_onnx` Model Object are required.
Follow the instructions below to prepare the `emotion_fp_onnx` Model Object.
For the `ultraface_onnx` Model Object, please refer to [Face Detection](../../../docs/face_detection/ultraface/).

1. Set the environment variables, i.e. `$TVM_HOME` etc., according to [Installation](../../../../../setup/).
2. Download the onnx file from [ONNX Model Zoo](https://github.com/onnx/models/tree/main/vision/body_analysis/emotion_ferplus).
3. Place the onnx file in `$TVM_HOME/../tutorials`.
4. Change the `addr_map_start` setting in `compile_onnx_model.py` provided in [Compile Tutorial](../../../../../tutorials) to `0x442d0000`.
Note that the value **must NOT** be the default value `0x438E0000`, in order to avoid a conflict with the UltraFace Model Object.
5. Run the script with the command below.
```sh
$ python3 compile_onnx_model.py \
-i Input3 \
-s 1,1,64,64 \
-o emotion_fp_onnx \
emotion-ferplus-8.onnx
```
6. Confirm that the `emotion_fp_onnx` directory is generated and that it contains the `deploy.json`, `deploy.so` and `deploy.params` files.
7. Before running the application, make sure to copy the `emotion_fp_onnx` and `ultraface_onnx` directories into the execution environment directory `exe` where the compiled sample application `sample_app_drpai_tvm_usbcam_http` is located.

## Processing Details
### DRP-AI mode
- Source Code: [tvm_drpai_emotionfp.cpp](../../../src/recognize/emotionfp/tvm_drpai_emotionfp.cpp)

The following are the processing details when "Emotion FERPlus (DRP-AI)" is selected.

#### Pre-processing
As a pre-processing step, the Face Detection model UltraFace is used.
For details, please refer to [Face Detection Processing Details](../../../docs/face_detection/ultraface/README.md#processing-details).

For each detected face, the following pre-processing is done by the CPU.
Note that some of it is processed by C++ OpenCV.

| Function | Details |
|:---|:---|
|Crop |Crop the YUYV image. Processed by CPU.|
|cvtColor |C++ OpenCV. Convert YUY2 to grayscale.|
|resize |C++ OpenCV. Resize to 64x64.|
|transpose |Transpose HWC to CHW order. Processed by CPU.|
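The per-face chain above can be sketched on the CPU for reference. A minimal NumPy version, assuming the grayscale conversion has already happened and using a nearest-neighbour resize as an illustrative stand-in for the OpenCV calls (the `preprocess_face` helper is not the sample application's code):

```python
import numpy as np

def preprocess_face(gray_frame, box, size=64):
    """Crop one detected face and shape it for Emotion FERPlus input.

    gray_frame: HxW uint8 grayscale image.
    box: (x0, y0, x1, y1) bounding box from the face detector.
    Returns a 1x1x64x64 float32 tensor.
    """
    x0, y0, x1, y1 = box
    face = gray_frame[y0:y1, x0:x1]              # Crop
    h, w = face.shape
    ys = np.arange(size) * h // size             # resize to 64x64
    xs = np.arange(size) * w // size
    face = face[ys][:, xs].astype(np.float32)
    return face[np.newaxis, np.newaxis, ...]     # add channel and batch dims
```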

#### Inference
The Model Object `emotion_fp_onnx` is generated from the ONNX Model Zoo Emotion FERPlus pre-trained model as described in [Model Information](#model-information).

#### Post-processing
Post-processing is processed by the CPU.
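The 1x8 output maps to the eight FER+ emotion classes; a minimal sketch assuming the label order published with the ONNX Model Zoo model (the helper is illustrative, not the sample application's code):

```python
import numpy as np

# FER+ label order as published with the ONNX Model Zoo model (assumption)
EMOTIONS = ["neutral", "happiness", "surprise", "sadness",
            "anger", "disgust", "fear", "contempt"]

def postprocess_emotion(scores):
    """Return the most likely emotion label from the 1x8 output."""
    idx = int(np.argmax(scores.reshape(-1)))
    return EMOTIONS[idx]
```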

#### About processing time
Details of the processing time displayed on the web browser are as follows.

| Processing | Details |
|:---|:---|
|Pre-processing |Sum of the time taken for the following operations:</br>- Face Detection pre-processing, inference and post-processing</br>- Emotion recognition pre-processing for all detected faces.|
|Inference |Time taken to run inference for all detected faces.|
|Post-processing |Time taken to run post-processing for all detected faces.|

For example, if two bounding boxes are detected in face detection, emotion recognition will be carried out twice.
Therefore, the inference time will be approximately twice the single-inference processing time, and the same applies to the other processing times.

---
[^1]: DRP-AI TVM is powered by EdgeCortix MERA™ Compiler Framework.
Binary file added: how-to/sample_app/docs/emotion_recognition/emotion_ferplus/img/emotionfp.jpg (+112 KB)
Binary file modified: how-to/sample_app/docs/face_detection/ultraface/img/ultraface.jpg (-92 KB, 51%)