diff --git a/.gitattributes b/.gitattributes index 2c20685..e69de29 100644 --- a/.gitattributes +++ b/.gitattributes @@ -1,3 +0,0 @@ -how-to/sample_app/exe/hrnet_onnx/deploy.so filter=lfs diff=lfs merge=lfs -text -how-to/sample_app/exe/yolov2_onnx/deploy.so filter=lfs diff=lfs merge=lfs -text -how-to/sample_app/exe/yolov3_onnx/deploy.so filter=lfs diff=lfs merge=lfs -text diff --git a/apps/README.md b/apps/README.md index 7fb6262..15bb14f 100644 --- a/apps/README.md +++ b/apps/README.md @@ -190,7 +190,7 @@ Provided fixed sequence is as follows. | No. | Function | Details | |:---|:---|:---| -| 1 |conv_yuv2rgb |Convert YUY2 to RGB.
Default input size is 4196x2160.| +| 1 |conv_yuv2rgb |Convert YUY2 to RGB.
Default input size is 4096x2160.| | 2 |resize |Resize to specified size.
Default is 640x640. | | 3 |cast_to_fp16 | Cast data to FP16 for DRP-AI.| | 4 |normalize | Normalize pixel values with mean and standard deviation.
Default values are mean=[0, 0, 0] and std=[1/255, 1/255, 1/255].| diff --git a/docs/Model_List.md index 06db616..1f5c060 100644 --- a/docs/Model_List.md +++ b/docs/Model_List.md @@ -9,115 +9,172 @@ Below is a list of AI models that Renesas has verified for conversion with the D * RZ/V2MA Linux Package v1.0.0 * RZ/V2MA DRP-AI Support Package v7.20 -| AI model | Task | Format | Inference time
(CPU only) | Inference time
(CPU+DRP-AI) | -| ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------- | -------------------------------------------------------- | ---------------------------- | ------------------------------ | -| ResNet18-v1 | Classification | ONNX | 488ms | 17ms | -| ResNet18-v2 | Classification | ONNX | 487ms | 19ms | -| ResNet34-v1 | Classification | ONNX | 870ms | 27ms | -| ResNet34-v2 | Classification | ONNX | 890ms | 29ms | -| ResNet50-v1 | Classification | ONNX | 1358ms | 36ms | -| ResNet50-v2 | Classification | ONNX | 1662ms | 46ms | -| ResNet101-v1 | Classification | ONNX | 2479ms | 56ms | -| ResNet101-v2 | Classification | ONNX | 2777ms | 70ms | -| MobileNetV2 | Classification | ONNX | 224ms | 21ms | -| SqueezeNet1.1-7 | Classification | ONNX | 142ms | 8ms | -| DenseNet9 | Classification | ONNX | 1345ms | 149ms | -| Inception-v1 | Classification | ONNX | 738ms | 649ms | -| Inception-v2 | Classification | ONNX | 1165ms | 128ms | -| YOLOv2 | Object Detection | ONNX | 6688ms | 81ms | -| YOLOv3 | Object Detection | ONNX | 15507ms | 222ms | -| YOLOv5l | Object Detection | ONNX | 13575ms | 222ms | -| HRNet | Body Keypiont 2D | ONNX | 3639ms | 61ms | -| ResNet18 | Classification | PyTorch | 488ms | 18ms | -| ResNet34 | Classification | PyTorch | 897ms | 27ms | -| ResNet50 | Classification | PyTorch | 1619ms | 38ms | -| ResNet101 | Classification | PyTorch | 2760ms | 58ms | -| ResNeXt-50-32x4d | Classification | PyTorch | 2038ms | 504ms | -| MobileNetV2 | Classification | PyTorch | 226ms | 21ms | -| SqueezeNet1_1 | Classification | PyTorch | 142ms | 41ms | -| DenseNet-121 | Classification | PyTorch | 1436ms | 307ms | -| DenseNet-161 | Classification | PyTorch | 4072ms | 1172ms | -| GoogleNet | Classification | PyTorch | 758ms | 153ms | -| MnasNet0_5 | Classification | PyTorch | 102ms | 37ms | -| DeepLabv3-resnet50 | Segmentation | PyTorch | 15467ms | 172ms | -| DeepLabv3-resnet101 | Segmentation | PyTorch | 21524ms | 274ms | -| FCN_resnet101 | Segmentation | PyTorch | 18151ms | 265ms | -| DeepPose | Body Keypoint 2D | PyTorch | 2239ms | 36ms | -| HRNetV2 | Face Detection 2D | PyTorch | 1936ms | 52ms | -| HRNetV2 DarkPose | Face Detection 2D | PyTorch | 3215ms | 67ms | -| [ConvNeXt atto](https://github.com/rwightman/pytorch-image-models#aug-15-2022) | Classification | [Timm](https://rwightman.github.io/pytorch-image-models) | 565ms | 397ms | -| [ConvNeXt femto](https://github.com/rwightman/pytorch-image-models#aug-5-2022) | Classification | [Timm](https://rwightman.github.io/pytorch-image-models) | 697ms | 498ms | -| [ConvNeXt femto ols](https://github.com/rwightman/pytorch-image-models#aug-5-2022) | Classification | [Timm](https://rwightman.github.io/pytorch-image-models) | 717ms | 488ms | -| [CSP-Darknet](https://rwightman.github.io/pytorch-image-models/models/csp-darknet/) | Classification | [Timm](https://rwightman.github.io/pytorch-image-models) | 1938ms | 96ms | -| [CSP-ResNet](https://rwightman.github.io/pytorch-image-models/models/csp-resnet/) | Classification | [Timm](https://rwightman.github.io/pytorch-image-models) | 1372ms | 68ms | -| [CSP-ResNeXt](https://rwightman.github.io/pytorch-image-models/models/csp-resnext/) | Classification | [Timm](https://rwightman.github.io/pytorch-image-models) | 1645ms | 484ms | -| [Darknet-53](https://github.com/rwightman/pytorch-image-models#july-8-2022) | Classification | 
[Timm](https://rwightman.github.io/pytorch-image-models) | 3471ms | 51ms | -| [Darknet-aa53](https://github.com/rwightman/pytorch-image-models#july-27-2022) | Classification | [Timm](https://rwightman.github.io/pytorch-image-models) | 2907ms | 94ms | -| [DenseNet121](https://rwightman.github.io/pytorch-image-models/models/densenet/) | Classification | [Timm](https://rwightman.github.io/pytorch-image-models) | 1438ms | 246ms | -| [DenseNet161](https://rwightman.github.io/pytorch-image-models/models/densenet/) | Classification | [Timm](https://rwightman.github.io/pytorch-image-models) | 4050ms | 1102ms | -| [DenseNet169](https://rwightman.github.io/pytorch-image-models/models/densenet/) | Classification | [Timm](https://rwightman.github.io/pytorch-image-models) | 1913ms | 406ms | -| [DenseNet201](https://rwightman.github.io/pytorch-image-models/models/densenet/) | Classification | [Timm](https://rwightman.github.io/pytorch-image-models) | 2856ms | 843ms | -| [DenseNet Blur 121d](https://rwightman.github.io/pytorch-image-models/models/densenet/) | Classification | [Timm](https://rwightman.github.io/pytorch-image-models) | 1568ms | 263ms | -| DLA(Dense Layer Aggregation)102x | Classification | [Timm](https://rwightman.github.io/pytorch-image-models) | 3090ms | 850ms | -| DLA102x2 | Classification | [Timm](https://rwightman.github.io/pytorch-image-models) | 4820ms | 1523ms | -| DLA46x_c | Classification | [Timm](https://rwightman.github.io/pytorch-image-models) | 374ms | 108ms | -| DLA60x_c | Classification | [Timm](https://rwightman.github.io/pytorch-image-models) | 403ms | 120ms | -| [DPN(Dual Path Network)107](https://rwightman.github.io/pytorch-image-models/models/dpn/) | Classification | [Timm](https://rwightman.github.io/pytorch-image-models) | 11043ms | 2257ms | -| [DPN68](https://rwightman.github.io/pytorch-image-models/models/dpn/) | Classification | [Timm](https://rwightman.github.io/pytorch-image-models) | 1448ms | 651ms | -| [DPN68b](https://rwightman.github.io/pytorch-image-models/models/dpn/) | Classification | [Timm](https://rwightman.github.io/pytorch-image-models) | 1533ms | 622ms | -| [ECA-ResNet101d](https://rwightman.github.io/pytorch-image-models/models/ecaresnet/) | Classification | [Timm](https://rwightman.github.io/pytorch-image-models) | 2935ms | 412ms | -| [ECA-ResNet26t](https://rwightman.github.io/pytorch-image-models/models/ecaresnet/) | Classification | [Timm](https://rwightman.github.io/pytorch-image-models) | 1141ms | 147ms | -| [ECA-ResNet50d](https://rwightman.github.io/pytorch-image-models/models/ecaresnet/) | Classification | [Timm](https://rwightman.github.io/pytorch-image-models) | 1732ms | 255ms | -| [ECA-ResNet50t](https://rwightman.github.io/pytorch-image-models/models/ecaresnet/) | Classification | [Timm](https://rwightman.github.io/pytorch-image-models) | 1771ms | 253ms | -| [ECA-ResNet light](https://rwightman.github.io/pytorch-image-models/models/ecaresnet/) | Classification | [Timm](https://rwightman.github.io/pytorch-image-models) | 1569ms | 194ms | -| [EfficientNet Edge Large](https://rwightman.github.io/pytorch-image-models/models/efficientnet/https://rwightman.github.io/pytorch-image-models/models/efficientnet/) | Classification | [Timm](https://rwightman.github.io/pytorch-image-models) | 2138ms | 198ms | -| [pruned EfficientNet Edge Large](https://rwightman.github.io/pytorch-image-models/models/efficientnet/https://rwightman.github.io/pytorch-image-models/models/efficientnet/) | Classification | 
[Timm](https://rwightman.github.io/pytorch-image-models) | 2128ms | 198ms | -| [EfficientNet Edge Medium](https://rwightman.github.io/pytorch-image-models/models/efficientnet/https://rwightman.github.io/pytorch-image-models/models/efficientnet/) | Classification | [Timm](https://rwightman.github.io/pytorch-image-models) | 1407ms | 161ms | -| [EfficientNet Edge Small](https://rwightman.github.io/pytorch-image-models/models/efficientnet/https://rwightman.github.io/pytorch-image-models/models/efficientnet/) | Classification | [Timm](https://rwightman.github.io/pytorch-image-models) | 942ms | 126ms | -| [pruned EfficientNet Edge Small](https://rwightman.github.io/pytorch-image-models/models/efficientnet/https://rwightman.github.io/pytorch-image-models/models/efficientnet/) | Classification | [Timm](https://rwightman.github.io/pytorch-image-models) | 942ms | 125ms | -| [EfficientNet Lite0](https://rwightman.github.io/pytorch-image-models/models/efficientnet/) | Classification | [Timm](https://rwightman.github.io/pytorch-image-models) | 295ms | 86ms | -| [Ensemble Adversarial Inception ResNet v2](https://rwightman.github.io/pytorch-image-models/models/ensemble-adversarial/) | Classification | [Timm](https://rwightman.github.io/pytorch-image-models) | 3374ms | 1739ms | -| [ESE-VoVNet 19-dw](https://rwightman.github.io/pytorch-image-models/models/ese-vovnet/) | Classification | [Timm](https://rwightman.github.io/pytorch-image-models) | 734ms | 80ms | -| [ESE-VoVNet 39b](https://rwightman.github.io/pytorch-image-models/models/ese-vovnet/) | Classification | [Timm](https://rwightman.github.io/pytorch-image-models) | 3765ms | 114ms | -| [FBNet-C](https://rwightman.github.io/pytorch-image-models/models/fbnet/) | Classification | [Timm](https://rwightman.github.io/pytorch-image-models) | 334ms | 105ms | -| [FBNetV3-B](https://rwightman.github.io/pytorch-image-models/models/fbnet/) | Classification | [Timm](https://rwightman.github.io/pytorch-image-models) | 434ms | 305ms | -| [FBNetV3-D](https://rwightman.github.io/pytorch-image-models/models/fbnet/) | Classification | [Timm](https://rwightman.github.io/pytorch-image-models) | 466ms | 259ms | -| [FBNetV3-G](https://rwightman.github.io/pytorch-image-models/models/fbnet/) | Classification | [Timm](https://rwightman.github.io/pytorch-image-models) | 893ms | 570ms | -| Global Context Resnet50t (gcresnet50t) | Classification | [Timm](https://rwightman.github.io/pytorch-image-models) | 1708ms | 165ms | -| GPU-Efficient ResNet Large (gernet_l) | Classification | [Timm](https://rwightman.github.io/pytorch-image-models) | 1737ms | 35ms | -| GPU-Efficient ResNet Middle (gernet_m) | Classification | [Timm](https://rwightman.github.io/pytorch-image-models) | 1493ms | 27ms | -| GPU-Efficient ResNet Small (gernet_s) | Classification | [Timm](https://rwightman.github.io/pytorch-image-models) | 353ms | 13ms | -| GhostNet-1.0x | Classification | [Timm](https://rwightman.github.io/pytorch-image-models) | 180ms | 87ms | -| [(Gluon) ResNet101 v1b](https://rwightman.github.io/pytorch-image-models/models/gloun-resnet/) | Classification | [Timm](https://rwightman.github.io/pytorch-image-models) | 2745ms | 58ms | -| [(Gluon) ResNet101 v1c](https://rwightman.github.io/pytorch-image-models/models/gloun-resnet/) | Classification | [Timm](https://rwightman.github.io/pytorch-image-models) | 2847ms | 58ms | -| [(Gluon) ResNet101 v1d](https://rwightman.github.io/pytorch-image-models/models/gloun-resnet/) | Classification | [Timm](https://rwightman.github.io/pytorch-image-models) | 
2836ms | 88ms | -| [(Gluon) ResNet101 v1s](https://rwightman.github.io/pytorch-image-models/models/gloun-resnet/) | Classification | [Timm](https://rwightman.github.io/pytorch-image-models) | 3163ms | 62ms | -| [(Gluon) ResNet152 v1b](https://rwightman.github.io/pytorch-image-models/models/gloun-resnet/) | Classification | [Timm](https://rwightman.github.io/pytorch-image-models) | 3930ms | 78ms | -| [(Gluon) ResNet152 v1c](https://rwightman.github.io/pytorch-image-models/models/gloun-resnet/) | Classification | [Timm](https://rwightman.github.io/pytorch-image-models) | 3991ms | 78ms | -| [(Gluon) ResNet152 v1d](https://rwightman.github.io/pytorch-image-models/models/gloun-resnet/) | Classification | [Timm](https://rwightman.github.io/pytorch-image-models) | 3996ms | 110ms | -| [(Gluon) ResNet152 v1s](https://rwightman.github.io/pytorch-image-models/models/gloun-resnet/) | Classification | [Timm](https://rwightman.github.io/pytorch-image-models) | 4312ms | 82ms | -| [(Gluon) ResNet18 v1b](https://rwightman.github.io/pytorch-image-models/models/gloun-resnet/) | Classification | [Timm](https://rwightman.github.io/pytorch-image-models) | 497ms | 18ms | -| [(Gluon) ResNet34 v1b](https://rwightman.github.io/pytorch-image-models/models/gloun-resnet/) | Classification | [Timm](https://rwightman.github.io/pytorch-image-models) | 873ms | 27ms | -| [(Gluon) ResNet50 v1b](https://rwightman.github.io/pytorch-image-models/models/gloun-resnet/) | Classification | [Timm](https://rwightman.github.io/pytorch-image-models) | 1638ms | 38ms | -| [(Gluon) ResNet50 v1c](https://rwightman.github.io/pytorch-image-models/models/gloun-resnet/) | Classification | [Timm](https://rwightman.github.io/pytorch-image-models) | 1727ms | 38ms | -| [(Gluon) ResNet50 v1d](https://rwightman.github.io/pytorch-image-models/models/gloun-resnet/) | Classification | [Timm](https://rwightman.github.io/pytorch-image-models) | 1720ms | 70ms | -| [(Gluon) ResNet50 v1s](https://rwightman.github.io/pytorch-image-models/models/gloun-resnet/) | Classification | [Timm](https://rwightman.github.io/pytorch-image-models) | 2036ms | 42ms | -| [(Gluon) ResNeXt101 32x4d](https://rwightman.github.io/pytorch-image-models/models/gloun-resnext/) | Classification | [Timm](https://rwightman.github.io/pytorch-image-models) | 3667ms | 927ms | -| [(Gluon) ResNeXt101 64x4d](https://rwightman.github.io/pytorch-image-models/models/gloun-resnext/) | Classification | [Timm](https://rwightman.github.io/pytorch-image-models) | 7244ms | 1703ms | -| [(Gluon) SENet154](https://rwightman.github.io/pytorch-image-models/models/gloun-senet/) | Classification | [Timm](https://rwightman.github.io/pytorch-image-models) | 9955ms | 1836ms | -| [(Gluon) SE-ResNeXt101 32-4d](https://rwightman.github.io/pytorch-image-models/models/gloun-seresnext/) | Classification | [Timm](https://rwightman.github.io/pytorch-image-models) | 3776ms | 1142ms | -| [(Gluon) SE-ResNeXt101 64-4d](https://rwightman.github.io/pytorch-image-models/models/gloun-seresnext/) | Classification | [Timm](https://rwightman.github.io/pytorch-image-models) | 7261ms | 1917ms | -| [(Gluon) SE-ResNeXt50 32-4d](https://rwightman.github.io/pytorch-image-models/models/gloun-seresnext/) | Classification | [Timm](https://rwightman.github.io/pytorch-image-models) | 2086ms | 628ms | -| [(Gluon) Xception65](https://rwightman.github.io/pytorch-image-models/models/gloun-xception/) | Classification | [Timm](https://rwightman.github.io/pytorch-image-models) | 4374ms | 140ms | -| HardcoreNAS_A | Classification | 
[Timm](https://rwightman.github.io/pytorch-image-models) | 216ms | 138ms | -| HardcoreNAS_B | Classification | [Timm](https://rwightman.github.io/pytorch-image-models) | 233ms | 128ms | -| HardcoreNAS_C | Classification | [Timm](https://rwightman.github.io/pytorch-image-models) | 245ms | 150ms | -| HardcoreNAS_D | Classification | [Timm](https://rwightman.github.io/pytorch-image-models) | 269ms | 153ms | -| HardcoreNAS_E | Classification | [Timm](https://rwightman.github.io/pytorch-image-models) | 314ms | 188ms | -| HardcoreNAS_F | Classification | [Timm](https://rwightman.github.io/pytorch-image-models) | 310ms | 186ms | -| [HRNet w18](https://rwightman.github.io/pytorch-image-models/models/hrnet/) | Classification | [Timm](https://rwightman.github.io/pytorch-image-models) | 2203ms | 60ms | -| [HRNet w18 small](https://rwightman.github.io/pytorch-image-models/models/hrnet/) | Classification | [Timm](https://rwightman.github.io/pytorch-image-models) | 868ms | 24ms | +| AI model | Task | Format | Inference time
(CPU only@V2MA) | Inference time
(CPU+DRP-AI@V2MA) | +| ------------------------------------------------------------------------------------------------------------------------- | ----------------- | ------------------------------------------------------------------------ | --------------------------------- | ----------------------------------- | +| ResNet18-v1 | Classification | ONNX | 488ms | 17ms | +| ResNet18-v2 | Classification | ONNX | 487ms | 19ms | +| ResNet34-v1 | Classification | ONNX | 870ms | 27ms | +| ResNet34-v2 | Classification | ONNX | 890ms | 29ms | +| ResNet50-v1 | Classification | ONNX | 1358ms | 36ms | +| ResNet50-v2 | Classification | ONNX | 1662ms | 46ms | +| ResNet101-v1 | Classification | ONNX | 2479ms | 56ms | +| ResNet101-v2 | Classification | ONNX | 2777ms | 70ms | +| MobileNetV2 | Classification | ONNX | 224ms | 21ms | +| SqueezeNet1.1-7 | Classification | ONNX | 142ms | 8ms | +| DenseNet9 | Classification | ONNX | 1345ms | 149ms | +| Inception-v1 | Classification | ONNX | 738ms | 649ms | +| Inception-v2 | Classification | ONNX | 1165ms | 128ms | +| YOLOv2 | Object Detection | ONNX | 6688ms | 81ms | +| YOLOv3 | Object Detection | ONNX | 15507ms | 222ms | +| YOLOv5l | Object Detection | ONNX | 13575ms | 222ms | +| HRNet | Body Keypiont 2D | ONNX | 3639ms | 61ms | +| ResNet18 | Classification | PyTorch | 488ms | 18ms | +| ResNet34 | Classification | PyTorch | 897ms | 27ms | +| ResNet50 | Classification | PyTorch | 1619ms | 38ms | +| ResNet101 | Classification | PyTorch | 2760ms | 58ms | +| ResNeXt-50-32x4d | Classification | PyTorch | 2038ms | 504ms | +| MobileNetV2 | Classification | PyTorch | 226ms | 21ms | +| SqueezeNet1_1 | Classification | PyTorch | 142ms | 41ms | +| DenseNet-121 | Classification | PyTorch | 1436ms | 307ms | +| DenseNet-161 | Classification | PyTorch | 4072ms | 1172ms | +| GoogleNet | Classification | PyTorch | 758ms | 153ms | +| MnasNet0_5 | Classification | PyTorch | 102ms | 37ms | +| DeepLabv3-resnet50 | Segmentation | PyTorch | 15467ms | 172ms | +| DeepLabv3-resnet101 | Segmentation | PyTorch | 21524ms | 274ms | +| FCN_resnet101 | Segmentation | PyTorch | 18151ms | 265ms | +| DeepPose | Body Keypoint 2D | PyTorch | 2239ms | 36ms | +| HRNetV2 | Face Detection 2D | PyTorch | 1936ms | 52ms | +| HRNetV2 DarkPose | Face Detection 2D | PyTorch | 3215ms | 67ms | +| [ConvNeXt atto](https://github.com/rwightman/pytorch-image-models#aug-15-2022) | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 565ms | 397ms | +| [ConvNeXt femto](https://github.com/rwightman/pytorch-image-models#aug-5-2022) | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 697ms | 498ms | +| [ConvNeXt femto ols](https://github.com/rwightman/pytorch-image-models#aug-5-2022) | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 717ms | 488ms | +| [CSP-Darknet](https://rwightman.github.io/pytorch-image-models/models/csp-darknet/) | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 1938ms | 96ms | +| [CSP-ResNet](https://rwightman.github.io/pytorch-image-models/models/csp-resnet/) | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 1372ms | 68ms | +| [CSP-ResNeXt](https://rwightman.github.io/pytorch-image-models/models/csp-resnext/) | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 1645ms | 484ms | +| 
[Darknet-53](https://github.com/rwightman/pytorch-image-models#july-8-2022) | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 3471ms | 51ms | +| [Darknet-aa53](https://github.com/rwightman/pytorch-image-models#july-27-2022) | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 2907ms | 94ms | +| [DenseNet121](https://rwightman.github.io/pytorch-image-models/models/densenet/) | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 1438ms | 246ms | +| [DenseNet161](https://rwightman.github.io/pytorch-image-models/models/densenet/) | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 4050ms | 1102ms | +| [DenseNet169](https://rwightman.github.io/pytorch-image-models/models/densenet/) | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 1913ms | 406ms | +| [DenseNet201](https://rwightman.github.io/pytorch-image-models/models/densenet/) | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 2856ms | 843ms | +| [DenseNet Blur 121d](https://rwightman.github.io/pytorch-image-models/models/densenet/) | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 1568ms | 263ms | +| DLA(Dense Layer Aggregation)102x | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 3090ms | 850ms | +| DLA102x2 | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 4820ms | 1523ms | +| DLA46x_c | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 374ms | 108ms | +| DLA60x_c | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 403ms | 120ms | +| [DPN(Dual Path Network)107](https://rwightman.github.io/pytorch-image-models/models/dpn/) | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 11043ms | 2257ms | +| [DPN68](https://rwightman.github.io/pytorch-image-models/models/dpn/) | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 1448ms | 651ms | +| [DPN68b](https://rwightman.github.io/pytorch-image-models/models/dpn/) | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 1533ms | 622ms | +| [ECA-ResNet101d](https://rwightman.github.io/pytorch-image-models/models/ecaresnet/) | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 2935ms | 412ms | +| [ECA-ResNet26t](https://rwightman.github.io/pytorch-image-models/models/ecaresnet/) | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 1141ms | 147ms | +| [ECA-ResNet50d](https://rwightman.github.io/pytorch-image-models/models/ecaresnet/) | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 1732ms | 255ms | +| [ECA-ResNet50t](https://rwightman.github.io/pytorch-image-models/models/ecaresnet/) | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 1771ms | 253ms | +| [ECA-ResNet light](https://rwightman.github.io/pytorch-image-models/models/ecaresnet/) | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 1569ms | 194ms | +| [EfficientNet Edge Large](https://rwightman.github.io/pytorch-image-models/models/efficientnet/) | 
Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 2138ms | 198ms | +| [pruned EfficientNet Edge Large](https://rwightman.github.io/pytorch-image-models/models/efficientnet/) | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 2128ms | 198ms | +| [EfficientNet Edge Medium](https://rwightman.github.io/pytorch-image-models/models/efficientnet/) | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 1407ms | 161ms | +| [EfficientNet Edge Small](https://rwightman.github.io/pytorch-image-models/models/efficientnet/) | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 942ms | 126ms | +| [pruned EfficientNet Edge Small](https://rwightman.github.io/pytorch-image-models/models/efficientnet/) | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 942ms | 125ms | +| [EfficientNet Lite0](https://rwightman.github.io/pytorch-image-models/models/efficientnet/) | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 295ms | 86ms | +| [Ensemble Adversarial Inception ResNet v2](https://rwightman.github.io/pytorch-image-models/models/ensemble-adversarial/) | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 3374ms | 1739ms | +| [ESE-VoVNet 19-dw](https://rwightman.github.io/pytorch-image-models/models/ese-vovnet/) | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 734ms | 80ms | +| [ESE-VoVNet 39b](https://rwightman.github.io/pytorch-image-models/models/ese-vovnet/) | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 3765ms | 114ms | +| [FBNet-C](https://rwightman.github.io/pytorch-image-models/models/fbnet/) | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 334ms | 105ms | +| [FBNetV3-B](https://rwightman.github.io/pytorch-image-models/models/fbnet/) | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 434ms | 305ms | +| [FBNetV3-D](https://rwightman.github.io/pytorch-image-models/models/fbnet/) | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 466ms | 259ms | +| [FBNetV3-G](https://rwightman.github.io/pytorch-image-models/models/fbnet/) | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 893ms | 570ms | +| Global Context Resnet50t (gcresnet50t) | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 1708ms | 165ms | +| GPU-Efficient ResNet Large (gernet_l) | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 1737ms | 35ms | +| GPU-Efficient ResNet Middle (gernet_m) | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 1493ms | 27ms | +| GPU-Efficient ResNet Small (gernet_s) | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 353ms | 13ms | +| GhostNet-1.0x | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 180ms | 87ms | +| [(Gluon) ResNet101 v1b](https://rwightman.github.io/pytorch-image-models/models/gloun-resnet/) | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 2745ms | 58ms | +| [(Gluon) ResNet101 
v1c](https://rwightman.github.io/pytorch-image-models/models/gloun-resnet/) | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 2847ms | 58ms | +| [(Gluon) ResNet101 v1d](https://rwightman.github.io/pytorch-image-models/models/gloun-resnet/) | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 2836ms | 88ms | +| [(Gluon) ResNet101 v1s](https://rwightman.github.io/pytorch-image-models/models/gloun-resnet/) | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 3163ms | 62ms | +| [(Gluon) ResNet152 v1b](https://rwightman.github.io/pytorch-image-models/models/gloun-resnet/) | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 3930ms | 78ms | +| [(Gluon) ResNet152 v1c](https://rwightman.github.io/pytorch-image-models/models/gloun-resnet/) | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 3991ms | 78ms | +| [(Gluon) ResNet152 v1d](https://rwightman.github.io/pytorch-image-models/models/gloun-resnet/) | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 3996ms | 110ms | +| [(Gluon) ResNet152 v1s](https://rwightman.github.io/pytorch-image-models/models/gloun-resnet/) | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 4312ms | 82ms | +| [(Gluon) ResNet18 v1b](https://rwightman.github.io/pytorch-image-models/models/gloun-resnet/) | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 497ms | 18ms | +| [(Gluon) ResNet34 v1b](https://rwightman.github.io/pytorch-image-models/models/gloun-resnet/) | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 873ms | 27ms | +| [(Gluon) ResNet50 v1b](https://rwightman.github.io/pytorch-image-models/models/gloun-resnet/) | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 1638ms | 38ms | +| [(Gluon) ResNet50 v1c](https://rwightman.github.io/pytorch-image-models/models/gloun-resnet/) | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 1727ms | 38ms | +| [(Gluon) ResNet50 v1d](https://rwightman.github.io/pytorch-image-models/models/gloun-resnet/) | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 1720ms | 70ms | +| [(Gluon) ResNet50 v1s](https://rwightman.github.io/pytorch-image-models/models/gloun-resnet/) | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 2036ms | 42ms | +| [(Gluon) ResNeXt101 32x4d](https://rwightman.github.io/pytorch-image-models/models/gloun-resnext/) | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 3667ms | 927ms | +| [(Gluon) ResNeXt101 64x4d](https://rwightman.github.io/pytorch-image-models/models/gloun-resnext/) | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 7244ms | 1703ms | +| [(Gluon) SENet154](https://rwightman.github.io/pytorch-image-models/models/gloun-senet/) | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 9955ms | 1836ms | +| [(Gluon) SE-ResNeXt101 32-4d](https://rwightman.github.io/pytorch-image-models/models/gloun-seresnext/) | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 3776ms | 1142ms | +| [(Gluon) SE-ResNeXt101 
64-4d](https://rwightman.github.io/pytorch-image-models/models/gloun-seresnext/) | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 7261ms | 1917ms | +| [(Gluon) SE-ResNeXt50 32-4d](https://rwightman.github.io/pytorch-image-models/models/gloun-seresnext/) | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 2086ms | 628ms | +| [(Gluon) Xception65](https://rwightman.github.io/pytorch-image-models/models/gloun-xception/) | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 4374ms | 140ms | +| HardcoreNAS_A | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 216ms | 138ms | +| HardcoreNAS_B | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 233ms | 128ms | +| HardcoreNAS_C | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 245ms | 150ms | +| HardcoreNAS_D | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 269ms | 153ms | +| HardcoreNAS_E | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 314ms | 188ms | +| HardcoreNAS_F | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 310ms | 186ms | +| [HRNet w18](https://rwightman.github.io/pytorch-image-models/models/hrnet/) | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 2203ms | 60ms | +| [HRNet w18 small](https://rwightman.github.io/pytorch-image-models/models/hrnet/) | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 868ms | 24ms | +| [HRNet w18 small V2](https://rwightman.github.io/pytorch-image-models/models/hrnet/) | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 1367ms | 38ms | +| [HRNet w30](https://rwightman.github.io/pytorch-image-models/models/hrnet/) | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 3551ms | 78ms | +| [HRNet w32](https://rwightman.github.io/pytorch-image-models/models/hrnet/) | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 4604ms | 75ms | +| [HRNet w40](https://rwightman.github.io/pytorch-image-models/models/hrnet/) | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 5731ms | 104ms | +| [HRNet w44](https://rwightman.github.io/pytorch-image-models/models/hrnet/) | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 7707ms | 116ms | +| [HRNet w48](https://rwightman.github.io/pytorch-image-models/models/hrnet/) | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 7854ms | 123ms | +| [Instagram ResNeXt101 32x8 WSL](https://rwightman.github.io/pytorch-image-models/models/ig-resnext/) | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 8149ms | 2938ms | +| [Inception ResNet v2](https://rwightman.github.io/pytorch-image-models/models/inception-resnet-v2/) | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 3358ms | 1739ms | +| PP-LCNet-0.5x | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 48ms | 42ms | +| PP-LCNet-0.75x | Classification | 
[pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 97ms | 66ms | +| PP-LCNet-1x | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 136ms | 82ms | +| [(Legacy) SENet-154](https://rwightman.github.io/pytorch-image-models/models/legacy-senet/) | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 9974ms | 1857ms | +| [(Legacy) SE-ResNet-152](https://rwightman.github.io/pytorch-image-models/models/legacy-se-resnet/) | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 3766ms | 587ms | +| [(Legacy) SE-ResNet-18](https://rwightman.github.io/pytorch-image-models/models/legacy-se-resnet/) | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 488ms | 66ms | +| [(Legacy) SE-ResNet-34](https://rwightman.github.io/pytorch-image-models/models/legacy-se-resnet/) | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 880ms | 100ms | +| [(Legacy) SE-ResNet-50](https://rwightman.github.io/pytorch-image-models/models/legacy-se-resnet/) | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 1392ms | 248ms | +| [(Legacy) SE-ResNeXt-26](https://rwightman.github.io/pytorch-image-models/models/legacy-se-resnext/) | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 1209ms | 355ms | +| [MnasNet-B1 depth multiplier 1.0](https://rwightman.github.io/pytorch-image-models/models/mnasnet/) | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 236ms | 64ms | +| [MnasNet-Small depth multiplier 1.0](https://rwightman.github.io/pytorch-image-models/models/mnasnet/) | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 78ms | 33ms | +| [MobileNet V2 with channel multiplier of 0.5](https://rwightman.github.io/pytorch-image-models/models/mobilenet-v2/) | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 107ms | 16ms | +| [MobileNet V2 with channel multiplier of 1.0](https://rwightman.github.io/pytorch-image-models/models/mobilenet-v2/) | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 226ms | 21ms | +| [MobileNet V2 with channel multiplier of 1.1](https://rwightman.github.io/pytorch-image-models/models/mobilenet-v2/) | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 358ms | 27ms | +| [MobileNet V2 with channel multiplier of 1.2](https://rwightman.github.io/pytorch-image-models/models/mobilenet-v2/) | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 527ms | 34ms | +| [MobileNet V2 with channel multiplier of 1.4](https://rwightman.github.io/pytorch-image-models/models/mobilenet-v2/) | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 474ms | 29ms | +| [MobileNet V3 Large 1.0](https://rwightman.github.io/pytorch-image-models/models/mobilenet-v3/) | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 196ms | 92ms | +| [MobileNet V3 Large 1.0, 21k pretraining](https://rwightman.github.io/pytorch-image-models/models/mobilenet-v3/) | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 221ms | 101ms | +| [MobileNet V3 (RW 
variant)](https://rwightman.github.io/pytorch-image-models/models/mobilenet-v3/) | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 208ms | 92ms | +| [MobileNet V3 Small 0.5](https://rwightman.github.io/pytorch-image-models/models/mobilenet-v3/) | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 33ms | 31ms | +| [MobileNet V3 Small 0.75](https://rwightman.github.io/pytorch-image-models/models/mobilenet-v3/) | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 50ms | 39ms | +| [MobileNet V3 Small 1.0](https://rwightman.github.io/pytorch-image-models/models/mobilenet-v3/) | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 62ms | 48ms | +| [RegNetX 200MF](https://rwightman.github.io/pytorch-image-models/models/regnetx/) | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 192ms | 67ms | +| [RegNetX 400MF](https://rwightman.github.io/pytorch-image-models/models/regnetx/) | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 294ms | 171ms | +| [RegNetX 600MF](https://rwightman.github.io/pytorch-image-models/models/regnetx/) | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 426ms | 287ms | +| [RegNetX 800MF](https://rwightman.github.io/pytorch-image-models/models/regnetx/) | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 514ms | 277ms | +| [RegNetX 1.6GF](https://rwightman.github.io/pytorch-image-models/models/regnetx/) | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 1040ms | 657ms | +| [RegNetX 3.2GF](https://rwightman.github.io/pytorch-image-models/models/regnetx/) | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 2412ms | 1838ms | +| [RegNetX 4.0GF](https://rwightman.github.io/pytorch-image-models/models/regnetx/) | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 2847ms | 1692ms | +| [RegNetX 6.4GF](https://rwightman.github.io/pytorch-image-models/models/regnetx/) | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 4990ms | 2919ms | +| [RegNetX 8.0GF](https://rwightman.github.io/pytorch-image-models/models/regnetx/) | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 5974ms | 4696ms | +| [RegNetX 16GF](https://rwightman.github.io/pytorch-image-models/models/regnetx/) | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 13048ms | 4696ms | +| [RegNetY 200MF](https://rwightman.github.io/pytorch-image-models/models/regnety/) | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 204ms | 71ms | +| [RegNetY 400MF](https://rwightman.github.io/pytorch-image-models/models/regnety/) | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 306ms | 138ms | +| [RegNetY 600MF](https://rwightman.github.io/pytorch-image-models/models/regnety/) | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 506ms | 240ms | +| [RegNetY 800MF](https://rwightman.github.io/pytorch-image-models/models/regnety/) | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 577ms | 292ms | +| 
[RegNetY 1.6GF](https://rwightman.github.io/pytorch-image-models/models/regnety/) | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 1086ms | 734ms | +| [RegNetY 4.0GF](https://rwightman.github.io/pytorch-image-models/models/regnety/) | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 3272ms | 2556ms | +| [RegNetY 8.0GF](https://rwightman.github.io/pytorch-image-models/models/regnety/) | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 3272ms | 2556ms | +| [RegNetY 16GF](https://rwightman.github.io/pytorch-image-models/models/regnety/) | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 12655ms | 8141ms | +| [RegNetY 32GF](https://rwightman.github.io/pytorch-image-models/models/regnety/) | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 24226ms | 17895ms | +| [RepVGG-A2](https://rwightman.github.io/pytorch-image-models/models/#repvgg-byobnetpy) | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 2356ms | 79ms | +| [RepVGG-B0](https://rwightman.github.io/pytorch-image-models/models/#repvgg-byobnetpy) | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 970ms | 68ms | +| [RepVGG-B1](https://rwightman.github.io/pytorch-image-models/models/#repvgg-byobnetpy) | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 4059ms | 115ms | +| [RepVGG-B1g4](https://rwightman.github.io/pytorch-image-models/models/#repvgg-byobnetpy) | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 4025ms | 2386ms | +| [RepVGG-B2](https://rwightman.github.io/pytorch-image-models/models/#repvgg-byobnetpy) | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 10556ms | 155ms | +| [RepVGG-B2g4](https://rwightman.github.io/pytorch-image-models/models/#repvgg-byobnetpy) | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 8199ms | 3683ms | +| [RepVGG-B3](https://rwightman.github.io/pytorch-image-models/models/#repvgg-byobnetpy) | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 12048ms | 189ms | +| [RepVGG-B3g4](https://rwightman.github.io/pytorch-image-models/models/#repvgg-byobnetpy) | Classification | [pytorch-image-models](https://rwightman.github.io/pytorch-image-models) | 10102ms | 5250ms | --- -[^1]: DRP-AI TVM is powered by EdgeCortix MERA™ Compiler Framework. +[^1]: DRP-AI TVM is powered by EdgeCortix MERA™ Compiler Framework. 
\ No newline at end of file diff --git a/how-to/README.md b/how-to/README.md index 1b64195..bc95753 100644 --- a/how-to/README.md +++ b/how-to/README.md @@ -21,21 +21,21 @@ This directory contains the solution to specific problems related to DRP-AI TVM[ - - + + Face Detection - Hand Landmark Localization - Face Expression Recognition + Hand Landmark Localization + Emotion Recognition - + - Classification + Classification Semantic Segmentation Age Classification diff --git a/how-to/img/2d_hand_estimation.png b/how-to/img/2d_hand_estimation.png new file mode 100755 index 0000000..be8a742 Binary files /dev/null and b/how-to/img/2d_hand_estimation.png differ diff --git a/how-to/img/2d_hand_estimation_dev.png b/how-to/img/2d_hand_estimation_dev.png deleted file mode 100644 index 9bd861c..0000000 Binary files a/how-to/img/2d_hand_estimation_dev.png and /dev/null differ diff --git a/how-to/img/3d_pose_estimation_dev.png b/how-to/img/3d_pose_estimation_dev.png deleted file mode 100644 index 70d03d4..0000000 Binary files a/how-to/img/3d_pose_estimation_dev.png and /dev/null differ diff --git a/how-to/img/classification.png b/how-to/img/classification.png new file mode 100755 index 0000000..861fc35 Binary files /dev/null and b/how-to/img/classification.png differ diff --git a/how-to/img/classification_dev.png b/how-to/img/classification_dev.png deleted file mode 100644 index 3a6ebce..0000000 Binary files a/how-to/img/classification_dev.png and /dev/null differ diff --git a/how-to/img/emotion.png b/how-to/img/emotion.png new file mode 100755 index 0000000..533e62d Binary files /dev/null and b/how-to/img/emotion.png differ diff --git a/how-to/img/face_recognition_dev.png b/how-to/img/face_recognition_dev.png deleted file mode 100644 index 22c511a..0000000 Binary files a/how-to/img/face_recognition_dev.png and /dev/null differ diff --git a/how-to/sample_app/README.md b/how-to/sample_app/README.md index ab37a84..5ba8669 100644 --- a/how-to/sample_app/README.md +++ b/how-to/sample_app/README.md @@ -3,7 +3,7 @@ ## Overview This page explains about the sample application for DRP-AI TVM[^1] that uses USB Camera as an input and transfer the result via HTTP to display on HTML. Sample application code and its execution environment are provided in this directory. -This application is for RZ/V2MA Evaluation Board Kit. +This application is for **RZ/V2MA Evaluation Board Kit**. @@ -44,19 +44,14 @@ In `src` directory, followings are provided. ### Execution Environment In `exe` directory, following files are provided as execution environment to be placed on target board. +**Note that Model Object files (DRP-AI TVM[^1] compile result) are not provided.** | File/Directory | Details | |:---|:---| -|face_deeppose_pt/ | DeepPose model for DRP-AI mode. | -|face_deeppose_cpu/ | DeepPose model for CPU mode. | -|yolov3_onnx/ | YOLOv3 model for DRP-AI mode. | -|yolov2_onnx/ | YOLOv2 model for DRP-AI mode. | -|tinyyolov3_onnx/ | Tiny YOLOv3 model for DRP-AI mode. | -|tinyyolov2_onnx/ | Tiny YOLOv2 model for DRP-AI mode. | -|hrnet_onnx/ | HRNet model for DRP-AI mode. | |preprocess_tvm_v2ma/ | Pre-processing Runtime Object files. | |sample_app_drpai_tvm_usbcam_http | Application itself. | - +|coco-labels-2014_2017.txt | Label list for Object Detection. | +|synset_words_imagenet.txt | Label list for Classification. | In `etc` directory, following files are provided as execution environment to be placed on client PC that displays HTTP result. 
@@ -79,8 +74,64 @@ Please refer to [Application Example](../../apps/README.md#how-to-compile-the-ap Please make sure to change the SDK path to the one generated in 1. ## Run the application +Before running the application, please compile the AI models to generate following directories according to the instruction provided in each "How to create Model Object" column. +Copy the generated directories to `exe` directory above so that Model Objet directories are placed in the same directory as `sample_app_drpai_tvm_usbcam_http` application. + +| File/Directory | Details | How to create Model Object | +|:---|:---|:---| +|face_deeppose_pt/ | DeepPose model for DRP-AI mode. |[Face Landmark Localization](docs/face_landmark_localization/deeppose) | +|face_deeppose_cpu/ | DeepPose model for CPU mode. |[Face Landmark Localization](docs/face_landmark_localization/deeppose) | +|yolov3_onnx/ | YOLOv3 model for DRP-AI mode. |[Object Detection](docs/object_detection/yolo) | +|yolov2_onnx/ | YOLOv2 model for DRP-AI mode. |[Object Detection](docs/object_detection/yolo) | +|tinyyolov3_onnx/ | Tiny YOLOv3 model for DRP-AI mode. |[Object Detection](docs/object_detection/yolo) | +|tinyyolov2_onnx/ | Tiny YOLOv2 model for DRP-AI mode. |[Object Detection](docs/object_detection/yolo) | +|ultraface_onnx/ | UltraFace model for DRP-AI mode.|[Face Detection](docs/face_detection/ultraface) | +|hrnet_onnx/ | HRNet model for DRP-AI mode.|[Human Pose Estimation](docs/human_pose_estimation/hrnet) | +|hrnetv2_pt/ | HRNetv2 model for DRP-AI mode. |[Hand Landmark Localization](docs/hand_landmark_localization/hrnetv2) | +|emotion_fp_onnx/ | Emotion FERPlus model for DRP-AI mode. |[Emotion Recognition](docs/emotion_recognition/emotion_ferplus) | +|googlenet_onnx/ | GoogleNet model for DRP-AI mode. |[Classification](docs/classification/googlenet) | + +Filesystem on the board should look like below. +```sh +/ +├── usr/ +│ └── lib64/ +│ └── libtvm_runtime.so +└── home/ + └── root/ + └── exe/ + ├── face_deeppose_pt/ + │ ├── deploy.json + │ ├── deploy.params + │ └── deploy.so + ├── face_deeppose_cpu/ + │ ... Other Model Object directories ... + │ + ├── preprocess_tvm_v2ma/ + ├── coco-labels-2014_2017.txt + ├── synset_words_imagenet.txt + └── sample_app_drpai_tvm_usbcam_http +``` + To run the application, please refer to the instruction for USB Camera HTTP version application in RZ/V2MA DRP-AI Sample Application Note provided in RZ/V2MA DRP-AI Support Package. +## Note +When the web browser is closed to terminate the application following error may appear on the console. +```sh +[ERROR] Failed to enqueue _capture buffer. +Send application message to web client.[Failed to enqueue _capture buffer. +Restart the application.] + +... + +********************** END ********************* +***** Stop Recognize. ***** +<<<<<<<<<<<<<<<<<<<<< Message Thread Terminated >>>>>>>>>>>>>>>>>> +All Finish +``` +Since the application has already been terminated, the error does not affect the application. +Application can be restarted without any special procedures, i.e. reboot the board. + ## Application Specification ### Model Information Please refer to [AI Sample Application](../README.md#ai-sample-application) for model information. 
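Before copying the execution environment to the board, it can help to confirm that each compiled Model Object is complete. The sketch below is a small host-side helper and is not part of the sample application; the directory names come from the table in "Run the application" above, and the `exe` path is an assumption about where the compiled results were collected.

```python
# Host-side sanity check (illustrative only): confirm that each Model Object
# directory listed in "Run the application" contains the three files
# produced by DRP-AI TVM compilation before copying `exe` to the board.
from pathlib import Path

MODEL_DIRS = [
    "face_deeppose_pt", "face_deeppose_cpu", "yolov3_onnx", "yolov2_onnx",
    "tinyyolov3_onnx", "tinyyolov2_onnx", "ultraface_onnx", "hrnet_onnx",
    "hrnetv2_pt", "emotion_fp_onnx", "googlenet_onnx",
]
REQUIRED_FILES = ["deploy.json", "deploy.params", "deploy.so"]

exe_dir = Path("exe")  # assumed local copy of the execution environment directory
for model in MODEL_DIRS:
    missing = [f for f in REQUIRED_FILES if not (exe_dir / model / f).is_file()]
    print(f"{model}: {'OK' if not missing else 'missing ' + ', '.join(missing)}")
```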
diff --git a/how-to/sample_app/docs/classification/googlenet/README.md b/how-to/sample_app/docs/classification/googlenet/README.md new file mode 100755 index 0000000..2b4f031 --- /dev/null +++ b/how-to/sample_app/docs/classification/googlenet/README.md @@ -0,0 +1,63 @@ +# Classification + +### Model: [GoogleNet](#model-information) +Sample application code and its execution environment are provided in **[here](../../../../sample_app)**. + +## Overview +This page explains about Classification in the [sample application](../../../../sample_app) for DRP-AI TVM[^1]. + + + +## Model Information +- GoogleNet: [ONNX Model Zoo](https://github.com/onnx/models/tree/main/vision/classification/inception_and_googlenet/googlenet) googlenet-9.onnx +Dataset: [ILSVRC2014](https://image-net.org/challenges/LSVRC/2014/) +Input size: 1x3x224x224 +Output size: 1x1000 + +### How to compile the model +To run the Classification, `googlenet_onnx` Model Object is required. +Follow the instuction below to prepare the Model Object. + +1. Set the environment variables, i.e. `$TVM_HOME` etc., according to [Installation](../../../../../setup/). +2. Download the onnx file from [ONNX Model Zoo](https://github.com/onnx/models/tree/main/vision/classification/inception_and_googlenet/googlenet). +3. Place the onnx file in `$TVM_HOME/../tutorials`. +4. Change the `addr_map_start` setting in `compile_onnx_model.py` provided in [Compile Tutorial](../../../../../tutorials) to `0x438E0000`. +5. Run the with the command below. +```sh +$ python3 compile_onnx_model.py \ +-i data_0 \ +-s 1,3,224,224 \ +-o googlenet_onnx \ +googlenet-9.onnx +``` +6. Confirm that `googlenet_onnx` directory is generated and it contains `deploy.json`, `deploy.so` and `deploy.params` files. +7. Before running the application, make sure to copy the `googlenet_onnx` directory into the execution environment directory `exe` where the compiled sample application `sample_app_drpai_tvm_usbcam_http` is located. + + +## Processing Details +### DRP-AI mode +- Source Code: [tvm_drpai_googlenet.cpp](../../../src/recognize/googlenet/tvm_drpai_googlenet.cpp) + +Followings are processing details if user selected "GoogleNet (DRP-AI)". + +#### Pre-processing +Pre-processing is done by DRP-AI Pre-processing Runtime and CPU. + +| Function | Details | +|:---|:---| +|conv_yuv2rgb |Convert YUY2 to RGB processed by DRP-AI Pre-processing Runtime.| +|resize |Resize to 224x224 processed by DRP-AI Pre-processing Runtime.| +|cast_to_fp16 | Cast data to FP16 for DRP-AI processed by DRP-AI Pre-processing Runtime.| +|normalize | Normalize pixel values with mean values of {123.68, 116.779, 103.939}
processed by DRP-AI Pre-processing Runtime.| +|transpose | Transpose HWC to CHW order processed by DRP-AI Pre-processing Runtime. | +|cast_fp16_fp32 | Cast FP16 data to FP32 for DRP-AI TVM[^1] input
processed by DRP-AI Pre-processing Runtime.| +|rgb2bgr | Convert RGB to BGR processed by CPU.| + +#### Inference +The Object files `googlenet_onnx` is generated from ONNX Model Zoo GoogleNet pre-trained model as described in [Model Information](#model-information). + +#### Post-processing +Post-processing is processed by CPU. + +--- +[^1]: DRP-AI TVM is powered by EdgeCortix MERA™ Compiler Framework. diff --git a/how-to/sample_app/docs/classification/googlenet/img/googlenet.jpg b/how-to/sample_app/docs/classification/googlenet/img/googlenet.jpg new file mode 100755 index 0000000..b4f5cb6 Binary files /dev/null and b/how-to/sample_app/docs/classification/googlenet/img/googlenet.jpg differ diff --git a/how-to/sample_app/docs/emotion_recognition/emotion_ferplus/README.md b/how-to/sample_app/docs/emotion_recognition/emotion_ferplus/README.md new file mode 100755 index 0000000..370e8c8 --- /dev/null +++ b/how-to/sample_app/docs/emotion_recognition/emotion_ferplus/README.md @@ -0,0 +1,85 @@ +# Emotion Recognition + +### Model: [Emotion FERPlus](#model-information) +Sample application code and its execution environment are provided in **[here](../../../../sample_app)**. + +## Overview +This page explains about Emotion Recognition in the [sample application](../../../../sample_app) for DRP-AI TVM[^1]. + + + +## Model Information +- Emotion FERPlus: [ONNX Model Zoo](https://github.com/onnx/models/tree/main/vision/body_analysis/emotion_ferplus) emotion-ferplus-8.onnx +Dataset: See [ONNX Model Zoo](https://github.com/onnx/models/tree/main/vision/body_analysis/emotion_ferplus#dataset). +Input size: 1x1x64x64 +Output size: 1x8 + +Emotion FERPlus can only classify the face expression of single person. +To enable multiple face emotion recognition, this application used [UltraFace](../../../docs/face_detection/ultraface/) as pre-processing. +To see more details on UltraFace, please see [Face Detection](../../../docs/face_detection/ultraface/). + + +### How to compile the model +To run the Emotion Recognition, `emotion_fp_onnx` Model Object and `ultraface_onnx` Model Object are required. +Follow the instuction below to prepare the `emotion_fp_onnx` Model Object. +For `ultraface_onnx` Model Object, please refer to [Face Detection](../../../docs/face_detection/ultraface/). + + +1. Set the environment variables, i.e. `$TVM_HOME` etc., according to [Installation](../../../../../setup/). +2. Download the onnx file from [ONNX Model Zoo](https://github.com/onnx/models/tree/main/vision/body_analysis/emotion_ferplus). +3. Place the onnx file in `$TVM_HOME/../tutorials`. +4. Change the `addr_map_start` setting in `compile_onnx_model.py` provided in [Compile Tutorial](../../../../../tutorials) to `0x442d0000`. +Note that the value **must NOT** be default value `0x438E0000` in order to avoid conflict with UltraFace Model Object. +5. Run the with the command below. +```sh +$ python3 compile_onnx_model.py \ +-i Input3 \ +-s 1,1,64,64 \ +-o emotion_fp_onnx \ +emotion-ferplus-8.onnx +``` +6. Confirm that `emotion_fp_onnx` directory is generated and it contains `deploy.json`, `deploy.so` and `deploy.params` files. +7. Before running the application, make sure to copy the `emotion_fp_onnx` directory and `ultraface_onnx` directory into the execution environment directory `exe` where the compiled sample application `sample_app_drpai_tvm_usbcam_http` is located. 
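Step 4 above is easy to miss: the `addr_map_start` value for `emotion_fp_onnx` must differ from the UltraFace default so that the two Model Objects do not overlap in DRP-AI memory. The snippet below is only an illustrative host-side check; it assumes the setting appears in `compile_onnx_model.py` as a plain `addr_map_start = 0x...` assignment, which may not match every revision of the tutorial script.

```python
# Illustrative check of the addr_map_start setting (assumes a plain
# "addr_map_start = 0x..." assignment, which may not hold for every
# revision of compile_onnx_model.py).
import re
from pathlib import Path

REQUIRED = 0x442D0000  # value from step 4; 0x438E0000 is used for ultraface_onnx, so the two must differ

text = Path("compile_onnx_model.py").read_text()
match = re.search(r"addr_map_start\s*=\s*(0x[0-9A-Fa-f]+)", text)
current = int(match.group(1), 16) if match else None

if current == REQUIRED:
    print("addr_map_start is already set for emotion_fp_onnx")
elif current is None:
    print("addr_map_start assignment not found; edit compile_onnx_model.py manually")
else:
    print(f"Edit compile_onnx_model.py: addr_map_start should be {REQUIRED:#x}, not {current:#x}")
```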
+
+
+## Processing Details
+### DRP-AI mode
+- Source Code: [tvm_drpai_emotionfp.cpp](../../../src/recognize/emotionfp/tvm_drpai_emotionfp.cpp)
+
+The following are the processing details when the user selects "Emotion FERPlus (DRP-AI)".
+
+#### Pre-processing
+As pre-processing, the Face Detection model UltraFace is used.
+For details, please refer to [Face Detection Processing Details](../../../docs/face_detection/ultraface/README.md#processing-details).
+
+For each detected face, the following pre-processing is done by the CPU.
+Note that some of the steps are processed by C++ OpenCV.
+
+| Function | Details |
+|:---|:---|
+|Crop | Crop YUYV image. Processed by CPU. |
+|cvtColor | C++ OpenCV. Convert YUY2 to Grayscale.|
+|resize |C++ OpenCV. Resize to 64x64.|
+|transpose |Transpose HWC to CHW order. Processed by CPU.|
+
+#### Inference
+The Object files `emotion_fp_onnx` are generated from the ONNX Model Zoo Emotion FERPlus pre-trained model as described in [Model Information](#model-information).
+
+#### Post-processing
+Post-processing is processed by CPU.
+
+
+#### About processing time
+Details of the processing time displayed on the web browser are as follows.
+
+| Processing | Details |
+|:---|:---|
+|Pre-processing | Sum of the time taken for the following operations.
 - Face Detection pre-processing, inference and post-processing
 - Emotion recognition pre-processing for all detected faces. |
+|Inference | Time taken to run inference for all detected faces.|
+|Post-processing |Time taken to run post-processing for all detected faces.|
+
+For example, if two bounding boxes are detected in face detection, emotion recognition will be carried out twice.
+Therefore, the inference time will be approximately twice the single-inference processing time, and the same applies to the other processing times.
+
+---
+[^1]: DRP-AI TVM is powered by EdgeCortix MERA™ Compiler Framework.
diff --git a/how-to/sample_app/docs/emotion_recognition/emotion_ferplus/img/emotionfp.jpg b/how-to/sample_app/docs/emotion_recognition/emotion_ferplus/img/emotionfp.jpg
new file mode 100755
index 0000000..5afcd47
Binary files /dev/null and b/how-to/sample_app/docs/emotion_recognition/emotion_ferplus/img/emotionfp.jpg differ
diff --git a/how-to/sample_app/docs/face_detection/ultraface/README.md b/how-to/sample_app/docs/face_detection/ultraface/README.md
index 3497dfc..022be8f 100644
--- a/how-to/sample_app/docs/face_detection/ultraface/README.md
+++ b/how-to/sample_app/docs/face_detection/ultraface/README.md
@@ -14,6 +14,27 @@ Dataset: See [ONNX Model Zoo](https://github.com/onnx/models/tree/main/vision/bo
 Input size: 1x3x240x320
 Output size: 1x4420x2, 1x4420x4
+
+### How to compile the model
+To run Face Detection, the `ultraface_onnx` Model Object is required.
+Follow the instructions below to prepare the `ultraface_onnx` Model Object.
+
+1. Set the environment variables, i.e. `$TVM_HOME` etc., according to [Installation](../../../../../setup/).
+2. Download the onnx file from [ONNX Model Zoo](https://github.com/onnx/models/tree/main/vision/body_analysis/ultraface).
+3. Place the onnx file in `$TVM_HOME/../tutorials`.
+4. Change the `addr_map_start` setting in `compile_onnx_model.py` provided in [Compile Tutorial](../../../../../tutorials) to `0x438E0000`.
+5. Run the script with the command below.
+```sh
+$ python3 compile_onnx_model.py \
+-i input \
+-s 1,3,240,320 \
+-o ultraface_onnx \
+version-RFB-320.onnx
+```
+6. Confirm that the `ultraface_onnx` directory is generated and that it contains the `deploy.json`, `deploy.so` and `deploy.params` files.
+7. Before running the application, make sure to copy the `ultraface_onnx` directory into the execution environment directory `exe` where the compiled sample application `sample_app_drpai_tvm_usbcam_http` is located.
+
+
 ## Processing Details
 ### DRP-AI mode
 - Source Code: [tvm_drpai_ultraface.cpp](../../../src/recognize/ultraface/tvm_drpai_ultraface.cpp)
@@ -34,7 +55,6 @@ Pre-processing is done by DRP-AI Pre-processing Runtime, which allows following
 #### Inference
 The Object files `ultraface_onnx` is generated from ONNX Model Zoo pre-trained model as described in [Model Information](#model-information).
-Please refer to [Compile Tutorial](../../../../../tutorials) for more details on compiling model.
 #### Post-processing
 Post-processing is processed by CPU.
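For orientation, the CPU post-processing of the two UltraFace outputs (1x4420x2 scores and 1x4420x4 boxes) typically amounts to score thresholding followed by non-maximum suppression and scaling back to the image size. The sketch below is an illustrative NumPy version under that assumption; it is not the sample application's C++ implementation, and the thresholds, score channel order and normalized corner box format are assumptions.

```py
import numpy as np

def box_area(b):
    """Areas of boxes in [x1, y1, x2, y2] form; b has shape (..., 4)."""
    return np.clip(b[..., 2] - b[..., 0], 0, None) * np.clip(b[..., 3] - b[..., 1], 0, None)

def iou(box, boxes):
    """IoU between a single box (4,) and an array of boxes (N, 4)."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    return inter / (box_area(box) + box_area(boxes) - inter + 1e-9)

def decode_faces(scores, boxes, img_w, img_h, score_thr=0.7, iou_thr=0.5):
    """scores: (1, 4420, 2); boxes: (1, 4420, 4) in normalized [x1, y1, x2, y2] (assumed)."""
    probs = scores[0, :, 1]                      # face probability (assumed channel order)
    keep = probs > score_thr
    probs, cand = probs[keep], boxes[0][keep]
    order = np.argsort(-probs)                   # highest score first
    picked = []
    while order.size > 0:
        i = order[0]
        picked.append(i)
        order = order[1:][iou(cand[i], cand[order[1:]]) < iou_thr]  # drop overlapping boxes
    scale = np.array([img_w, img_h, img_w, img_h], dtype=np.float32)
    return cand[picked] * scale, probs[picked]
```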
diff --git a/how-to/sample_app/docs/face_detection/ultraface/img/ultraface.jpg b/how-to/sample_app/docs/face_detection/ultraface/img/ultraface.jpg index 170ba95..4a41d53 100644 Binary files a/how-to/sample_app/docs/face_detection/ultraface/img/ultraface.jpg and b/how-to/sample_app/docs/face_detection/ultraface/img/ultraface.jpg differ diff --git a/how-to/sample_app/docs/face_landmark_localization/deeppose/README.md b/how-to/sample_app/docs/face_landmark_localization/deeppose/README.md index fedc1dc..a369ae8 100644 --- a/how-to/sample_app/docs/face_landmark_localization/deeppose/README.md +++ b/how-to/sample_app/docs/face_landmark_localization/deeppose/README.md @@ -14,6 +14,140 @@ Dataset: [WFLW](https://wywu.github.io/projects/LAB/WFLW.html) Input size: 1x3x256x256 Output size: 1x98x2 + +### How to compile the model +To run the Face Landmark Localization, `face_deeppose_pt` Model Object is required for DRP-AI mode and `face_deeppose_cpu` is required for CPU mode. +#### Operating Environment +- mmcv-full v1.6.1 +- MMPose v0.28.1 + +#### 1. Save the AI model from MMPose +Follow the instuction below to prepare the DeepPose model. + +1. Prepare the save script as below in the MMPose clone directory. +```py +import numpy as np + +# PyTorch imports +import torch +import torchvision + +from mmpose.apis import init_pose_model + +def _convert_batchnorm(module): + """Convert the syncBNs into normal BN3ds.""" + module_output = module + if isinstance(module, torch.nn.SyncBatchNorm): + module_output = torch.nn.BatchNorm3d(module.num_features, module.eps, + module.momentum, module.affine, + module.track_running_stats) + if module.affine: + module_output.weight.data = module.weight.data.clone().detach() + module_output.bias.data = module.bias.data.clone().detach() + # keep requires_grad unchanged + module_output.weight.requires_grad = module.weight.requires_grad + module_output.bias.requires_grad = module.bias.requires_grad + module_output.running_mean = module.running_mean + module_output.running_var = module.running_var + module_output.num_batches_tracked = module.num_batches_tracked + for name, child in module.named_children(): + module_output.add_module(name, _convert_batchnorm(child)) + del module + return module_output + +config_file = 'configs/face/2d_kpt_sview_rgb_img/deeppose/wflw/res50_wflw_256x256.py' +checkpoint_file = 'deeppose_res50_wflw_256x256-92d0ba7f_20210303.pth' +model = init_pose_model(config_file, checkpoint_file, device='cpu') +model = _convert_batchnorm(model) +model = model.eval() + +# implement the forward method +if hasattr(model, 'forward_dummy'): + model.forward = model.forward_dummy + +# We grab the TorchScripted model via tracing +input_shape = [1, 3, 256, 256] +input_data = torch.randn(input_shape) +scripted_model = torch.jit.trace(model, input_data).eval() + +scripted_model.save('deeppose.pt')# Save +print("Torch model saved to ./deeppose.pt") +``` +2. Download the checkpoint file(`.pth`) from [the mmpose website](https://mmpose.readthedocs.io/en/latest/topics/face.html#deeppose-resnet-on-wflw) and place them in the same directory as the save script above. +3. Run the save script and confirm that `deeppose.pt` is generated. + +#### 2. Compile pytorch model for DRP-AI mode +Follow the instuction below to prepare the `face_deeppose_pt` Model Object. + +1. Set the environment variables, i.e. `$TVM_HOME` etc., according to [Installation](../../../../../setup/). +2. Place the `deeppose.pt` file in `$TVM_HOME/../tutorials`. +3. 
Change the `addr_map_start` setting in `compile_pytorch_model.py` provided in [Compile Tutorial](../../../../../tutorials) to `0x438E0000`. +4. Run the script with the command below. +```sh +$ python3 compile_pytorch_model.py \ +-s 1,3,256,256 \ +-o face_deeppose_pt \ +deeppose.pt +``` +5. Confirm that `face_deeppose_pt` directory is generated and it contains `deploy.json`, `deploy.so` and `deploy.params` files. +6. Before running the application, make sure to copy the `face_deeppose_pt` directory into the execution environment directory `exe` where the compiled sample application `sample_app_drpai_tvm_usbcam_http` is located. + + +#### 3. Compile pytorch model for CPU mode +Follow the instuction below to prepare the `face_deeppose_cpu` Model Object. + +1. Set the environment variables, i.e. `$TVM_HOME` etc., according to [Installation](../../../../../setup/). +2. Place the `deeppose.pt` file in `$TVM_HOME/../tutorials`. +3. Copy and rename the script `compile_cpu_only_onnx_model.py` provided in [Compile Tutorial](../../../../../tutorials) as shown below. +```sh +$ cp compile_cpu_only_onnx_model.py compile_cpu_only_pytorch_model.py +``` +4. Change the `compile_cpu_only_pytorch_model.py` script as shown below. +**Note that this script is only for DeepPose CPU mode and not guaranteed for other models.** + +Before +```py +#L23 +import onnx + +#L59~66 + # 2. Load onnx model and set input shape. + shape_dict = {input_name: input_shape} + # 2.1 Load onnx model + onnx_model = onnx.load_model(model_file) + # 2.2 Set input data information + + # 3.1 Run TVM Frontend + mod, params = tvm.relay.frontend.from_onnx(onnx_model, shape_dict) +``` +After +```py +#L23 +import torch + +#L59~68 + # 2. Load model and set input shape. + # 2.1 Load model + model = torch.jit.load(model_file) + model.eval() + # 2.2 Set input data information + input_name = "input0" + shape_list = [(input_name, opts["input_shape"])] + + # 3.1 Run TVM Frontend + mod, params = tvm.relay.frontend.from_pytorch(model, shape_list) +``` +5. Run the script with the command below. +```sh +$ python3 compile_cpu_only_pytorch_model.py \ +-s 1,3,256,256 \ +-o face_deeppose_cpu \ +deeppose.pt +``` +6. Confirm that `face_deeppose_cpu` directory is generated and it contains `deploy.json`, `deploy.so` and `deploy.params` files. +7. Before running the application, make sure to copy the `face_deeppose_cpu` directory into the execution environment directory `exe` where the compiled sample application `sample_app_drpai_tvm_usbcam_http` is located. + + ## Processing Details ### DRP-AI mode - Source Code: [tvm_drpai_deeppose.cpp](../../../src/recognize/deeppose/tvm_drpai_deeppose.cpp) @@ -34,7 +168,6 @@ Pre-processing is done by DRP-AI Pre-processing Runtime, which allows following #### Inference The Object files `face_deeppose_pt` is generated from MMPose DeepPose pre-trained model as described in [Model Information](#model-information). -Please refer to [Compile Tutorial](../../../../../tutorials) for more details on compiling model. #### Post-processing Post-processing is processed by CPU. @@ -57,7 +190,6 @@ Note that some of them are processed by C++ OpenCV. #### Inference The Object files `face_deeppose_cpu` provided in this directory is generated from MMPose DeepPose pre-trained model as described in [Model Information](#model-information) using CPU-only deploy mode. -Please refer to [Compile Tutorial CPU-only deploy mode](../../../../../tutorials/README.md#3-compile-using-cpu-only-deploy-mode) for more details on compiling model. 
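As an optional sanity check of the `face_deeppose_cpu` Model Object produced above, it can be loaded and run on dummy data with the TVM Python runtime. This is only a sketch, assuming a TVM Python environment on a machine that can actually load the compiled `deploy.so` (i.e. the target board) and the input name `input0` set in the modified compile script; the sample application itself uses the C++ runtime.

```py
import numpy as np
import tvm
from tvm.contrib import graph_executor

# Paths assume the face_deeppose_cpu Model Object generated in the steps above.
lib = tvm.runtime.load_module("face_deeppose_cpu/deploy.so")
graph_json = open("face_deeppose_cpu/deploy.json").read()
params_bytes = bytearray(open("face_deeppose_cpu/deploy.params", "rb").read())

module = graph_executor.create(graph_json, lib, tvm.cpu())
module.load_params(params_bytes)

# Dummy input with the 1x3x256x256 shape used throughout this page.
module.set_input("input0", np.random.rand(1, 3, 256, 256).astype("float32"))
module.run()
landmarks = module.get_output(0).numpy()
print(landmarks.shape)  # expected: (1, 98, 2)
```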
#### Post-processing Post-processing is processed by CPU. diff --git a/how-to/sample_app/docs/hand_landmark_localization/hrnetv2/README.md b/how-to/sample_app/docs/hand_landmark_localization/hrnetv2/README.md new file mode 100755 index 0000000..a9ada74 --- /dev/null +++ b/how-to/sample_app/docs/hand_landmark_localization/hrnetv2/README.md @@ -0,0 +1,125 @@ +# Hand Landmark Localization + +### Model: [HRNet(High-Resolution Network) v2](#model-information) +Sample application code are provided in **[here](../../../../sample_app)**. + +## Overview +This page explains about Hand Landmark Localization in the [sample application](../../../../sample_app) for DRP-AI TVM[^1]. + + + +## Model Information +- HRNetv2: [MMPose Topdown Heatmap + Hrnetv2 + Coco + Wholebody on Coco_wholebody_hand](https://mmpose.readthedocs.io/en/latest/papers/backbones.html#topdown-heatmap-hrnetv2-coco-wholebody-on-coco-wholebody-hand) +Dataset: [COCO](https://cocodataset.org/#home) +Input size: 1x3x256x256 +Output size: 1x21x64x64 + +### How to compile the model +#### Operating Environment +- mmcv-full v1.6.1 +- MMPose v0.28.1 + +#### Save the AI model from MMPose +1. Set the environment variables, i.e. `$TVM_HOME` etc., according to [Installation](../../../../../setup/). +2. Clone mmpose repository. + +```sh +git clone -b v0.28.1 https://github.com/open-mmlab/mmpose.git +``` + +3. Download the checkpoint file from [the mmpose website](https://mmpose.readthedocs.io/en/latest/papers/backbones.html#topdown-heatmap-hrnetv2-coco-wholebody-on-coco-wholebody-hand) and place it in the MMPose clone directory. + +4. Save the script below in this directory and run it. + +```py +import numpy as np + +# PyTorch imports +import torch +import torchvision + +from mmpose.apis import init_pose_model + +def _convert_batchnorm(module): + """Convert the syncBNs into normal BN3ds.""" + module_output = module + if isinstance(module, torch.nn.SyncBatchNorm): + module_output = torch.nn.BatchNorm3d(module.num_features, module.eps, + module.momentum, module.affine, + module.track_running_stats) + if module.affine: + module_output.weight.data = module.weight.data.clone().detach() + module_output.bias.data = module.bias.data.clone().detach() + # keep requires_grad unchanged + module_output.weight.requires_grad = module.weight.requires_grad + module_output.bias.requires_grad = module.bias.requires_grad + module_output.running_mean = module.running_mean + module_output.running_var = module.running_var + module_output.num_batches_tracked = module.num_batches_tracked + for name, child in module.named_children(): + module_output.add_module(name, _convert_batchnorm(child)) + del module + return module_output + +config_file = 'configs/hand/2d_kpt_sview_rgb_img/topdown_heatmap/coco_wholebody_hand/hrnetv2_w18_coco_wholebody_hand_256x256.py' +checkpoint_file = 'hrnetv2_w18_coco_wholebody_hand_256x256-1c028db7_20210908.pth' +model = init_pose_model(config_file, checkpoint_file, device='cpu') +model = _convert_batchnorm(model) +model = model.eval() + +# implement the forward method +if hasattr(model, 'forward_dummy'): + model.forward = model.forward_dummy + +# We grab the TorchScripted model via tracing +input_shape = [1, 3, 256, 256] +input_data = torch.randn(input_shape) +scripted_model = torch.jit.trace(model, input_data).eval() + +scripted_model.save('hrnetv2.pt') # Save +print("Torch model saved to ./hrnetv2.pt") +``` +5. 
Copy the `hrnetv2.pt` file to the `drp-ai_tvm/tutorials` directory and run the sample script [compile_pytorch_model.py](../../../../../tutorials/compile_pytorch_model.py) as shown below.
+Make sure to change the `addr_map_start` setting in `compile_pytorch_model.py` to `0x438E0000`.
+The output directory name `hrnetv2_pt` is used in the sample application and should not be changed.
+Confirm that three files, deploy.so, deploy.params, and deploy.json, have been created in the `hrnetv2_pt` directory.
+
+```sh
+# Run DRP-AI TVM[*1] Compiler script
+python3 compile_pytorch_model.py \
+    ./hrnetv2.pt \
+    -o hrnetv2_pt \
+    -s 1,3,256,256
+```
+
+6. Copy the `hrnetv2_pt` directory into the execution environment directory where the compiled sample application sample_app_drpai_tvm_usbcam_http is located.
+
+## Processing Details
+
+### DRP-AI mode
+- Source Code: [tvm_drpai_hrnet.cpp](../../../src/recognize/hrnet/tvm_drpai_hrnet.cpp).
+
+The source code is common to that of HRNet, except for the macro definitions. See [the header file](../../../src/recognize/hrnet/tvm_drpai_hrnet.h).
+The following are the processing details when the user selects "HRNetv2 (DRP-AI)".
+
+#### Pre-processing
+First, the crop process is executed by the CPU. Then pre-processing is done by DRP-AI Pre-processing Runtime, which allows the following pre-processing on DRP-AI.
+
+| Function | Details |
+|:---|:---|
+|crop |Crop image processed by CPU.|
+|conv_yuv2rgb |Convert YUY2 to RGB processed by DRP-AI Pre-processing Runtime.|
+|resize |Resize to 256x256 processed by DRP-AI Pre-processing Runtime.|
+|cast_to_fp16 | Cast data to FP16 for DRP-AI processed by DRP-AI Pre-processing Runtime.|
+|normalize | Normalize pixel values with mean and standard deviation processed by DRP-AI Pre-processing Runtime.|
+|transpose | Transpose HWC to CHW order processed by DRP-AI Pre-processing Runtime. |
+|cast_fp16_fp32 | Cast FP16 data to FP32 for DRP-AI TVM[^1] input processed by DRP-AI Pre-processing Runtime.|
+
+#### Inference
+Inference is performed using the `hrnetv2_pt` Model Object generated as described in [Model Information](#model-information).
+
+#### Post-processing
+Post-processing is processed by CPU.
+
+---
+[^1]: DRP-AI TVM is powered by EdgeCortix MERA™ Compiler Framework.
diff --git a/how-to/sample_app/docs/hand_landmark_localization/hrnetv2/img/hrnetv2.jpg b/how-to/sample_app/docs/hand_landmark_localization/hrnetv2/img/hrnetv2.jpg
new file mode 100755
index 0000000..c40e6c1
Binary files /dev/null and b/how-to/sample_app/docs/hand_landmark_localization/hrnetv2/img/hrnetv2.jpg differ
diff --git a/how-to/sample_app/docs/human_pose_estimation/hrnet/README.md b/how-to/sample_app/docs/human_pose_estimation/hrnet/README.md
index f8322f7..6da414a 100644
--- a/how-to/sample_app/docs/human_pose_estimation/hrnet/README.md
+++ b/how-to/sample_app/docs/human_pose_estimation/hrnet/README.md
@@ -13,9 +13,33 @@ This page explains about Human Pose Estimation in the [sample application](../..
 Dataset: [COCO](https://cocodataset.org/#home)
 Input size: 1x3x256x192
 Output size: 1x17x64x48
-
-### ONNX format model
- Here, we follow this [tutorial](https://mmpose.readthedocs.io/en/latest/tutorials/5_export_model.html#prerequisite) to convert HRNet model provided by MMPose into ONNX format. The MMpose version we checked is v0.26.0.
+
+### How to compile the model
+To run Human Pose Estimation, the `hrnet_onnx` Model Object is required.
+
+#### Operating Environment
+- mmcv-full v1.5.1
+- MMPose v0.26.0
+
+#### Compile onnx model
+Follow the instructions below to prepare the HRNet Model Object.
+ +1. Set the environment variables, i.e. `$TVM_HOME` etc., according to [Installation](../../../../../setup/). +2. Follow the [MMPose tutorial](https://mmpose.readthedocs.io/en/latest/tutorials/5_export_model.html#prerequisite) to convert HRNet model provided by MMPose into ONNX format. +3. Next, run the sample script [compile_onnx_model.py](../../../../../tutorials/compile_onnx_model.py) as shown below. +Make sure to change the `addr_map_start` setting in `compile_onnx_model.py` to `0x438E0000`. +The output directory name `hrnet_onnx` is used in the sample application and should not be changed. + +```sh +# Run DRP-AI TVM[*1] Compiler script +$ python3 compile_onnx_model.py \ +./hrnet.onnx \ +-o hrnet_onnx \ +-s 1,3,256,192 \ +-i input.1 +``` +4. Confirm that three files, deploy.so, deploy.params, and deploy.json, have been created in the `hrnet_onnx` directory. +5. Copy the `hrnet_onnx` directory into the execution environment directory where the compiled sample application sample_app_drpai_tvm_usbcam_http is located. ## Processing Details ### DRP-AI mode @@ -38,7 +62,6 @@ First, crop process is executed by CPU. Then pre-processing is done by DRP-AI Pr #### Inference The Object files `hrnet_onnx` are generated from HRNet pre-trained model provided by MMPose framework as described in [Model Information](#model-information). -Please refer to [Compile Tutorial](../../../../../tutorials) for more details on compiling model. #### Post-processing Post-processing is processed by CPU. diff --git a/how-to/sample_app/docs/object_detection/yolo/README.md b/how-to/sample_app/docs/object_detection/yolo/README.md index 79136fd..1e5c471 100644 --- a/how-to/sample_app/docs/object_detection/yolo/README.md +++ b/how-to/sample_app/docs/object_detection/yolo/README.md @@ -21,13 +21,18 @@ Dataset: [Pascal VOC](http://host.robots.ox.ac.uk/pascal/VOC/index.html) Input size: 1x3x416x416 Output size: 1x13x13x125 -### ONNX format model + +### How to compile the model +To run the Object Detection, `yolov3_onnx`, `yolov2_onnx`, `tinyyolov3_onnx` and `tinyyolov2_onnx` Model Object are required. +Follow the instuction below to prepare the Model Object. + +#### 1. Save ONNX format model Here, we use the ONNX format model converted from Darknet weight file by using the scripts provided in this directory. -#### Requirement +##### 1.1 Requirement To run the script, PyTorch 1.8.0 must be installed. -#### File Configuration +##### 1.2 File Configuration | File | Details | |:---|:---| |darknet_cfg.py |Darknet cfg file parser.| @@ -36,16 +41,16 @@ To run the script, PyTorch 1.8.0 must be installed. |yolo.py | Conversion configuration file.| |yolo.ini | Conversion configuration file parser. | -#### ONNX conversion -Download the YOLO *.cfg file and *.weights file from [Darknet](https://pjreddie.com/darknet/yolo/) and place them in the same directory of the conversion scripts. -Run the following commands to convert YOLOv3 model. +##### 1.3 ONNX conversion +1. Download the YOLO *.cfg file and *.weights file from [Darknet](https://pjreddie.com/darknet/yolo/) and place them in the same directory of the conversion scripts. +2. Run the following commands to convert YOLOv3 model. ```sh $ python3 convert_to_pytorch.py yolov3 # --> yolov3.pth will be generated. $ python3 convert_to_onnx.py yolov3 # --> d-yolov3.onnx will be generated. ``` -If you would like to convert other models, specify the parameter instead of `yolov3` according to the following table. +3. 
If you would like to convert other models, specify the parameter instead of `yolov3` according to the following table. | Parameter | Model | |:---|:---| @@ -54,6 +59,29 @@ If you would like to convert other models, specify the parameter instead of `yol |tinyyolov3| Tiny YOLOv3| |tinyyolov2 | Tiny YOLOv2 | +#### 2. Compile onnx models +1. Set the environment variables, i.e. `$TVM_HOME` etc., according to [Installation](../../../../../setup/). +2. Place the onnx file in `$TVM_HOME/../tutorials`. +3. Change the `addr_map_start` setting in `compile_onnx_model.py` provided in [Compile Tutorial](../../../../../tutorials) to `0x438E0000`. +4. Run the script with the command below to compile YOLOv3 model. +```sh +$ python3 compile_onnx_model.py \ +-i input1 \ +-s 1,3,416,416 \ +-o yolov3_onnx \ +d-yolov3.onnx +``` +5. Confirm that `yolov3_onnx` directory is generated and it contains `deploy.json`, `deploy.so` and `deploy.params` files. +6. Repeat the step for other models to generate following Model Object. + +| Model | Model Object Name | +|:---|:---| +|d-yolov2.onnx |yolov2_onnx| +|d-tinyyolov3.onnx |tinyyolov3_onnx| +|d-tinyyolov2.onnx| tinyyolov2_onnx| + +7. Before running the application, make sure to copy the `yolov3_onnx`, `yolov2_onnx`, `tinyyolov3_onnx`, and `tinyyolov2_onnx` directories into the execution environment directory `exe` where the compiled sample application `sample_app_drpai_tvm_usbcam_http` is located. + ## Processing Details ### DRP-AI mode @@ -75,7 +103,6 @@ Pre-processing is done by DRP-AI Pre-processing Runtime, which allows following #### Inference The Object files `yolov3_onnx`, `yolov2_onnx`, `tinyyolov3_onnx` and `tinyyolov2_onnx` are generated from Darknet YOLO pre-trained ONNX model as described in [Model Information](#model-information). -Please refer to [Compile Tutorial](../../../../../tutorials) for more details on compiling model. #### Post-processing Post-processing is processed by CPU. diff --git a/how-to/sample_app/etc/Websocket_Client/index.html b/how-to/sample_app/etc/Websocket_Client/index.html index 5ff6325..61d5ef6 100644 --- a/how-to/sample_app/etc/Websocket_Client/index.html +++ b/how-to/sample_app/etc/Websocket_Client/index.html @@ -71,6 +71,9 @@

+ + + @@ -88,7 +91,17 @@

- + diff --git a/how-to/sample_app/etc/Websocket_Client/js/websocket_demo.js b/how-to/sample_app/etc/Websocket_Client/js/websocket_demo.js index 2bdc33a..fc8f8dd 100644 --- a/how-to/sample_app/etc/Websocket_Client/js/websocket_demo.js +++ b/how-to/sample_app/etc/Websocket_Client/js/websocket_demo.js @@ -6,6 +6,18 @@ * see https://opensource.org/licenses/MIT */ +const ID_Unknown = 0; +const ID_DEEPPOSE = 1; +const ID_YOLOV3 = 2; +const ID_TINYYOLOV3 = 3; +const ID_YOLOV2 = 4; +const ID_TINYYOLOV2 = 5; +const ID_ULTRAFACE = 6; +const ID_HRNET = 7; +const ID_HRNETV2 = 8; +const ID_GOOGLENET = 9; +const ID_EMOTIONFP = 10; + // let socket = new WebSocket('ws://localhost:3000/ws/', 'graph-update'); let socket = new WebSocket('ws://192.168.1.11:3000/ws/'); let predCanvas = document.getElementById('pred_canvas'); @@ -28,6 +40,8 @@ predCtx.fillStyle = 'darkgray'; predCtx.fillRect(0, 0, 640, 480); predCtx.font = '12pt Arial'; +let model_id = ID_Unknown; + let startTime = null; let postProcessTime = null; @@ -100,6 +114,8 @@ function inputChange(event) { caution.innerHTML= "Please get close to the camera at around 20cm."; caution.style.color="#FFD700"; aiDescription.innerHTML = "If your face is too far from the camera, the localization may fail."; + aiDescription.style.color =""; + model_id = ID_DEEPPOSE; } else if (event.currentTarget.value == "TVM_DRPAI_YOLOV3") { @@ -107,6 +123,8 @@ function inputChange(event) { caution.innerHTML= "Detects 80 class of objects."; caution.style.color=""; aiDescription.innerHTML = "
"; + aiDescription.style.color =""; + model_id = ID_YOLOV3; } else if (event.currentTarget.value == "TVM_DRPAI_TINYYOLOV3") { @@ -114,6 +132,8 @@ function inputChange(event) { caution.innerHTML= "Detect 80 class of objects."; caution.style.color=""; aiDescription.innerHTML = "
"; + aiDescription.style.color =""; + model_id = ID_TINYYOLOV3; } else if (event.currentTarget.value == "TVM_DRPAI_YOLOV2") { @@ -121,6 +141,8 @@ function inputChange(event) { caution.innerHTML= "Detect 20 class of objects."; caution.style.color=""; aiDescription.innerHTML = "
"; + aiDescription.style.color =""; + model_id = ID_YOLOV2; } else if (event.currentTarget.value == "TVM_DRPAI_TINYYOLOV2") { @@ -128,6 +150,8 @@ function inputChange(event) { caution.innerHTML= "Detect 20 class of objects."; caution.style.color=""; aiDescription.innerHTML = "
"; + aiDescription.style.color =""; + model_id = ID_TINYYOLOV2; } else if (event.currentTarget.value == "TVM_DRPAI_ULTRAFACE") { @@ -135,6 +159,8 @@ function inputChange(event) { caution.innerHTML= "Detect Human Faces."; caution.style.color=""; aiDescription.innerHTML = "
"; + aiDescription.style.color =""; + model_id = ID_ULTRAFACE; } else if (event.currentTarget.value == "TVM_DRPAI_HRNET") { @@ -142,6 +168,35 @@ function inputChange(event) { caution.innerHTML= "Please adjust the camera so that the whole body appears within the box."; caution.style.color="#FFD700"; aiDescription.innerHTML = "Single person only. It does not support more than one person.
"; + aiDescription.style.color =""; + model_id = ID_HRNET; + } + else if (event.currentTarget.value == "TVM_DRPAI_HRNETV2") + { + aiName.innerHTML = "HRNetV2: Hand Landmark Localization"; + caution.innerHTML= "Please adjust the camera so that the hand above the wrist is inside the box."; + caution.style.color="#FFD700"; + aiDescription.innerHTML = "Single hand only. It does not support more than one hand.
"; + aiDescription.style.color =""; + model_id = ID_HRNETV2; + } + else if (event.currentTarget.value == "TVM_DRPAI_GOOGLENET") + { + aiName.innerHTML = "GoogleNet: Classification"; + caution.innerHTML= "Classify object in the frame."; + caution.style.color=""; + aiDescription.innerHTML = "
"; + aiDescription.style.color =""; + model_id = ID_HRNETV2; + } + else if (event.currentTarget.value == "TVM_DRPAI_EMOTIONFP") + { + aiName.innerHTML = "Emotion FERPlus: Emotion Recognition"; + caution.innerHTML= "Classify the human face expression. For face detection, UltraFace is used."; + caution.style.color=""; + aiDescription.innerHTML ="Pre-processing time includes entire Face Detection and Emotion FERPlus pre-processing. All processing time is cummurative value for detected boxes.
"; + aiDescription.style.color ="#FFD700"; + model_id = ID_EMOTIONFP; } } @@ -206,6 +261,26 @@ function measureProcessingTime(ctx, nowTime) { // console.log('----------------------------------------'); } +/* application message */ +let disp_application_message = null; + +let is_dialog_shown= false; +$('#dialog').on('show.bs.modal', function (e) { + is_dialog_shown=true; + console.log('shown'); + //document.getElementById('dialog-text').innerHTML = disp_application_message.join("
"); + if(disp_application_message!=null) + { + document.getElementById('dialog-text').innerText = disp_application_message; + } +}); +$('#dialog').on('hidden.bs.modal', function (e) { + is_dialog_shown=false; + console.log('hidden'); + //disp_application_message.splice(0); + disp_application_message=null; +}); + $(() => { socket.onmessage = function (event) { // Calculate process time @@ -261,7 +336,12 @@ $(() => { predCtx.strokeStyle = 'blue'; predCtx.fillStyle = 'blue'; defaultCtx.fillStyle = 'blue'; - + if ((model_id == ID_ULTRAFACE) || (model_id == ID_EMOTIONFP)) + { + predCtx.strokeStyle = 'yellow'; + predCtx.fillStyle = 'yellow'; + defaultCtx.fillStyle = 'yellow'; + } predData = datas.Value.predict; len = predData.length; @@ -282,7 +362,7 @@ $(() => { h[i] = Number(predStr.H); if (i !== 0) { - if (cls[i] == "") + if (model_id == ID_ULTRAFACE ) { predDatas[i] = '\n' + pred[i] + ' %\t' + 'X: ' + (x[i] + " ").slice(0, 4)+ @@ -296,7 +376,7 @@ $(() => { } } else { - if (cls[i] == "") + if (model_id == ID_ULTRAFACE ) { predDatas[i] = pred[i] + ' %\t' + 'X: ' + (x[i] + " ").slice(0, 4)+ @@ -333,7 +413,7 @@ $(() => { measureProcessingTime(predCtx, nowTime); } - // Calculate & Display DRP process time + // Calculate & Display process time drpData = 'Inference time:' + '\t' + Number.parseFloat(datas.Value.drp_time).toFixed(2) + ' ms\n' + 'Pre-process time:' + '\t' + Number.parseFloat(datas.Value.pre_time).toFixed(2) + ' ms\n' + 'Post-process time:' + '\t' + Number.parseFloat(datas.Value.post_time).toFixed(2) + ' ms\n'; @@ -344,9 +424,9 @@ $(() => { // HRNet else if (datas.command_name === 'pose_detection') { predCtx.linewidth = 8; - predCtx.strokeStyle = "#FFF450";//'yellow'; + predCtx.strokeStyle = "yellow"; predCtx.fillStyle = 'yellow'; - defaultCtx.fillStyle = "#FFF450";//'yellow'; + defaultCtx.fillStyle = "yellow"; predData = datas.Value.predict; len = predData.length; @@ -370,7 +450,7 @@ $(() => { predCtx.drawImage(webcam, 0, 0, predCanvas.width, predCanvas.height); - if(len > 17) { + if (model_id == ID_DEEPPOSE) { // Draw Inference Crop Range (VGA: X = 185/ Y = 0/ Width = 270/ Height = 480) predCtx.strokeRect(80 * ratio_w, 2 * ratio_h, 480 * ratio_w, 476 * ratio_h); predCtx.fillText("Please fit your face into this yellow box.", (80 + 5) * ratio_w, (datas.Value.img_org_h - 5) * ratio_h); @@ -384,7 +464,52 @@ $(() => { drawKeyPoint(predCtx, predData[i], ratio_w, ratio_h); } } - else { + else if (model_id == ID_HRNETV2) { + // Draw Inference Crop Range (VGA: X = 80/ Y = 0/ Width = 480/ Height = 480) + predCtx.strokeRect(80 * ratio_w, 0 * ratio_h, 480 * ratio_w, 480 * ratio_h); + predCtx.fillText("Please align this position", (80 + 5) * ratio_w, (datas.Value.img_org_h - 5) * ratio_h); + + // Draw hand + predCtx.beginPath(); + predCtx.strokeStyle = 'aqua'; + drawLine(predCtx, predData, 0, 1, ratio_w, ratio_h); + drawLine(predCtx, predData, 1, 2, ratio_w, ratio_h); + drawLine(predCtx, predData, 2, 3, ratio_w, ratio_h); + drawLine(predCtx, predData, 3, 4, ratio_w, ratio_h); + predCtx.beginPath(); + predCtx.strokeStyle = 'fuchsia'; + drawLine(predCtx, predData, 0, 5, ratio_w, ratio_h); + drawLine(predCtx, predData, 5, 6, ratio_w, ratio_h); + drawLine(predCtx, predData, 6, 7, ratio_w, ratio_h); + drawLine(predCtx, predData, 7, 8, ratio_w, ratio_h); + predCtx.beginPath(); + predCtx.strokeStyle = 'yellow'; + drawLine(predCtx, predData, 0, 9, ratio_w, ratio_h); + drawLine(predCtx, predData, 9, 10, ratio_w, ratio_h); + drawLine(predCtx, predData, 10, 11, ratio_w, ratio_h); + drawLine(predCtx, 
predData, 11, 12, ratio_w, ratio_h); + predCtx.beginPath(); + predCtx.strokeStyle = 'blue'; + drawLine(predCtx, predData, 0, 13, ratio_w, ratio_h); + drawLine(predCtx, predData, 13, 14, ratio_w, ratio_h); + drawLine(predCtx, predData, 14, 15, ratio_w, ratio_h); + drawLine(predCtx, predData, 15, 16, ratio_w, ratio_h); + predCtx.beginPath(); + predCtx.strokeStyle = 'lime'; + drawLine(predCtx, predData, 0, 17, ratio_w, ratio_h); + drawLine(predCtx, predData, 17, 18, ratio_w, ratio_h); + drawLine(predCtx, predData, 18, 19, ratio_w, ratio_h); + drawLine(predCtx, predData, 19, 20, ratio_w, ratio_h); + + console.log('ratio_w ' + ratio_w); + console.log('ratio_h ' + ratio_h); + + // Draw keypoint and display inference info + for (i = 0; i < len; i++) { + drawKeyPoint(predCtx, predData[i], ratio_w, ratio_h); + } + } + else if (model_id == ID_HRNET) { // Draw Inference Crop Range (VGA: X = 184/ Y = 0/ Width = 270/ Height = 480) predCtx.strokeRect(184 * ratio_w, 0 * ratio_h, 270 * ratio_w, 480 * ratio_h); predCtx.fillText("Please stand here", (185 + 5) * ratio_w, (datas.Value.img_org_h - 5) * ratio_h); @@ -424,7 +549,7 @@ $(() => { measureProcessingTime(predCtx, nowTime); } - // Calculate & Display DRP process time + // Calculate & Display process time drpData = 'Inference time:' + '\t' + Number.parseFloat(datas.Value.drp_time).toFixed(2) + ' ms\n' + 'Pre-process time:' + '\t' + Number.parseFloat(datas.Value.pre_time).toFixed(2) + ' ms\n' + 'Post-process time:' + '\t' + Number.parseFloat(datas.Value.post_time).toFixed(2) + ' ms\n'; @@ -433,8 +558,8 @@ $(() => { predWindowData.value = predDatas; drpWindowData.value = drpData; } - // ResNet50 - else if (datas.command_name === 'classfication_detection') { + // GoogleNet + else if (datas.command_name === 'classification') { predCtx.linewidth = 8; predCtx.strokeStyle = 'red'; predCtx.fillStyle = 'red'; @@ -452,10 +577,10 @@ $(() => { pred[i] = Number.parseFloat(predStr.pred).toFixed(2); if (i !== 0) { - predDatas[i] = '\n' + cls[i] + ' :\t' + pred[i] + ' %'; + predDatas[i] = '\n' + pred[i] + '% :\t' + cls[i] + ' '; } else { - predDatas[i] = cls[i] + ' :\t' + pred[i] + ' %'; + predDatas[i] = pred[i] + '% :\t' + cls[i]; } } @@ -473,7 +598,7 @@ $(() => { measureProcessingTime(predCtx, nowTime); } - // Calculate & Display DRP process time + // Calculate & Display process time drpData = 'Inference time:' + '\t' + Number.parseFloat(datas.Value.drp_time).toFixed(2) + ' ms\n' + 'Pre-process time:' + '\t' + Number.parseFloat(datas.Value.pre_time).toFixed(2) + ' ms\n' + 'Post-process time:' + '\t' + Number.parseFloat(datas.Value.post_time).toFixed(2) + ' ms\n'; @@ -481,5 +606,19 @@ $(() => { predWindowData.value = predDatas; drpWindowData.value = drpData; } + else if(datas.command_name === 'app_message') + { + console.debug(datas.Value.message); + if(disp_application_message ==null) + { + disp_application_message = datas.Value.message; + } + if(is_dialog_shown ==false) + { + $('#dialog').modal({ + backdrop: 'static' + }); + } + } } }) diff --git a/how-to/sample_app/exe/face_deeppose_cpu/deploy.json b/how-to/sample_app/exe/face_deeppose_cpu/deploy.json deleted file mode 100644 index 05f3c57..0000000 --- a/how-to/sample_app/exe/face_deeppose_cpu/deploy.json +++ /dev/null @@ -1,3180 +0,0 @@ -{ - "nodes": [ - { - "op": "null", - "name": "input0", - "inputs": [] - }, - { - "op": "null", - "name": "p0", - "inputs": [] - }, - { - "op": "null", - "name": "p1", - "inputs": [] - }, - { - "op": "tvm_op", - "name": "tvmgen_default_fused_nn_conv2d_add_nn_relu", - "attrs": { - 
"num_outputs": "1", - "num_inputs": "3", - "flatten_data": "0", - "func_name": "tvmgen_default_fused_nn_conv2d_add_nn_relu", - "out_layout": "", - "kernel_layout": "OIHW", - "data_layout": "NCHW", - "hash": "698decc84a51cfe3" - }, - "inputs": [ - [ - 0, - 0, - 0 - ], - [ - 1, - 0, - 0 - ], - [ - 2, - 0, - 0 - ] - ] - }, - { - "op": "tvm_op", - "name": "tvmgen_default_fused_nn_max_pool2d", - "attrs": { - "num_outputs": "1", - "num_inputs": "1", - "flatten_data": "0", - "func_name": "tvmgen_default_fused_nn_max_pool2d", - "layout": "NCHW", - "hash": "49cf2f1568029ed3" - }, - "inputs": [ - [ - 3, - 0, - 0 - ] - ] - }, - { - "op": "null", - "name": "p2", - "inputs": [] - }, - { - "op": "null", - "name": "p3", - "inputs": [] - }, - { - "op": "tvm_op", - "name": "tvmgen_default_fused_nn_conv2d_add_nn_relu_1", - "attrs": { - "num_outputs": "1", - "num_inputs": "3", - "flatten_data": "0", - "func_name": "tvmgen_default_fused_nn_conv2d_add_nn_relu_1", - "out_layout": "", - "kernel_layout": "OIHW", - "data_layout": "NCHW", - "hash": "adce3190c45bc83c" - }, - "inputs": [ - [ - 4, - 0, - 0 - ], - [ - 5, - 0, - 0 - ], - [ - 6, - 0, - 0 - ] - ] - }, - { - "op": "null", - "name": "p4", - "inputs": [] - }, - { - "op": "null", - "name": "p5", - "inputs": [] - }, - { - "op": "tvm_op", - "name": "tvmgen_default_fused_nn_conv2d_add_nn_relu_2", - "attrs": { - "num_outputs": "1", - "num_inputs": "3", - "flatten_data": "0", - "func_name": "tvmgen_default_fused_nn_conv2d_add_nn_relu_2", - "out_layout": "", - "kernel_layout": "OIHW", - "data_layout": "NCHW", - "hash": "3cfa8fd901b2f1d7" - }, - "inputs": [ - [ - 7, - 0, - 0 - ], - [ - 8, - 0, - 0 - ], - [ - 9, - 0, - 0 - ] - ] - }, - { - "op": "null", - "name": "p6", - "inputs": [] - }, - { - "op": "null", - "name": "p7", - "inputs": [] - }, - { - "op": "null", - "name": "p8", - "inputs": [] - }, - { - "op": "null", - "name": "p9", - "inputs": [] - }, - { - "op": "tvm_op", - "name": "tvmgen_default_fused_nn_conv2d_add", - "attrs": { - "num_outputs": "1", - "num_inputs": "3", - "flatten_data": "0", - "func_name": "tvmgen_default_fused_nn_conv2d_add", - "out_layout": "", - "kernel_layout": "OIHW", - "data_layout": "NCHW", - "hash": "f26ce6547ec5d262" - }, - "inputs": [ - [ - 4, - 0, - 0 - ], - [ - 13, - 0, - 0 - ], - [ - 14, - 0, - 0 - ] - ] - }, - { - "op": "tvm_op", - "name": "tvmgen_default_fused_nn_conv2d_add_add_nn_relu", - "attrs": { - "num_outputs": "1", - "num_inputs": "4", - "flatten_data": "0", - "func_name": "tvmgen_default_fused_nn_conv2d_add_add_nn_relu", - "out_layout": "", - "kernel_layout": "OIHW", - "data_layout": "NCHW", - "hash": "21e23c0405085ac0" - }, - "inputs": [ - [ - 10, - 0, - 0 - ], - [ - 11, - 0, - 0 - ], - [ - 12, - 0, - 0 - ], - [ - 15, - 0, - 0 - ] - ] - }, - { - "op": "null", - "name": "p10", - "inputs": [] - }, - { - "op": "null", - "name": "p11", - "inputs": [] - }, - { - "op": "tvm_op", - "name": "tvmgen_default_fused_nn_conv2d_add_nn_relu_3", - "attrs": { - "num_outputs": "1", - "num_inputs": "3", - "flatten_data": "0", - "func_name": "tvmgen_default_fused_nn_conv2d_add_nn_relu_3", - "out_layout": "", - "kernel_layout": "OIHW", - "data_layout": "NCHW", - "hash": "40ee35ec0f6ab5ba" - }, - "inputs": [ - [ - 16, - 0, - 0 - ], - [ - 17, - 0, - 0 - ], - [ - 18, - 0, - 0 - ] - ] - }, - { - "op": "null", - "name": "p12", - "inputs": [] - }, - { - "op": "null", - "name": "p13", - "inputs": [] - }, - { - "op": "tvm_op", - "name": "tvmgen_default_fused_nn_conv2d_add_nn_relu_21", - "attrs": { - "num_outputs": "1", - "num_inputs": "3", - 
"flatten_data": "0", - "func_name": "tvmgen_default_fused_nn_conv2d_add_nn_relu_2", - "out_layout": "", - "kernel_layout": "OIHW", - "data_layout": "NCHW", - "hash": "3cfa8fd901b2f1d7" - }, - "inputs": [ - [ - 19, - 0, - 0 - ], - [ - 20, - 0, - 0 - ], - [ - 21, - 0, - 0 - ] - ] - }, - { - "op": "null", - "name": "p14", - "inputs": [] - }, - { - "op": "null", - "name": "p15", - "inputs": [] - }, - { - "op": "tvm_op", - "name": "tvmgen_default_fused_nn_conv2d_add_add_nn_relu1", - "attrs": { - "num_outputs": "1", - "num_inputs": "4", - "flatten_data": "0", - "func_name": "tvmgen_default_fused_nn_conv2d_add_add_nn_relu", - "out_layout": "", - "kernel_layout": "OIHW", - "data_layout": "NCHW", - "hash": "21e23c0405085ac0" - }, - "inputs": [ - [ - 22, - 0, - 0 - ], - [ - 23, - 0, - 0 - ], - [ - 24, - 0, - 0 - ], - [ - 16, - 0, - 0 - ] - ] - }, - { - "op": "null", - "name": "p16", - "inputs": [] - }, - { - "op": "null", - "name": "p17", - "inputs": [] - }, - { - "op": "tvm_op", - "name": "tvmgen_default_fused_nn_conv2d_add_nn_relu_31", - "attrs": { - "num_outputs": "1", - "num_inputs": "3", - "flatten_data": "0", - "func_name": "tvmgen_default_fused_nn_conv2d_add_nn_relu_3", - "out_layout": "", - "kernel_layout": "OIHW", - "data_layout": "NCHW", - "hash": "40ee35ec0f6ab5ba" - }, - "inputs": [ - [ - 25, - 0, - 0 - ], - [ - 26, - 0, - 0 - ], - [ - 27, - 0, - 0 - ] - ] - }, - { - "op": "null", - "name": "p18", - "inputs": [] - }, - { - "op": "null", - "name": "p19", - "inputs": [] - }, - { - "op": "tvm_op", - "name": "tvmgen_default_fused_nn_conv2d_add_nn_relu_22", - "attrs": { - "num_outputs": "1", - "num_inputs": "3", - "flatten_data": "0", - "func_name": "tvmgen_default_fused_nn_conv2d_add_nn_relu_2", - "out_layout": "", - "kernel_layout": "OIHW", - "data_layout": "NCHW", - "hash": "3cfa8fd901b2f1d7" - }, - "inputs": [ - [ - 28, - 0, - 0 - ], - [ - 29, - 0, - 0 - ], - [ - 30, - 0, - 0 - ] - ] - }, - { - "op": "null", - "name": "p20", - "inputs": [] - }, - { - "op": "null", - "name": "p21", - "inputs": [] - }, - { - "op": "tvm_op", - "name": "tvmgen_default_fused_nn_conv2d_add_add_nn_relu2", - "attrs": { - "num_outputs": "1", - "num_inputs": "4", - "flatten_data": "0", - "func_name": "tvmgen_default_fused_nn_conv2d_add_add_nn_relu", - "out_layout": "", - "kernel_layout": "OIHW", - "data_layout": "NCHW", - "hash": "21e23c0405085ac0" - }, - "inputs": [ - [ - 31, - 0, - 0 - ], - [ - 32, - 0, - 0 - ], - [ - 33, - 0, - 0 - ], - [ - 25, - 0, - 0 - ] - ] - }, - { - "op": "null", - "name": "p22", - "inputs": [] - }, - { - "op": "null", - "name": "p23", - "inputs": [] - }, - { - "op": "tvm_op", - "name": "tvmgen_default_fused_nn_conv2d_add_nn_relu_4", - "attrs": { - "num_outputs": "1", - "num_inputs": "3", - "flatten_data": "0", - "func_name": "tvmgen_default_fused_nn_conv2d_add_nn_relu_4", - "out_layout": "", - "kernel_layout": "OIHW", - "data_layout": "NCHW", - "hash": "d079598f1d7b394a" - }, - "inputs": [ - [ - 34, - 0, - 0 - ], - [ - 35, - 0, - 0 - ], - [ - 36, - 0, - 0 - ] - ] - }, - { - "op": "null", - "name": "p24", - "inputs": [] - }, - { - "op": "null", - "name": "p25", - "inputs": [] - }, - { - "op": "tvm_op", - "name": "tvmgen_default_fused_nn_conv2d_add_nn_relu_5", - "attrs": { - "num_outputs": "1", - "num_inputs": "3", - "flatten_data": "0", - "func_name": "tvmgen_default_fused_nn_conv2d_add_nn_relu_5", - "out_layout": "", - "kernel_layout": "OIHW", - "data_layout": "NCHW", - "hash": "1031d16c1adfb7b0" - }, - "inputs": [ - [ - 37, - 0, - 0 - ], - [ - 38, - 0, - 0 - ], - [ - 39, - 0, - 0 - ] - ] 
- }, - { - "op": "null", - "name": "p26", - "inputs": [] - }, - { - "op": "null", - "name": "p27", - "inputs": [] - }, - { - "op": "null", - "name": "p28", - "inputs": [] - }, - { - "op": "null", - "name": "p29", - "inputs": [] - }, - { - "op": "tvm_op", - "name": "tvmgen_default_fused_nn_conv2d_add_1", - "attrs": { - "num_outputs": "1", - "num_inputs": "3", - "flatten_data": "0", - "func_name": "tvmgen_default_fused_nn_conv2d_add_1", - "out_layout": "", - "kernel_layout": "OIHW", - "data_layout": "NCHW", - "hash": "19b52525823c8bda" - }, - "inputs": [ - [ - 34, - 0, - 0 - ], - [ - 43, - 0, - 0 - ], - [ - 44, - 0, - 0 - ] - ] - }, - { - "op": "tvm_op", - "name": "tvmgen_default_fused_nn_conv2d_add_add_nn_relu_1", - "attrs": { - "num_outputs": "1", - "num_inputs": "4", - "flatten_data": "0", - "func_name": "tvmgen_default_fused_nn_conv2d_add_add_nn_relu_1", - "out_layout": "", - "kernel_layout": "OIHW", - "data_layout": "NCHW", - "hash": "0969ece4432b811e" - }, - "inputs": [ - [ - 40, - 0, - 0 - ], - [ - 41, - 0, - 0 - ], - [ - 42, - 0, - 0 - ], - [ - 45, - 0, - 0 - ] - ] - }, - { - "op": "null", - "name": "p30", - "inputs": [] - }, - { - "op": "null", - "name": "p31", - "inputs": [] - }, - { - "op": "tvm_op", - "name": "tvmgen_default_fused_nn_conv2d_add_nn_relu_6", - "attrs": { - "num_outputs": "1", - "num_inputs": "3", - "flatten_data": "0", - "func_name": "tvmgen_default_fused_nn_conv2d_add_nn_relu_6", - "out_layout": "", - "kernel_layout": "OIHW", - "data_layout": "NCHW", - "hash": "d73f4a6a3dee112a" - }, - "inputs": [ - [ - 46, - 0, - 0 - ], - [ - 47, - 0, - 0 - ], - [ - 48, - 0, - 0 - ] - ] - }, - { - "op": "null", - "name": "p32", - "inputs": [] - }, - { - "op": "null", - "name": "p33", - "inputs": [] - }, - { - "op": "tvm_op", - "name": "tvmgen_default_fused_nn_conv2d_add_nn_relu_7", - "attrs": { - "num_outputs": "1", - "num_inputs": "3", - "flatten_data": "0", - "func_name": "tvmgen_default_fused_nn_conv2d_add_nn_relu_7", - "out_layout": "", - "kernel_layout": "OIHW", - "data_layout": "NCHW", - "hash": "de429062ba3cf4d6" - }, - "inputs": [ - [ - 49, - 0, - 0 - ], - [ - 50, - 0, - 0 - ], - [ - 51, - 0, - 0 - ] - ] - }, - { - "op": "null", - "name": "p34", - "inputs": [] - }, - { - "op": "null", - "name": "p35", - "inputs": [] - }, - { - "op": "tvm_op", - "name": "tvmgen_default_fused_nn_conv2d_add_add_nn_relu_11", - "attrs": { - "num_outputs": "1", - "num_inputs": "4", - "flatten_data": "0", - "func_name": "tvmgen_default_fused_nn_conv2d_add_add_nn_relu_1", - "out_layout": "", - "kernel_layout": "OIHW", - "data_layout": "NCHW", - "hash": "0969ece4432b811e" - }, - "inputs": [ - [ - 52, - 0, - 0 - ], - [ - 53, - 0, - 0 - ], - [ - 54, - 0, - 0 - ], - [ - 46, - 0, - 0 - ] - ] - }, - { - "op": "null", - "name": "p36", - "inputs": [] - }, - { - "op": "null", - "name": "p37", - "inputs": [] - }, - { - "op": "tvm_op", - "name": "tvmgen_default_fused_nn_conv2d_add_nn_relu_61", - "attrs": { - "num_outputs": "1", - "num_inputs": "3", - "flatten_data": "0", - "func_name": "tvmgen_default_fused_nn_conv2d_add_nn_relu_6", - "out_layout": "", - "kernel_layout": "OIHW", - "data_layout": "NCHW", - "hash": "d73f4a6a3dee112a" - }, - "inputs": [ - [ - 55, - 0, - 0 - ], - [ - 56, - 0, - 0 - ], - [ - 57, - 0, - 0 - ] - ] - }, - { - "op": "null", - "name": "p38", - "inputs": [] - }, - { - "op": "null", - "name": "p39", - "inputs": [] - }, - { - "op": "tvm_op", - "name": "tvmgen_default_fused_nn_conv2d_add_nn_relu_71", - "attrs": { - "num_outputs": "1", - "num_inputs": "3", - "flatten_data": "0", - 
"func_name": "tvmgen_default_fused_nn_conv2d_add_nn_relu_7", - "out_layout": "", - "kernel_layout": "OIHW", - "data_layout": "NCHW", - "hash": "de429062ba3cf4d6" - }, - "inputs": [ - [ - 58, - 0, - 0 - ], - [ - 59, - 0, - 0 - ], - [ - 60, - 0, - 0 - ] - ] - }, - { - "op": "null", - "name": "p40", - "inputs": [] - }, - { - "op": "null", - "name": "p41", - "inputs": [] - }, - { - "op": "tvm_op", - "name": "tvmgen_default_fused_nn_conv2d_add_add_nn_relu_12", - "attrs": { - "num_outputs": "1", - "num_inputs": "4", - "flatten_data": "0", - "func_name": "tvmgen_default_fused_nn_conv2d_add_add_nn_relu_1", - "out_layout": "", - "kernel_layout": "OIHW", - "data_layout": "NCHW", - "hash": "0969ece4432b811e" - }, - "inputs": [ - [ - 61, - 0, - 0 - ], - [ - 62, - 0, - 0 - ], - [ - 63, - 0, - 0 - ], - [ - 55, - 0, - 0 - ] - ] - }, - { - "op": "null", - "name": "p42", - "inputs": [] - }, - { - "op": "null", - "name": "p43", - "inputs": [] - }, - { - "op": "tvm_op", - "name": "tvmgen_default_fused_nn_conv2d_add_nn_relu_62", - "attrs": { - "num_outputs": "1", - "num_inputs": "3", - "flatten_data": "0", - "func_name": "tvmgen_default_fused_nn_conv2d_add_nn_relu_6", - "out_layout": "", - "kernel_layout": "OIHW", - "data_layout": "NCHW", - "hash": "d73f4a6a3dee112a" - }, - "inputs": [ - [ - 64, - 0, - 0 - ], - [ - 65, - 0, - 0 - ], - [ - 66, - 0, - 0 - ] - ] - }, - { - "op": "null", - "name": "p44", - "inputs": [] - }, - { - "op": "null", - "name": "p45", - "inputs": [] - }, - { - "op": "tvm_op", - "name": "tvmgen_default_fused_nn_conv2d_add_nn_relu_72", - "attrs": { - "num_outputs": "1", - "num_inputs": "3", - "flatten_data": "0", - "func_name": "tvmgen_default_fused_nn_conv2d_add_nn_relu_7", - "out_layout": "", - "kernel_layout": "OIHW", - "data_layout": "NCHW", - "hash": "de429062ba3cf4d6" - }, - "inputs": [ - [ - 67, - 0, - 0 - ], - [ - 68, - 0, - 0 - ], - [ - 69, - 0, - 0 - ] - ] - }, - { - "op": "null", - "name": "p46", - "inputs": [] - }, - { - "op": "null", - "name": "p47", - "inputs": [] - }, - { - "op": "tvm_op", - "name": "tvmgen_default_fused_nn_conv2d_add_add_nn_relu_13", - "attrs": { - "num_outputs": "1", - "num_inputs": "4", - "flatten_data": "0", - "func_name": "tvmgen_default_fused_nn_conv2d_add_add_nn_relu_1", - "out_layout": "", - "kernel_layout": "OIHW", - "data_layout": "NCHW", - "hash": "0969ece4432b811e" - }, - "inputs": [ - [ - 70, - 0, - 0 - ], - [ - 71, - 0, - 0 - ], - [ - 72, - 0, - 0 - ], - [ - 64, - 0, - 0 - ] - ] - }, - { - "op": "null", - "name": "p48", - "inputs": [] - }, - { - "op": "null", - "name": "p49", - "inputs": [] - }, - { - "op": "tvm_op", - "name": "tvmgen_default_fused_nn_conv2d_add_nn_relu_8", - "attrs": { - "num_outputs": "1", - "num_inputs": "3", - "flatten_data": "0", - "func_name": "tvmgen_default_fused_nn_conv2d_add_nn_relu_8", - "out_layout": "", - "kernel_layout": "OIHW", - "data_layout": "NCHW", - "hash": "604a6fc7e26e23b6" - }, - "inputs": [ - [ - 73, - 0, - 0 - ], - [ - 74, - 0, - 0 - ], - [ - 75, - 0, - 0 - ] - ] - }, - { - "op": "null", - "name": "p50", - "inputs": [] - }, - { - "op": "null", - "name": "p51", - "inputs": [] - }, - { - "op": "tvm_op", - "name": "tvmgen_default_fused_nn_conv2d_add_nn_relu_9", - "attrs": { - "num_outputs": "1", - "num_inputs": "3", - "flatten_data": "0", - "func_name": "tvmgen_default_fused_nn_conv2d_add_nn_relu_9", - "out_layout": "", - "kernel_layout": "OIHW", - "data_layout": "NCHW", - "hash": "648dd4c33a5d215a" - }, - "inputs": [ - [ - 76, - 0, - 0 - ], - [ - 77, - 0, - 0 - ], - [ - 78, - 0, - 0 - ] - ] - }, - { - 
"op": "null", - "name": "p52", - "inputs": [] - }, - { - "op": "null", - "name": "p53", - "inputs": [] - }, - { - "op": "null", - "name": "p54", - "inputs": [] - }, - { - "op": "null", - "name": "p55", - "inputs": [] - }, - { - "op": "tvm_op", - "name": "tvmgen_default_fused_nn_conv2d_add_2", - "attrs": { - "num_outputs": "1", - "num_inputs": "3", - "flatten_data": "0", - "func_name": "tvmgen_default_fused_nn_conv2d_add_2", - "out_layout": "", - "kernel_layout": "OIHW", - "data_layout": "NCHW", - "hash": "15f852df8dc8197b" - }, - "inputs": [ - [ - 73, - 0, - 0 - ], - [ - 82, - 0, - 0 - ], - [ - 83, - 0, - 0 - ] - ] - }, - { - "op": "tvm_op", - "name": "tvmgen_default_fused_nn_conv2d_add_add_nn_relu_2", - "attrs": { - "num_outputs": "1", - "num_inputs": "4", - "flatten_data": "0", - "func_name": "tvmgen_default_fused_nn_conv2d_add_add_nn_relu_2", - "out_layout": "", - "kernel_layout": "OIHW", - "data_layout": "NCHW", - "hash": "46a13d64378dabfc" - }, - "inputs": [ - [ - 79, - 0, - 0 - ], - [ - 80, - 0, - 0 - ], - [ - 81, - 0, - 0 - ], - [ - 84, - 0, - 0 - ] - ] - }, - { - "op": "null", - "name": "p56", - "inputs": [] - }, - { - "op": "null", - "name": "p57", - "inputs": [] - }, - { - "op": "tvm_op", - "name": "tvmgen_default_fused_nn_conv2d_add_nn_relu_10", - "attrs": { - "num_outputs": "1", - "num_inputs": "3", - "flatten_data": "0", - "func_name": "tvmgen_default_fused_nn_conv2d_add_nn_relu_10", - "out_layout": "", - "kernel_layout": "OIHW", - "data_layout": "NCHW", - "hash": "9809334dcc248099" - }, - "inputs": [ - [ - 85, - 0, - 0 - ], - [ - 86, - 0, - 0 - ], - [ - 87, - 0, - 0 - ] - ] - }, - { - "op": "null", - "name": "p58", - "inputs": [] - }, - { - "op": "null", - "name": "p59", - "inputs": [] - }, - { - "op": "tvm_op", - "name": "tvmgen_default_fused_nn_conv2d_add_nn_relu_11", - "attrs": { - "num_outputs": "1", - "num_inputs": "3", - "flatten_data": "0", - "func_name": "tvmgen_default_fused_nn_conv2d_add_nn_relu_11", - "out_layout": "", - "kernel_layout": "OIHW", - "data_layout": "NCHW", - "hash": "4987d421a8ad8fd9" - }, - "inputs": [ - [ - 88, - 0, - 0 - ], - [ - 89, - 0, - 0 - ], - [ - 90, - 0, - 0 - ] - ] - }, - { - "op": "null", - "name": "p60", - "inputs": [] - }, - { - "op": "null", - "name": "p61", - "inputs": [] - }, - { - "op": "tvm_op", - "name": "tvmgen_default_fused_nn_conv2d_add_add_nn_relu_21", - "attrs": { - "num_outputs": "1", - "num_inputs": "4", - "flatten_data": "0", - "func_name": "tvmgen_default_fused_nn_conv2d_add_add_nn_relu_2", - "out_layout": "", - "kernel_layout": "OIHW", - "data_layout": "NCHW", - "hash": "46a13d64378dabfc" - }, - "inputs": [ - [ - 91, - 0, - 0 - ], - [ - 92, - 0, - 0 - ], - [ - 93, - 0, - 0 - ], - [ - 85, - 0, - 0 - ] - ] - }, - { - "op": "null", - "name": "p62", - "inputs": [] - }, - { - "op": "null", - "name": "p63", - "inputs": [] - }, - { - "op": "tvm_op", - "name": "tvmgen_default_fused_nn_conv2d_add_nn_relu_101", - "attrs": { - "num_outputs": "1", - "num_inputs": "3", - "flatten_data": "0", - "func_name": "tvmgen_default_fused_nn_conv2d_add_nn_relu_10", - "out_layout": "", - "kernel_layout": "OIHW", - "data_layout": "NCHW", - "hash": "9809334dcc248099" - }, - "inputs": [ - [ - 94, - 0, - 0 - ], - [ - 95, - 0, - 0 - ], - [ - 96, - 0, - 0 - ] - ] - }, - { - "op": "null", - "name": "p64", - "inputs": [] - }, - { - "op": "null", - "name": "p65", - "inputs": [] - }, - { - "op": "tvm_op", - "name": "tvmgen_default_fused_nn_conv2d_add_nn_relu_111", - "attrs": { - "num_outputs": "1", - "num_inputs": "3", - "flatten_data": "0", - 
"func_name": "tvmgen_default_fused_nn_conv2d_add_nn_relu_11", - "out_layout": "", - "kernel_layout": "OIHW", - "data_layout": "NCHW", - "hash": "4987d421a8ad8fd9" - }, - "inputs": [ - [ - 97, - 0, - 0 - ], - [ - 98, - 0, - 0 - ], - [ - 99, - 0, - 0 - ] - ] - }, - { - "op": "null", - "name": "p66", - "inputs": [] - }, - { - "op": "null", - "name": "p67", - "inputs": [] - }, - { - "op": "tvm_op", - "name": "tvmgen_default_fused_nn_conv2d_add_add_nn_relu_22", - "attrs": { - "num_outputs": "1", - "num_inputs": "4", - "flatten_data": "0", - "func_name": "tvmgen_default_fused_nn_conv2d_add_add_nn_relu_2", - "out_layout": "", - "kernel_layout": "OIHW", - "data_layout": "NCHW", - "hash": "46a13d64378dabfc" - }, - "inputs": [ - [ - 100, - 0, - 0 - ], - [ - 101, - 0, - 0 - ], - [ - 102, - 0, - 0 - ], - [ - 94, - 0, - 0 - ] - ] - }, - { - "op": "null", - "name": "p68", - "inputs": [] - }, - { - "op": "null", - "name": "p69", - "inputs": [] - }, - { - "op": "tvm_op", - "name": "tvmgen_default_fused_nn_conv2d_add_nn_relu_102", - "attrs": { - "num_outputs": "1", - "num_inputs": "3", - "flatten_data": "0", - "func_name": "tvmgen_default_fused_nn_conv2d_add_nn_relu_10", - "out_layout": "", - "kernel_layout": "OIHW", - "data_layout": "NCHW", - "hash": "9809334dcc248099" - }, - "inputs": [ - [ - 103, - 0, - 0 - ], - [ - 104, - 0, - 0 - ], - [ - 105, - 0, - 0 - ] - ] - }, - { - "op": "null", - "name": "p70", - "inputs": [] - }, - { - "op": "null", - "name": "p71", - "inputs": [] - }, - { - "op": "tvm_op", - "name": "tvmgen_default_fused_nn_conv2d_add_nn_relu_112", - "attrs": { - "num_outputs": "1", - "num_inputs": "3", - "flatten_data": "0", - "func_name": "tvmgen_default_fused_nn_conv2d_add_nn_relu_11", - "out_layout": "", - "kernel_layout": "OIHW", - "data_layout": "NCHW", - "hash": "4987d421a8ad8fd9" - }, - "inputs": [ - [ - 106, - 0, - 0 - ], - [ - 107, - 0, - 0 - ], - [ - 108, - 0, - 0 - ] - ] - }, - { - "op": "null", - "name": "p72", - "inputs": [] - }, - { - "op": "null", - "name": "p73", - "inputs": [] - }, - { - "op": "tvm_op", - "name": "tvmgen_default_fused_nn_conv2d_add_add_nn_relu_23", - "attrs": { - "num_outputs": "1", - "num_inputs": "4", - "flatten_data": "0", - "func_name": "tvmgen_default_fused_nn_conv2d_add_add_nn_relu_2", - "out_layout": "", - "kernel_layout": "OIHW", - "data_layout": "NCHW", - "hash": "46a13d64378dabfc" - }, - "inputs": [ - [ - 109, - 0, - 0 - ], - [ - 110, - 0, - 0 - ], - [ - 111, - 0, - 0 - ], - [ - 103, - 0, - 0 - ] - ] - }, - { - "op": "null", - "name": "p74", - "inputs": [] - }, - { - "op": "null", - "name": "p75", - "inputs": [] - }, - { - "op": "tvm_op", - "name": "tvmgen_default_fused_nn_conv2d_add_nn_relu_103", - "attrs": { - "num_outputs": "1", - "num_inputs": "3", - "flatten_data": "0", - "func_name": "tvmgen_default_fused_nn_conv2d_add_nn_relu_10", - "out_layout": "", - "kernel_layout": "OIHW", - "data_layout": "NCHW", - "hash": "9809334dcc248099" - }, - "inputs": [ - [ - 112, - 0, - 0 - ], - [ - 113, - 0, - 0 - ], - [ - 114, - 0, - 0 - ] - ] - }, - { - "op": "null", - "name": "p76", - "inputs": [] - }, - { - "op": "null", - "name": "p77", - "inputs": [] - }, - { - "op": "tvm_op", - "name": "tvmgen_default_fused_nn_conv2d_add_nn_relu_113", - "attrs": { - "num_outputs": "1", - "num_inputs": "3", - "flatten_data": "0", - "func_name": "tvmgen_default_fused_nn_conv2d_add_nn_relu_11", - "out_layout": "", - "kernel_layout": "OIHW", - "data_layout": "NCHW", - "hash": "4987d421a8ad8fd9" - }, - "inputs": [ - [ - 115, - 0, - 0 - ], - [ - 116, - 0, - 0 - ], - [ - 117, - 
0, - 0 - ] - ] - }, - { - "op": "null", - "name": "p78", - "inputs": [] - }, - { - "op": "null", - "name": "p79", - "inputs": [] - }, - { - "op": "tvm_op", - "name": "tvmgen_default_fused_nn_conv2d_add_add_nn_relu_24", - "attrs": { - "num_outputs": "1", - "num_inputs": "4", - "flatten_data": "0", - "func_name": "tvmgen_default_fused_nn_conv2d_add_add_nn_relu_2", - "out_layout": "", - "kernel_layout": "OIHW", - "data_layout": "NCHW", - "hash": "46a13d64378dabfc" - }, - "inputs": [ - [ - 118, - 0, - 0 - ], - [ - 119, - 0, - 0 - ], - [ - 120, - 0, - 0 - ], - [ - 112, - 0, - 0 - ] - ] - }, - { - "op": "null", - "name": "p80", - "inputs": [] - }, - { - "op": "null", - "name": "p81", - "inputs": [] - }, - { - "op": "tvm_op", - "name": "tvmgen_default_fused_nn_conv2d_add_nn_relu_104", - "attrs": { - "num_outputs": "1", - "num_inputs": "3", - "flatten_data": "0", - "func_name": "tvmgen_default_fused_nn_conv2d_add_nn_relu_10", - "out_layout": "", - "kernel_layout": "OIHW", - "data_layout": "NCHW", - "hash": "9809334dcc248099" - }, - "inputs": [ - [ - 121, - 0, - 0 - ], - [ - 122, - 0, - 0 - ], - [ - 123, - 0, - 0 - ] - ] - }, - { - "op": "null", - "name": "p82", - "inputs": [] - }, - { - "op": "null", - "name": "p83", - "inputs": [] - }, - { - "op": "tvm_op", - "name": "tvmgen_default_fused_nn_conv2d_add_nn_relu_114", - "attrs": { - "num_outputs": "1", - "num_inputs": "3", - "flatten_data": "0", - "func_name": "tvmgen_default_fused_nn_conv2d_add_nn_relu_11", - "out_layout": "", - "kernel_layout": "OIHW", - "data_layout": "NCHW", - "hash": "4987d421a8ad8fd9" - }, - "inputs": [ - [ - 124, - 0, - 0 - ], - [ - 125, - 0, - 0 - ], - [ - 126, - 0, - 0 - ] - ] - }, - { - "op": "null", - "name": "p84", - "inputs": [] - }, - { - "op": "null", - "name": "p85", - "inputs": [] - }, - { - "op": "tvm_op", - "name": "tvmgen_default_fused_nn_conv2d_add_add_nn_relu_25", - "attrs": { - "num_outputs": "1", - "num_inputs": "4", - "flatten_data": "0", - "func_name": "tvmgen_default_fused_nn_conv2d_add_add_nn_relu_2", - "out_layout": "", - "kernel_layout": "OIHW", - "data_layout": "NCHW", - "hash": "46a13d64378dabfc" - }, - "inputs": [ - [ - 127, - 0, - 0 - ], - [ - 128, - 0, - 0 - ], - [ - 129, - 0, - 0 - ], - [ - 121, - 0, - 0 - ] - ] - }, - { - "op": "null", - "name": "p86", - "inputs": [] - }, - { - "op": "null", - "name": "p87", - "inputs": [] - }, - { - "op": "tvm_op", - "name": "tvmgen_default_fused_nn_conv2d_add_nn_relu_12", - "attrs": { - "num_outputs": "1", - "num_inputs": "3", - "flatten_data": "0", - "func_name": "tvmgen_default_fused_nn_conv2d_add_nn_relu_12", - "out_layout": "", - "kernel_layout": "OIHW", - "data_layout": "NCHW", - "hash": "5a1ee1f6b935dcc0" - }, - "inputs": [ - [ - 130, - 0, - 0 - ], - [ - 131, - 0, - 0 - ], - [ - 132, - 0, - 0 - ] - ] - }, - { - "op": "null", - "name": "p88", - "inputs": [] - }, - { - "op": "null", - "name": "p89", - "inputs": [] - }, - { - "op": "tvm_op", - "name": "tvmgen_default_fused_nn_conv2d_add_nn_relu_13", - "attrs": { - "num_outputs": "1", - "num_inputs": "3", - "flatten_data": "0", - "func_name": "tvmgen_default_fused_nn_conv2d_add_nn_relu_13", - "out_layout": "", - "kernel_layout": "OIHW", - "data_layout": "NCHW", - "hash": "e53dce5097f01729" - }, - "inputs": [ - [ - 133, - 0, - 0 - ], - [ - 134, - 0, - 0 - ], - [ - 135, - 0, - 0 - ] - ] - }, - { - "op": "null", - "name": "p90", - "inputs": [] - }, - { - "op": "null", - "name": "p91", - "inputs": [] - }, - { - "op": "null", - "name": "p92", - "inputs": [] - }, - { - "op": "null", - "name": "p93", - "inputs": [] 
- }, - { - "op": "tvm_op", - "name": "tvmgen_default_fused_nn_conv2d_add_3", - "attrs": { - "num_outputs": "1", - "num_inputs": "3", - "flatten_data": "0", - "func_name": "tvmgen_default_fused_nn_conv2d_add_3", - "out_layout": "", - "kernel_layout": "OIHW", - "data_layout": "NCHW", - "hash": "e1cb4c9b9c9cbaec" - }, - "inputs": [ - [ - 130, - 0, - 0 - ], - [ - 139, - 0, - 0 - ], - [ - 140, - 0, - 0 - ] - ] - }, - { - "op": "tvm_op", - "name": "tvmgen_default_fused_nn_conv2d_add_add_nn_relu_3", - "attrs": { - "num_outputs": "1", - "num_inputs": "4", - "flatten_data": "0", - "func_name": "tvmgen_default_fused_nn_conv2d_add_add_nn_relu_3", - "out_layout": "", - "kernel_layout": "OIHW", - "data_layout": "NCHW", - "hash": "43abdf642e22489a" - }, - "inputs": [ - [ - 136, - 0, - 0 - ], - [ - 137, - 0, - 0 - ], - [ - 138, - 0, - 0 - ], - [ - 141, - 0, - 0 - ] - ] - }, - { - "op": "null", - "name": "p94", - "inputs": [] - }, - { - "op": "null", - "name": "p95", - "inputs": [] - }, - { - "op": "tvm_op", - "name": "tvmgen_default_fused_nn_conv2d_add_nn_relu_14", - "attrs": { - "num_outputs": "1", - "num_inputs": "3", - "flatten_data": "0", - "func_name": "tvmgen_default_fused_nn_conv2d_add_nn_relu_14", - "out_layout": "", - "kernel_layout": "OIHW", - "data_layout": "NCHW", - "hash": "43be2b9166ea4d2a" - }, - "inputs": [ - [ - 142, - 0, - 0 - ], - [ - 143, - 0, - 0 - ], - [ - 144, - 0, - 0 - ] - ] - }, - { - "op": "null", - "name": "p96", - "inputs": [] - }, - { - "op": "null", - "name": "p97", - "inputs": [] - }, - { - "op": "tvm_op", - "name": "tvmgen_default_fused_nn_conv2d_add_nn_relu_15", - "attrs": { - "num_outputs": "1", - "num_inputs": "3", - "flatten_data": "0", - "func_name": "tvmgen_default_fused_nn_conv2d_add_nn_relu_15", - "out_layout": "", - "kernel_layout": "OIHW", - "data_layout": "NCHW", - "hash": "e3f44326f571f4fe" - }, - "inputs": [ - [ - 145, - 0, - 0 - ], - [ - 146, - 0, - 0 - ], - [ - 147, - 0, - 0 - ] - ] - }, - { - "op": "null", - "name": "p98", - "inputs": [] - }, - { - "op": "null", - "name": "p99", - "inputs": [] - }, - { - "op": "tvm_op", - "name": "tvmgen_default_fused_nn_conv2d_add_add_nn_relu_31", - "attrs": { - "num_outputs": "1", - "num_inputs": "4", - "flatten_data": "0", - "func_name": "tvmgen_default_fused_nn_conv2d_add_add_nn_relu_3", - "out_layout": "", - "kernel_layout": "OIHW", - "data_layout": "NCHW", - "hash": "43abdf642e22489a" - }, - "inputs": [ - [ - 148, - 0, - 0 - ], - [ - 149, - 0, - 0 - ], - [ - 150, - 0, - 0 - ], - [ - 142, - 0, - 0 - ] - ] - }, - { - "op": "null", - "name": "p100", - "inputs": [] - }, - { - "op": "null", - "name": "p101", - "inputs": [] - }, - { - "op": "tvm_op", - "name": "tvmgen_default_fused_nn_conv2d_add_nn_relu_141", - "attrs": { - "num_outputs": "1", - "num_inputs": "3", - "flatten_data": "0", - "func_name": "tvmgen_default_fused_nn_conv2d_add_nn_relu_14", - "out_layout": "", - "kernel_layout": "OIHW", - "data_layout": "NCHW", - "hash": "43be2b9166ea4d2a" - }, - "inputs": [ - [ - 151, - 0, - 0 - ], - [ - 152, - 0, - 0 - ], - [ - 153, - 0, - 0 - ] - ] - }, - { - "op": "null", - "name": "p102", - "inputs": [] - }, - { - "op": "null", - "name": "p103", - "inputs": [] - }, - { - "op": "tvm_op", - "name": "tvmgen_default_fused_nn_conv2d_add_nn_relu_151", - "attrs": { - "num_outputs": "1", - "num_inputs": "3", - "flatten_data": "0", - "func_name": "tvmgen_default_fused_nn_conv2d_add_nn_relu_15", - "out_layout": "", - "kernel_layout": "OIHW", - "data_layout": "NCHW", - "hash": "e3f44326f571f4fe" - }, - "inputs": [ - [ - 154, - 0, - 0 - 
], - [ - 155, - 0, - 0 - ], - [ - 156, - 0, - 0 - ] - ] - }, - { - "op": "null", - "name": "p104", - "inputs": [] - }, - { - "op": "null", - "name": "p105", - "inputs": [] - }, - { - "op": "tvm_op", - "name": "tvmgen_default_fused_nn_conv2d_add_add_nn_relu_32", - "attrs": { - "num_outputs": "1", - "num_inputs": "4", - "flatten_data": "0", - "func_name": "tvmgen_default_fused_nn_conv2d_add_add_nn_relu_3", - "out_layout": "", - "kernel_layout": "OIHW", - "data_layout": "NCHW", - "hash": "43abdf642e22489a" - }, - "inputs": [ - [ - 157, - 0, - 0 - ], - [ - 158, - 0, - 0 - ], - [ - 159, - 0, - 0 - ], - [ - 151, - 0, - 0 - ] - ] - }, - { - "op": "tvm_op", - "name": "tvmgen_default_fused_nn_adaptive_avg_pool2d", - "attrs": { - "num_outputs": "1", - "num_inputs": "1", - "flatten_data": "0", - "func_name": "tvmgen_default_fused_nn_adaptive_avg_pool2d", - "layout": "NCHW", - "hash": "908e18ee0d823547" - }, - "inputs": [ - [ - 160, - 0, - 0 - ] - ] - }, - { - "op": "tvm_op", - "name": "reshape_nop", - "attrs": { - "num_outputs": "1", - "num_inputs": "1", - "flatten_data": "0", - "func_name": "__nop", - "hash": "21a54c75bd195367" - }, - "inputs": [ - [ - 161, - 0, - 0 - ] - ] - }, - { - "op": "null", - "name": "p106", - "inputs": [] - }, - { - "op": "null", - "name": "p107", - "inputs": [] - }, - { - "op": "tvm_op", - "name": "tvmgen_default_fused_nn_contrib_dense_pack_add", - "attrs": { - "num_outputs": "1", - "num_inputs": "3", - "flatten_data": "0", - "func_name": "tvmgen_default_fused_nn_contrib_dense_pack_add", - "hash": "bb4a7fb20003ebfa" - }, - "inputs": [ - [ - 162, - 0, - 0 - ], - [ - 163, - 0, - 0 - ], - [ - 164, - 0, - 0 - ] - ] - }, - { - "op": "tvm_op", - "name": "reshape_nop", - "attrs": { - "num_outputs": "1", - "num_inputs": "1", - "flatten_data": "0", - "func_name": "__nop", - "hash": "fdada9dcf8b6056b" - }, - "inputs": [ - [ - 165, - 0, - 0 - ] - ] - } - ], - "arg_nodes": [ - 0, - 1, - 2, - 5, - 6, - 8, - 9, - 11, - 12, - 13, - 14, - 17, - 18, - 20, - 21, - 23, - 24, - 26, - 27, - 29, - 30, - 32, - 33, - 35, - 36, - 38, - 39, - 41, - 42, - 43, - 44, - 47, - 48, - 50, - 51, - 53, - 54, - 56, - 57, - 59, - 60, - 62, - 63, - 65, - 66, - 68, - 69, - 71, - 72, - 74, - 75, - 77, - 78, - 80, - 81, - 82, - 83, - 86, - 87, - 89, - 90, - 92, - 93, - 95, - 96, - 98, - 99, - 101, - 102, - 104, - 105, - 107, - 108, - 110, - 111, - 113, - 114, - 116, - 117, - 119, - 120, - 122, - 123, - 125, - 126, - 128, - 129, - 131, - 132, - 134, - 135, - 137, - 138, - 139, - 140, - 143, - 144, - 146, - 147, - 149, - 150, - 152, - 153, - 155, - 156, - 158, - 159, - 163, - 164 - ], - "heads": [ - [ - 166, - 0, - 0 - ] - ], - "attrs": { - "dltype": [ - "list_str", - [ - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - 
"float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32", - "float32" - ] - ], - "storage_id": [ - "list_int", - [ - 0, - 1, - 2, - 3, - 4, - 5, - 6, - 3, - 7, - 8, - 9, - 10, - 11, - 12, - 13, - 3, - 4, - 14, - 15, - 9, - 16, - 17, - 3, - 18, - 19, - 9, - 20, - 21, - 3, - 22, - 23, - 4, - 24, - 25, - 3, - 26, - 27, - 4, - 28, - 29, - 9, - 30, - 31, - 32, - 33, - 4, - 3, - 34, - 35, - 9, - 36, - 37, - 4, - 38, - 39, - 9, - 40, - 41, - 4, - 42, - 43, - 3, - 44, - 45, - 4, - 46, - 47, - 3, - 48, - 49, - 9, - 50, - 51, - 3, - 52, - 53, - 9, - 54, - 55, - 4, - 56, - 57, - 58, - 59, - 9, - 3, - 60, - 61, - 4, - 62, - 63, - 9, - 64, - 65, - 4, - 66, - 67, - 9, - 68, - 69, - 3, - 70, - 71, - 9, - 72, - 73, - 3, - 74, - 75, - 4, - 76, - 77, - 3, - 78, - 79, - 4, - 80, - 81, - 9, - 82, - 83, - 4, - 84, - 85, - 9, - 86, - 87, - 3, - 88, - 89, - 9, - 90, - 91, - 3, - 92, - 93, - 94, - 95, - 96, - 97, - 98, - 4, - 3, - 99, - 100, - 94, - 101, - 102, - 103, - 104, - 105, - 9, - 106, - 107, - 94, - 108, - 109, - 103, - 110, - 111, - 4, - 94, - 94, - 112, - 113, - 114, - 114 - ] - ], - "shape": [ - "list_shape", - [ - [1, 3, 256, 256], - [64, 3, 7, 7], - [1, 64, 1, 1], - [1, 64, 128, 128], - [1, 64, 64, 64], - [64, 64, 1, 1], - [1, 64, 1, 1], - [1, 64, 64, 64], - [64, 64, 3, 3], - [1, 64, 1, 1], - [1, 64, 64, 64], - [256, 64, 1, 1], - [1, 256, 1, 1], - [256, 64, 1, 1], - [1, 256, 1, 1], - [1, 256, 64, 64], - [1, 256, 64, 64], - [64, 256, 1, 1], - [1, 64, 1, 1], - [1, 64, 64, 64], - [64, 64, 3, 3], - [1, 64, 1, 1], - [1, 64, 64, 64], - [256, 64, 1, 1], - [1, 256, 1, 1], - [1, 256, 64, 64], - [64, 256, 1, 1], - [1, 64, 1, 1], - [1, 64, 64, 64], - [64, 64, 3, 3], - [1, 64, 1, 1], - [1, 64, 64, 64], - [256, 64, 1, 1], - [1, 256, 1, 1], - [1, 256, 64, 64], - [128, 256, 1, 1], - [1, 128, 1, 1], - [1, 128, 64, 64], - [128, 128, 3, 3], - [1, 128, 1, 1], - [1, 128, 32, 32], - [512, 128, 1, 1], - [1, 512, 1, 1], - [512, 256, 1, 1], - [1, 512, 1, 1], - [1, 512, 32, 32], - [1, 512, 32, 32], - [128, 512, 1, 1], - [1, 128, 1, 1], - [1, 128, 32, 32], - [128, 128, 3, 3], - [1, 128, 1, 1], - [1, 128, 32, 32], - [512, 128, 1, 1], - [1, 512, 1, 1], - [1, 512, 32, 32], - [128, 512, 1, 1], - [1, 128, 1, 1], - [1, 128, 32, 32], - [128, 128, 3, 3], - [1, 128, 1, 1], - [1, 128, 32, 32], - [512, 128, 1, 1], - [1, 
512, 1, 1], - [1, 512, 32, 32], - [128, 512, 1, 1], - [1, 128, 1, 1], - [1, 128, 32, 32], - [128, 128, 3, 3], - [1, 128, 1, 1], - [1, 128, 32, 32], - [512, 128, 1, 1], - [1, 512, 1, 1], - [1, 512, 32, 32], - [256, 512, 1, 1], - [1, 256, 1, 1], - [1, 256, 32, 32], - [256, 256, 3, 3], - [1, 256, 1, 1], - [1, 256, 16, 16], - [1024, 256, 1, 1], - [1, 1024, 1, 1], - [1024, 512, 1, 1], - [1, 1024, 1, 1], - [1, 1024, 16, 16], - [1, 1024, 16, 16], - [256, 1024, 1, 1], - [1, 256, 1, 1], - [1, 256, 16, 16], - [256, 256, 3, 3], - [1, 256, 1, 1], - [1, 256, 16, 16], - [1024, 256, 1, 1], - [1, 1024, 1, 1], - [1, 1024, 16, 16], - [256, 1024, 1, 1], - [1, 256, 1, 1], - [1, 256, 16, 16], - [256, 256, 3, 3], - [1, 256, 1, 1], - [1, 256, 16, 16], - [1024, 256, 1, 1], - [1, 1024, 1, 1], - [1, 1024, 16, 16], - [256, 1024, 1, 1], - [1, 256, 1, 1], - [1, 256, 16, 16], - [256, 256, 3, 3], - [1, 256, 1, 1], - [1, 256, 16, 16], - [1024, 256, 1, 1], - [1, 1024, 1, 1], - [1, 1024, 16, 16], - [256, 1024, 1, 1], - [1, 256, 1, 1], - [1, 256, 16, 16], - [256, 256, 3, 3], - [1, 256, 1, 1], - [1, 256, 16, 16], - [1024, 256, 1, 1], - [1, 1024, 1, 1], - [1, 1024, 16, 16], - [256, 1024, 1, 1], - [1, 256, 1, 1], - [1, 256, 16, 16], - [256, 256, 3, 3], - [1, 256, 1, 1], - [1, 256, 16, 16], - [1024, 256, 1, 1], - [1, 1024, 1, 1], - [1, 1024, 16, 16], - [512, 1024, 1, 1], - [1, 512, 1, 1], - [1, 512, 16, 16], - [512, 512, 3, 3], - [1, 512, 1, 1], - [1, 512, 8, 8], - [2048, 512, 1, 1], - [1, 2048, 1, 1], - [2048, 1024, 1, 1], - [1, 2048, 1, 1], - [1, 2048, 8, 8], - [1, 2048, 8, 8], - [512, 2048, 1, 1], - [1, 512, 1, 1], - [1, 512, 8, 8], - [512, 512, 3, 3], - [1, 512, 1, 1], - [1, 512, 8, 8], - [2048, 512, 1, 1], - [1, 2048, 1, 1], - [1, 2048, 8, 8], - [512, 2048, 1, 1], - [1, 512, 1, 1], - [1, 512, 8, 8], - [512, 512, 3, 3], - [1, 512, 1, 1], - [1, 512, 8, 8], - [2048, 512, 1, 1], - [1, 2048, 1, 1], - [1, 2048, 8, 8], - [1, 2048, 1, 1], - [1, 2048], - [14, 2048, 14], - [196], - [1, 196], - [1, 98, 2] - ] - ] - }, - "node_row_ptr": [ - 0, - 1, - 2, - 3, - 4, - 5, - 6, - 7, - 8, - 9, - 10, - 11, - 12, - 13, - 14, - 15, - 16, - 17, - 18, - 19, - 20, - 21, - 22, - 23, - 24, - 25, - 26, - 27, - 28, - 29, - 30, - 31, - 32, - 33, - 34, - 35, - 36, - 37, - 38, - 39, - 40, - 41, - 42, - 43, - 44, - 45, - 46, - 47, - 48, - 49, - 50, - 51, - 52, - 53, - 54, - 55, - 56, - 57, - 58, - 59, - 60, - 61, - 62, - 63, - 64, - 65, - 66, - 67, - 68, - 69, - 70, - 71, - 72, - 73, - 74, - 75, - 76, - 77, - 78, - 79, - 80, - 81, - 82, - 83, - 84, - 85, - 86, - 87, - 88, - 89, - 90, - 91, - 92, - 93, - 94, - 95, - 96, - 97, - 98, - 99, - 100, - 101, - 102, - 103, - 104, - 105, - 106, - 107, - 108, - 109, - 110, - 111, - 112, - 113, - 114, - 115, - 116, - 117, - 118, - 119, - 120, - 121, - 122, - 123, - 124, - 125, - 126, - 127, - 128, - 129, - 130, - 131, - 132, - 133, - 134, - 135, - 136, - 137, - 138, - 139, - 140, - 141, - 142, - 143, - 144, - 145, - 146, - 147, - 148, - 149, - 150, - 151, - 152, - 153, - 154, - 155, - 156, - 157, - 158, - 159, - 160, - 161, - 162, - 163, - 164, - 165, - 166, - 167 - ] -} \ No newline at end of file diff --git a/how-to/sample_app/exe/face_deeppose_cpu/deploy.params b/how-to/sample_app/exe/face_deeppose_cpu/deploy.params deleted file mode 100644 index 502ffbd..0000000 Binary files a/how-to/sample_app/exe/face_deeppose_cpu/deploy.params and /dev/null differ diff --git a/how-to/sample_app/exe/face_deeppose_cpu/deploy.so b/how-to/sample_app/exe/face_deeppose_cpu/deploy.so deleted file mode 100644 index 6890ac8..0000000 
Binary files a/how-to/sample_app/exe/face_deeppose_cpu/deploy.so and /dev/null differ diff --git a/how-to/sample_app/exe/face_deeppose_pt/deploy.json b/how-to/sample_app/exe/face_deeppose_pt/deploy.json deleted file mode 100644 index fd0b26c..0000000 --- a/how-to/sample_app/exe/face_deeppose_pt/deploy.json +++ /dev/null @@ -1,75 +0,0 @@ -{ - "nodes": [ - { - "op": "null", - "name": "input0", - "inputs": [] - }, - { - "op": "tvm_op", - "name": "tvmgen_default_tvmgen_default_mera_drp_0", - "attrs": { - "num_outputs": "1", - "num_inputs": "1", - "flatten_data": "0", - "func_name": "tvmgen_default_tvmgen_default_mera_drp_0" - }, - "inputs": [ - [ - 0, - 0, - 0 - ] - ] - }, - { - "op": "tvm_op", - "name": "reshape_nop", - "attrs": { - "num_outputs": "1", - "num_inputs": "1", - "flatten_data": "0", - "func_name": "__nop", - "hash": "b57b54c7f46c19e0" - }, - "inputs": [ - [ - 1, - 0, - 0 - ] - ] - } - ], - "arg_nodes": [0], - "heads": [ - [ - 2, - 0, - 0 - ] - ], - "attrs": { - "dltype": [ - "list_str", - [ - "float32", - "float16", - "float16" - ] - ], - "storage_id": [ - "list_int", - [0, 1, 1] - ], - "shape": [ - "list_shape", - [ - [1, 3, 256, 256], - [1, 196], - [1, 98, 2] - ] - ] - }, - "node_row_ptr": [0, 1, 2, 3] -} \ No newline at end of file diff --git a/how-to/sample_app/exe/face_deeppose_pt/deploy.params b/how-to/sample_app/exe/face_deeppose_pt/deploy.params deleted file mode 100644 index 1011def..0000000 Binary files a/how-to/sample_app/exe/face_deeppose_pt/deploy.params and /dev/null differ diff --git a/how-to/sample_app/exe/face_deeppose_pt/deploy.so b/how-to/sample_app/exe/face_deeppose_pt/deploy.so deleted file mode 100644 index afe2f98..0000000 Binary files a/how-to/sample_app/exe/face_deeppose_pt/deploy.so and /dev/null differ diff --git a/how-to/sample_app/exe/hrnet_onnx/deploy.json b/how-to/sample_app/exe/hrnet_onnx/deploy.json deleted file mode 100644 index d16d5c7..0000000 --- a/how-to/sample_app/exe/hrnet_onnx/deploy.json +++ /dev/null @@ -1,55 +0,0 @@ -{ - "nodes": [ - { - "op": "null", - "name": "input.1", - "inputs": [] - }, - { - "op": "tvm_op", - "name": "tvmgen_default_tvmgen_default_mera_drp_0", - "attrs": { - "num_outputs": "1", - "num_inputs": "1", - "flatten_data": "0", - "func_name": "tvmgen_default_tvmgen_default_mera_drp_0" - }, - "inputs": [ - [ - 0, - 0, - 0 - ] - ] - } - ], - "arg_nodes": [0], - "heads": [ - [ - 1, - 0, - 0 - ] - ], - "attrs": { - "dltype": [ - "list_str", - [ - "float32", - "float16" - ] - ], - "storage_id": [ - "list_int", - [0, 1] - ], - "shape": [ - "list_shape", - [ - [1, 3, 256, 192], - [1, 17, 64, 48] - ] - ] - }, - "node_row_ptr": [0, 1, 2] -} \ No newline at end of file diff --git a/how-to/sample_app/exe/hrnet_onnx/deploy.params b/how-to/sample_app/exe/hrnet_onnx/deploy.params deleted file mode 100644 index 1011def..0000000 Binary files a/how-to/sample_app/exe/hrnet_onnx/deploy.params and /dev/null differ diff --git a/how-to/sample_app/exe/hrnet_onnx/deploy.so b/how-to/sample_app/exe/hrnet_onnx/deploy.so deleted file mode 100644 index e264c4b..0000000 --- a/how-to/sample_app/exe/hrnet_onnx/deploy.so +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:bf46e5572cf4f143cccc52c7f478686d037a3b9f3457c4b04de5cb5ac783160d -size 115701544 diff --git a/how-to/sample_app/exe/sample_app_drpai_tvm_usbcam_http b/how-to/sample_app/exe/sample_app_drpai_tvm_usbcam_http index 7baf0a9..19fab8b 100644 Binary files a/how-to/sample_app/exe/sample_app_drpai_tvm_usbcam_http and 
b/how-to/sample_app/exe/sample_app_drpai_tvm_usbcam_http differ diff --git a/how-to/sample_app/exe/synset_words_imagenet.txt b/how-to/sample_app/exe/synset_words_imagenet.txt new file mode 100755 index 0000000..722c984 --- /dev/null +++ b/how-to/sample_app/exe/synset_words_imagenet.txt @@ -0,0 +1,1000 @@ +tench, Tinca tinca +goldfish, Carassius auratus +great white shark, white shark, man-eater, man-eating shark, Carcharodon carcharias +tiger shark, Galeocerdo cuvieri +hammerhead, hammerhead shark +electric ray, crampfish, numbfish, torpedo +stingray +cock +hen +ostrich, Struthio camelus +brambling, Fringilla montifringilla +goldfinch, Carduelis carduelis +house finch, linnet, Carpodacus mexicanus +junco, snowbird +indigo bunting, indigo finch, indigo bird, Passerina cyanea +robin, American robin, Turdus migratorius +bulbul +jay +magpie +chickadee +water ouzel, dipper +kite +bald eagle, American eagle, Haliaeetus leucocephalus +vulture +great grey owl, great gray owl, Strix nebulosa +European fire salamander, Salamandra salamandra +common newt, Triturus vulgaris +eft +spotted salamander, Ambystoma maculatum +axolotl, mud puppy, Ambystoma mexicanum +bullfrog, Rana catesbeiana +tree frog, tree-frog +tailed frog, bell toad, ribbed toad, tailed toad, Ascaphus trui +loggerhead, loggerhead turtle, Caretta caretta +leatherback turtle, leatherback, leathery turtle, Dermochelys coriacea +mud turtle +terrapin +box turtle, box tortoise +banded gecko +common iguana, iguana, Iguana iguana +American chameleon, anole, Anolis carolinensis +whiptail, whiptail lizard +agama +frilled lizard, Chlamydosaurus kingi +alligator lizard +Gila monster, Heloderma suspectum +green lizard, Lacerta viridis +African chameleon, Chamaeleo chamaeleon +Komodo dragon, Komodo lizard, dragon lizard, giant lizard, Varanus komodoensis +African crocodile, Nile crocodile, Crocodylus niloticus +American alligator, Alligator mississipiensis +triceratops +thunder snake, worm snake, Carphophis amoenus +ringneck snake, ring-necked snake, ring snake +hognose snake, puff adder, sand viper +green snake, grass snake +king snake, kingsnake +garter snake, grass snake +water snake +vine snake +night snake, Hypsiglena torquata +boa constrictor, Constrictor constrictor +rock python, rock snake, Python sebae +Indian cobra, Naja naja +green mamba +sea snake +horned viper, cerastes, sand viper, horned asp, Cerastes cornutus +diamondback, diamondback rattlesnake, Crotalus adamanteus +sidewinder, horned rattlesnake, Crotalus cerastes +trilobite +harvestman, daddy longlegs, Phalangium opilio +scorpion +black and gold garden spider, Argiope aurantia +barn spider, Araneus cavaticus +garden spider, Aranea diademata +black widow, Latrodectus mactans +tarantula +wolf spider, hunting spider +tick +centipede +black grouse +ptarmigan +ruffed grouse, partridge, Bonasa umbellus +prairie chicken, prairie grouse, prairie fowl +peacock +quail +partridge +African grey, African gray, Psittacus erithacus +macaw +sulphur-crested cockatoo, Kakatoe galerita, Cacatua galerita +lorikeet +coucal +bee eater +hornbill +hummingbird +jacamar +toucan +drake +red-breasted merganser, Mergus serrator +goose +black swan, Cygnus atratus +tusker +echidna, spiny anteater, anteater +platypus, duckbill, duckbilled platypus, duck-billed platypus, Ornithorhynchus anatinus +wallaby, brush kangaroo +koala, koala bear, kangaroo bear, native bear, Phascolarctos cinereus +wombat +jellyfish +sea anemone, anemone +brain coral +flatworm, platyhelminth +nematode, nematode worm, roundworm +conch 
+snail +slug +sea slug, nudibranch +chiton, coat-of-mail shell, sea cradle, polyplacophore +chambered nautilus, pearly nautilus, nautilus +Dungeness crab, Cancer magister +rock crab, Cancer irroratus +fiddler crab +king crab, Alaska crab, Alaskan king crab, Alaska king crab, Paralithodes camtschatica +American lobster, Northern lobster, Maine lobster, Homarus americanus +spiny lobster, langouste, rock lobster, crawfish, crayfish, sea crawfish +crayfish, crawfish, crawdad, crawdaddy +hermit crab +isopod +white stork, Ciconia ciconia +black stork, Ciconia nigra +spoonbill +flamingo +little blue heron, Egretta caerulea +American egret, great white heron, Egretta albus +bittern +crane +limpkin, Aramus pictus +European gallinule, Porphyrio porphyrio +American coot, marsh hen, mud hen, water hen, Fulica americana +bustard +ruddy turnstone, Arenaria interpres +red-backed sandpiper, dunlin, Erolia alpina +redshank, Tringa totanus +dowitcher +oystercatcher, oyster catcher +pelican +king penguin, Aptenodytes patagonica +albatross, mollymawk +grey whale, gray whale, devilfish, Eschrichtius gibbosus, Eschrichtius robustus +killer whale, killer, orca, grampus, sea wolf, Orcinus orca +dugong, Dugong dugon +sea lion +Chihuahua +Japanese spaniel +Maltese dog, Maltese terrier, Maltese +Pekinese, Pekingese, Peke +Shih-Tzu +Blenheim spaniel +papillon +toy terrier +Rhodesian ridgeback +Afghan hound, Afghan +basset, basset hound +beagle +bloodhound, sleuthhound +bluetick +black-and-tan coonhound +Walker hound, Walker foxhound +English foxhound +redbone +borzoi, Russian wolfhound +Irish wolfhound +Italian greyhound +whippet +Ibizan hound, Ibizan Podenco +Norwegian elkhound, elkhound +otterhound, otter hound +Saluki, gazelle hound +Scottish deerhound, deerhound +Weimaraner +Staffordshire bullterrier, Staffordshire bull terrier +American Staffordshire terrier, Staffordshire terrier, American pit bull terrier, pit bull terrier +Bedlington terrier +Border terrier +Kerry blue terrier +Irish terrier +Norfolk terrier +Norwich terrier +Yorkshire terrier +wire-haired fox terrier +Lakeland terrier +Sealyham terrier, Sealyham +Airedale, Airedale terrier +cairn, cairn terrier +Australian terrier +Dandie Dinmont, Dandie Dinmont terrier +Boston bull, Boston terrier +miniature schnauzer +giant schnauzer +standard schnauzer +Scotch terrier, Scottish terrier, Scottie +Tibetan terrier, chrysanthemum dog +silky terrier, Sydney silky +soft-coated wheaten terrier +West Highland white terrier +Lhasa, Lhasa apso +flat-coated retriever +curly-coated retriever +golden retriever +Labrador retriever +Chesapeake Bay retriever +German short-haired pointer +vizsla, Hungarian pointer +English setter +Irish setter, red setter +Gordon setter +Brittany spaniel +clumber, clumber spaniel +English springer, English springer spaniel +Welsh springer spaniel +cocker spaniel, English cocker spaniel, cocker +Sussex spaniel +Irish water spaniel +kuvasz +schipperke +groenendael +malinois +briard +kelpie +komondor +Old English sheepdog, bobtail +Shetland sheepdog, Shetland sheep dog, Shetland +collie +Border collie +Bouvier des Flandres, Bouviers des Flandres +Rottweiler +German shepherd, German shepherd dog, German police dog, alsatian +Doberman, Doberman pinscher +miniature pinscher +Greater Swiss Mountain dog +Bernese mountain dog +Appenzeller +EntleBucher +boxer +bull mastiff +Tibetan mastiff +French bulldog +Great Dane +Saint Bernard, St Bernard +Eskimo dog, husky +malamute, malemute, Alaskan malamute +Siberian husky +dalmatian, coach dog, carriage 
dog +affenpinscher, monkey pinscher, monkey dog +basenji +pug, pug-dog +Leonberg +Newfoundland, Newfoundland dog +Great Pyrenees +Samoyed, Samoyede +Pomeranian +chow, chow chow +keeshond +Brabancon griffon +Pembroke, Pembroke Welsh corgi +Cardigan, Cardigan Welsh corgi +toy poodle +miniature poodle +standard poodle +Mexican hairless +timber wolf, grey wolf, gray wolf, Canis lupus +white wolf, Arctic wolf, Canis lupus tundrarum +red wolf, maned wolf, Canis rufus, Canis niger +coyote, prairie wolf, brush wolf, Canis latrans +dingo, warrigal, warragal, Canis dingo +dhole, Cuon alpinus +African hunting dog, hyena dog, Cape hunting dog, Lycaon pictus +hyena, hyaena +red fox, Vulpes vulpes +kit fox, Vulpes macrotis +Arctic fox, white fox, Alopex lagopus +grey fox, gray fox, Urocyon cinereoargenteus +tabby, tabby cat +tiger cat +Persian cat +Siamese cat, Siamese +Egyptian cat +cougar, puma, catamount, mountain lion, painter, panther, Felis concolor +lynx, catamount +leopard, Panthera pardus +snow leopard, ounce, Panthera uncia +jaguar, panther, Panthera onca, Felis onca +lion, king of beasts, Panthera leo +tiger, Panthera tigris +cheetah, chetah, Acinonyx jubatus +brown bear, bruin, Ursus arctos +American black bear, black bear, Ursus americanus, Euarctos americanus +ice bear, polar bear, Ursus Maritimus, Thalarctos maritimus +sloth bear, Melursus ursinus, Ursus ursinus +mongoose +meerkat, mierkat +tiger beetle +ladybug, ladybeetle, lady beetle, ladybird, ladybird beetle +ground beetle, carabid beetle +long-horned beetle, longicorn, longicorn beetle +leaf beetle, chrysomelid +dung beetle +rhinoceros beetle +weevil +fly +bee +ant, emmet, pismire +grasshopper, hopper +cricket +walking stick, walkingstick, stick insect +cockroach, roach +mantis, mantid +cicada, cicala +leafhopper +lacewing, lacewing fly +dragonfly, darning needle, devil's darning needle, sewing needle, snake feeder, snake doctor, mosquito hawk, skeeter hawk +damselfly +admiral +ringlet, ringlet butterfly +monarch, monarch butterfly, milkweed butterfly, Danaus plexippus +cabbage butterfly +sulphur butterfly, sulfur butterfly +lycaenid, lycaenid butterfly +starfish, sea star +sea urchin +sea cucumber, holothurian +wood rabbit, cottontail, cottontail rabbit +hare +Angora, Angora rabbit +hamster +porcupine, hedgehog +fox squirrel, eastern fox squirrel, Sciurus niger +marmot +beaver +guinea pig, Cavia cobaya +sorrel +zebra +hog, pig, grunter, squealer, Sus scrofa +wild boar, boar, Sus scrofa +warthog +hippopotamus, hippo, river horse, Hippopotamus amphibius +ox +water buffalo, water ox, Asiatic buffalo, Bubalus bubalis +bison +ram, tup +bighorn, bighorn sheep, cimarron, Rocky Mountain bighorn, Rocky Mountain sheep, Ovis canadensis +ibex, Capra ibex +hartebeest +impala, Aepyceros melampus +gazelle +Arabian camel, dromedary, Camelus dromedarius +llama +weasel +mink +polecat, fitch, foulmart, foumart, Mustela putorius +black-footed ferret, ferret, Mustela nigripes +otter +skunk, polecat, wood pussy +badger +armadillo +three-toed sloth, ai, Bradypus tridactylus +orangutan, orang, orangutang, Pongo pygmaeus +gorilla, Gorilla gorilla +chimpanzee, chimp, Pan troglodytes +gibbon, Hylobates lar +siamang, Hylobates syndactylus, Symphalangus syndactylus +guenon, guenon monkey +patas, hussar monkey, Erythrocebus patas +baboon +macaque +langur +colobus, colobus monkey +proboscis monkey, Nasalis larvatus +marmoset +capuchin, ringtail, Cebus capucinus +howler monkey, howler +titi, titi monkey +spider monkey, Ateles geoffroyi +squirrel monkey, Saimiri 
sciureus +Madagascar cat, ring-tailed lemur, Lemur catta +indri, indris, Indri indri, Indri brevicaudatus +Indian elephant, Elephas maximus +African elephant, Loxodonta africana +lesser panda, red panda, panda, bear cat, cat bear, Ailurus fulgens +giant panda, panda, panda bear, coon bear, Ailuropoda melanoleuca +barracouta, snoek +eel +coho, cohoe, coho salmon, blue jack, silver salmon, Oncorhynchus kisutch +rock beauty, Holocanthus tricolor +anemone fish +sturgeon +gar, garfish, garpike, billfish, Lepisosteus osseus +lionfish +puffer, pufferfish, blowfish, globefish +abacus +abaya +academic gown, academic robe, judge's robe +accordion, piano accordion, squeeze box +acoustic guitar +aircraft carrier, carrier, flattop, attack aircraft carrier +airliner +airship, dirigible +altar +ambulance +amphibian, amphibious vehicle +analog clock +apiary, bee house +apron +ashcan, trash can, garbage can, wastebin, ash bin, ash-bin, ashbin, dustbin, trash barrel, trash bin +assault rifle, assault gun +backpack, back pack, knapsack, packsack, rucksack, haversack +bakery, bakeshop, bakehouse +balance beam, beam +balloon +ballpoint, ballpoint pen, ballpen, Biro +Band Aid +banjo +bannister, banister, balustrade, balusters, handrail +barbell +barber chair +barbershop +barn +barometer +barrel, cask +barrow, garden cart, lawn cart, wheelbarrow +baseball +basketball +bassinet +bassoon +bathing cap, swimming cap +bath towel +bathtub, bathing tub, bath, tub +beach wagon, station wagon, wagon, estate car, beach waggon, station waggon, waggon +beacon, lighthouse, beacon light, pharos +beaker +bearskin, busby, shako +beer bottle +beer glass +bell cote, bell cot +bib +bicycle-built-for-two, tandem bicycle, tandem +bikini, two-piece +binder, ring-binder +binoculars, field glasses, opera glasses +birdhouse +boathouse +bobsled, bobsleigh, bob +bolo tie, bolo, bola tie, bola +bonnet, poke bonnet +bookcase +bookshop, bookstore, bookstall +bottlecap +bow +bow tie, bow-tie, bowtie +brass, memorial tablet, plaque +brassiere, bra, bandeau +breakwater, groin, groyne, mole, bulwark, seawall, jetty +breastplate, aegis, egis +broom +bucket, pail +buckle +bulletproof vest +bullet train, bullet +butcher shop, meat market +cab, hack, taxi, taxicab +caldron, cauldron +candle, taper, wax light +cannon +canoe +can opener, tin opener +cardigan +car mirror +carousel, carrousel, merry-go-round, roundabout, whirligig +carpenter's kit, tool kit +carton +car wheel +cash machine, cash dispenser, automated teller machine, automatic teller machine, automated teller, automatic teller, ATM +cassette +cassette player +castle +catamaran +CD player +cello, violoncello +cellular telephone, cellular phone, cellphone, cell, mobile phone +chain +chainlink fence +chain mail, ring mail, mail, chain armor, chain armour, ring armor, ring armour +chain saw, chainsaw +chest +chiffonier, commode +chime, bell, gong +china cabinet, china closet +Christmas stocking +church, church building +cinema, movie theater, movie theatre, movie house, picture palace +cleaver, meat cleaver, chopper +cliff dwelling +cloak +clog, geta, patten, sabot +cocktail shaker +coffee mug +coffeepot +coil, spiral, volute, whorl, helix +combination lock +computer keyboard, keypad +confectionery, confectionary, candy store +container ship, containership, container vessel +convertible +corkscrew, bottle screw +cornet, horn, trumpet, trump +cowboy boot +cowboy hat, ten-gallon hat +cradle +crane +crash helmet +crate +crib, cot +Crock Pot +croquet ball +crutch +cuirass +dam, dike, dyke +desk 
+desktop computer +dial telephone, dial phone +diaper, nappy, napkin +digital clock +digital watch +dining table, board +dishrag, dishcloth +dishwasher, dish washer, dishwashing machine +disk brake, disc brake +dock, dockage, docking facility +dogsled, dog sled, dog sleigh +dome +doormat, welcome mat +drilling platform, offshore rig +drum, membranophone, tympan +drumstick +dumbbell +Dutch oven +electric fan, blower +electric guitar +electric locomotive +entertainment center +envelope +espresso maker +face powder +feather boa, boa +file, file cabinet, filing cabinet +fireboat +fire engine, fire truck +fire screen, fireguard +flagpole, flagstaff +flute, transverse flute +folding chair +football helmet +forklift +fountain +fountain pen +four-poster +freight car +French horn, horn +frying pan, frypan, skillet +fur coat +garbage truck, dustcart +gasmask, respirator, gas helmet +gas pump, gasoline pump, petrol pump, island dispenser +goblet +go-kart +golf ball +golfcart, golf cart +gondola +gong, tam-tam +gown +grand piano, grand +greenhouse, nursery, glasshouse +grille, radiator grille +grocery store, grocery, food market, market +guillotine +hair slide +hair spray +half track +hammer +hamper +hand blower, blow dryer, blow drier, hair dryer, hair drier +hand-held computer, hand-held microcomputer +handkerchief, hankie, hanky, hankey +hard disc, hard disk, fixed disk +harmonica, mouth organ, harp, mouth harp +harp +harvester, reaper +hatchet +holster +home theater, home theatre +honeycomb +hook, claw +hoopskirt, crinoline +horizontal bar, high bar +horse cart, horse-cart +hourglass +iPod +iron, smoothing iron +jack-o'-lantern +jean, blue jean, denim +jeep, landrover +jersey, T-shirt, tee shirt +jigsaw puzzle +jinrikisha, ricksha, rickshaw +joystick +kimono +knee pad +knot +lab coat, laboratory coat +ladle +lampshade, lamp shade +laptop, laptop computer +lawn mower, mower +lens cap, lens cover +letter opener, paper knife, paperknife +library +lifeboat +lighter, light, igniter, ignitor +limousine, limo +liner, ocean liner +lipstick, lip rouge +Loafer +lotion +loudspeaker, speaker, speaker unit, loudspeaker system, speaker system +loupe, jeweler's loupe +lumbermill, sawmill +magnetic compass +mailbag, postbag +mailbox, letter box +maillot +maillot, tank suit +manhole cover +maraca +marimba, xylophone +mask +matchstick +maypole +maze, labyrinth +measuring cup +medicine chest, medicine cabinet +megalith, megalithic structure +microphone, mike +microwave, microwave oven +military uniform +milk can +minibus +miniskirt, mini +minivan +missile +mitten +mixing bowl +mobile home, manufactured home +Model T +modem +monastery +monitor +moped +mortar +mortarboard +mosque +mosquito net +motor scooter, scooter +mountain bike, all-terrain bike, off-roader +mountain tent +mouse, computer mouse +mousetrap +moving van +muzzle +nail +neck brace +necklace +nipple +notebook, notebook computer +obelisk +oboe, hautboy, hautbois +ocarina, sweet potato +odometer, hodometer, mileometer, milometer +oil filter +organ, pipe organ +oscilloscope, scope, cathode-ray oscilloscope, CRO +overskirt +oxcart +oxygen mask +packet +paddle, boat paddle +paddlewheel, paddle wheel +padlock +paintbrush +pajama, pyjama, pj's, jammies +palace +panpipe, pandean pipe, syrinx +paper towel +parachute, chute +parallel bars, bars +park bench +parking meter +passenger car, coach, carriage +patio, terrace +pay-phone, pay-station +pedestal, plinth, footstall +pencil box, pencil case +pencil sharpener +perfume, essence +Petri dish +photocopier +pick, 
plectrum, plectron +pickelhaube +picket fence, paling +pickup, pickup truck +pier +piggy bank, penny bank +pill bottle +pillow +ping-pong ball +pinwheel +pirate, pirate ship +pitcher, ewer +plane, carpenter's plane, woodworking plane +planetarium +plastic bag +plate rack +plow, plough +plunger, plumber's helper +Polaroid camera, Polaroid Land camera +pole +police van, police wagon, paddy wagon, patrol wagon, wagon, black Maria +poncho +pool table, billiard table, snooker table +pop bottle, soda bottle +pot, flowerpot +potter's wheel +power drill +prayer rug, prayer mat +printer +prison, prison house +projectile, missile +projector +puck, hockey puck +punching bag, punch bag, punching ball, punchball +purse +quill, quill pen +quilt, comforter, comfort, puff +racer, race car, racing car +racket, racquet +radiator +radio, wireless +radio telescope, radio reflector +rain barrel +recreational vehicle, RV, R.V. +reel +reflex camera +refrigerator, icebox +remote control, remote +restaurant, eating house, eating place, eatery +revolver, six-gun, six-shooter +rifle +rocking chair, rocker +rotisserie +rubber eraser, rubber, pencil eraser +rugby ball +rule, ruler +running shoe +safe +safety pin +saltshaker, salt shaker +sandal +sarong +sax, saxophone +scabbard +scale, weighing machine +school bus +schooner +scoreboard +screen, CRT screen +screw +screwdriver +seat belt, seatbelt +sewing machine +shield, buckler +shoe shop, shoe-shop, shoe store +shoji +shopping basket +shopping cart +shovel +shower cap +shower curtain +ski +ski mask +sleeping bag +slide rule, slipstick +sliding door +slot, one-armed bandit +snorkel +snowmobile +snowplow, snowplough +soap dispenser +soccer ball +sock +solar dish, solar collector, solar furnace +sombrero +soup bowl +space bar +space heater +space shuttle +spatula +speedboat +spider web, spider's web +spindle +sports car, sport car +spotlight, spot +stage +steam locomotive +steel arch bridge +steel drum +stethoscope +stole +stone wall +stopwatch, stop watch +stove +strainer +streetcar, tram, tramcar, trolley, trolley car +stretcher +studio couch, day bed +stupa, tope +submarine, pigboat, sub, U-boat +suit, suit of clothes +sundial +sunglass +sunglasses, dark glasses, shades +sunscreen, sunblock, sun blocker +suspension bridge +swab, swob, mop +sweatshirt +swimming trunks, bathing trunks +swing +switch, electric switch, electrical switch +syringe +table lamp +tank, army tank, armored combat vehicle, armoured combat vehicle +tape player +teapot +teddy, teddy bear +television, television system +tennis ball +thatch, thatched roof +theater curtain, theatre curtain +thimble +thresher, thrasher, threshing machine +throne +tile roof +toaster +tobacco shop, tobacconist shop, tobacconist +toilet seat +torch +totem pole +tow truck, tow car, wrecker +toyshop +tractor +trailer truck, tractor trailer, trucking rig, rig, articulated lorry, semi +tray +trench coat +tricycle, trike, velocipede +trimaran +tripod +triumphal arch +trolleybus, trolley coach, trackless trolley +trombone +tub, vat +turnstile +typewriter keyboard +umbrella +unicycle, monocycle +upright, upright piano +vacuum, vacuum cleaner +vase +vault +velvet +vending machine +vestment +viaduct +violin, fiddle +volleyball +waffle iron +wall clock +wallet, billfold, notecase, pocketbook +wardrobe, closet, press +warplane, military plane +washbasin, handbasin, washbowl, lavabo, wash-hand basin +washer, automatic washer, washing machine +water bottle +water jug +water tower +whiskey jug +whistle +wig +window screen +window 
shade +Windsor tie +wine bottle +wing +wok +wooden spoon +wool, woolen, woollen +worm fence, snake fence, snake-rail fence, Virginia fence +wreck +yawl +yurt +web site, website, internet site, site +comic book +crossword puzzle, crossword +street sign +traffic light, traffic signal, stoplight +book jacket, dust cover, dust jacket, dust wrapper +menu +plate +guacamole +consomme +hot pot, hotpot +trifle +ice cream, icecream +ice lolly, lolly, lollipop, popsicle +French loaf +bagel, beigel +pretzel +cheeseburger +hotdog, hot dog, red hot +mashed potato +head cabbage +broccoli +cauliflower +zucchini, courgette +spaghetti squash +acorn squash +butternut squash +cucumber, cuke +artichoke, globe artichoke +bell pepper +cardoon +mushroom +Granny Smith +strawberry +orange +lemon +fig +pineapple, ananas +banana +jackfruit, jak, jack +custard apple +pomegranate +hay +carbonara +chocolate sauce, chocolate syrup +dough +meat loaf, meatloaf +pizza, pizza pie +potpie +burrito +red wine +espresso +cup +eggnog +alp +bubble +cliff, drop, drop-off +coral reef +geyser +lakeside, lakeshore +promontory, headland, head, foreland +sandbar, sand bar +seashore, coast, seacoast, sea-coast +valley, vale +volcano +ballplayer, baseball player +groom, bridegroom +scuba diver +rapeseed +daisy +yellow lady's slipper, yellow lady-slipper, Cypripedium calceolus, Cypripedium parviflorum +corn +acorn +hip, rose hip, rosehip +buckeye, horse chestnut, conker +coral fungus +agaric +gyromitra +stinkhorn, carrion fungus +earthstar +hen-of-the-woods, hen of the woods, Polyporus frondosus, Grifola frondosa +bolete +ear, spike, capitulum +toilet tissue, toilet paper, bathroom tissue \ No newline at end of file diff --git a/how-to/sample_app/exe/tinyyolov2_onnx/deploy.json b/how-to/sample_app/exe/tinyyolov2_onnx/deploy.json deleted file mode 100644 index 1c495d7..0000000 --- a/how-to/sample_app/exe/tinyyolov2_onnx/deploy.json +++ /dev/null @@ -1,94 +0,0 @@ -{ - "nodes": [ - { - "op": "null", - "name": "input1", - "inputs": [] - }, - { - "op": "tvm_op", - "name": "tvmgen_default_tvmgen_default_mera_drp_0", - "attrs": { - "num_outputs": "1", - "num_inputs": "1", - "flatten_data": "0", - "func_name": "tvmgen_default_tvmgen_default_mera_drp_0" - }, - "inputs": [ - [ - 0, - 0, - 0 - ] - ] - }, - { - "op": "tvm_op", - "name": "tvmgen_default_fused_nn_pad", - "attrs": { - "num_outputs": "1", - "num_inputs": "1", - "flatten_data": "0", - "func_name": "tvmgen_default_fused_nn_pad", - "hash": "57a762f485b8c0d1" - }, - "inputs": [ - [ - 1, - 0, - 0 - ] - ] - }, - { - "op": "tvm_op", - "name": "tvmgen_default_tvmgen_default_mera_drp_13", - "attrs": { - "num_outputs": "1", - "num_inputs": "1", - "flatten_data": "0", - "func_name": "tvmgen_default_tvmgen_default_mera_drp_13" - }, - "inputs": [ - [ - 2, - 0, - 0 - ] - ] - } - ], - "arg_nodes": [0], - "heads": [ - [ - 3, - 0, - 0 - ] - ], - "attrs": { - "dltype": [ - "list_str", - [ - "float32", - "float16", - "float16", - "float16" - ] - ], - "storage_id": [ - "list_int", - [0, 1, 2, 1] - ], - "shape": [ - "list_shape", - [ - [1, 3, 416, 416], - [1, 512, 13, 13], - [1, 512, 14, 14], - [1, 125, 13, 13] - ] - ] - }, - "node_row_ptr": [0, 1, 2, 3, 4] -} \ No newline at end of file diff --git a/how-to/sample_app/exe/tinyyolov2_onnx/deploy.params b/how-to/sample_app/exe/tinyyolov2_onnx/deploy.params deleted file mode 100644 index 1011def..0000000 Binary files a/how-to/sample_app/exe/tinyyolov2_onnx/deploy.params and /dev/null differ diff --git a/how-to/sample_app/exe/tinyyolov2_onnx/deploy.so 
b/how-to/sample_app/exe/tinyyolov2_onnx/deploy.so deleted file mode 100644 index 8f036f9..0000000 Binary files a/how-to/sample_app/exe/tinyyolov2_onnx/deploy.so and /dev/null differ diff --git a/how-to/sample_app/exe/tinyyolov3_onnx/deploy.json b/how-to/sample_app/exe/tinyyolov3_onnx/deploy.json deleted file mode 100644 index 0e048f4..0000000 --- a/how-to/sample_app/exe/tinyyolov3_onnx/deploy.json +++ /dev/null @@ -1,220 +0,0 @@ -{ - "nodes": [ - { - "op": "null", - "name": "input1", - "inputs": [] - }, - { - "op": "tvm_op", - "name": "tvmgen_default_tvmgen_default_mera_drp_0", - "attrs": { - "num_outputs": "2", - "num_inputs": "1", - "flatten_data": "0", - "func_name": "tvmgen_default_tvmgen_default_mera_drp_0" - }, - "inputs": [ - [ - 0, - 0, - 0 - ] - ] - }, - { - "op": "tvm_op", - "name": "tvmgen_default_fused_nn_pad", - "attrs": { - "num_outputs": "1", - "num_inputs": "2", - "flatten_data": "0", - "func_name": "tvmgen_default_fused_nn_pad", - "hash": "a27055500ece8397" - }, - "inputs": [ - [ - 1, - 0, - 0 - ], - [ - 1, - 1, - 0 - ] - ] - }, - { - "op": "tvm_op", - "name": "tvmgen_default_mera_drp_0", - "attrs": { - "num_outputs": "2", - "num_inputs": "1", - "flatten_data": "0", - "func_name": "tvmgen_default_mera_drp_0" - }, - "inputs": [ - [ - 2, - 0, - 0 - ] - ] - }, - { - "op": "tvm_op", - "name": "tvmgen_default_fused_concatenate", - "attrs": { - "num_outputs": "1", - "num_inputs": "2", - "flatten_data": "0", - "func_name": "tvmgen_default_fused_concatenate", - "hash": "a049934a28c8210c" - }, - "inputs": [ - [ - 3, - 0, - 0 - ], - [ - 3, - 1, - 0 - ] - ] - }, - { - "op": "tvm_op", - "name": "tvmgen_default_mera_drp_10", - "attrs": { - "num_outputs": "1", - "num_inputs": "1", - "flatten_data": "0", - "func_name": "tvmgen_default_mera_drp_10" - }, - "inputs": [ - [ - 4, - 0, - 0 - ] - ] - }, - { - "op": "tvm_op", - "name": "tvmgen_default_fused_concatenate_1", - "attrs": { - "num_outputs": "1", - "num_inputs": "3", - "flatten_data": "0", - "func_name": "tvmgen_default_fused_concatenate_1", - "hash": "2cafd345ebba88e0" - }, - "inputs": [ - [ - 1, - 0, - 0 - ], - [ - 1, - 1, - 0 - ], - [ - 5, - 0, - 0 - ] - ] - }, - { - "op": "tvm_op", - "name": "tvmgen_default_mera_drp_9", - "attrs": { - "num_outputs": "2", - "num_inputs": "2", - "flatten_data": "0", - "func_name": "tvmgen_default_mera_drp_9" - }, - "inputs": [ - [ - 3, - 0, - 0 - ], - [ - 6, - 0, - 0 - ] - ] - } - ], - "arg_nodes": [0], - "heads": [ - [ - 7, - 0, - 0 - ], - [ - 7, - 1, - 0 - ] - ], - "attrs": { - "dltype": [ - "list_str", - [ - "float32", - "float16", - "float16", - "float16", - "float16", - "float16", - "float16", - "float16", - "float16", - "float16", - "float16" - ] - ], - "storage_id": [ - "list_int", - [ - 0, - 1, - 2, - 3, - 4, - 5, - 3, - 5, - 3, - 1, - 2 - ] - ], - "shape": [ - "list_shape", - [ - [1, 3, 416, 416], - [1, 512, 13, 13], - [1, 256, 26, 26], - [1, 512, 14, 14], - [1, 255, 13, 13], - [1, 256, 13, 13], - [1, 256, 13, 13], - [1, 128, 26, 26], - [1, 384, 26, 26], - [1, 255, 13, 13], - [1, 255, 26, 26] - ] - ] - }, - "node_row_ptr": [0, 1, 3, 4, 6, 7, 8, 9, 11] -} \ No newline at end of file diff --git a/how-to/sample_app/exe/tinyyolov3_onnx/deploy.params b/how-to/sample_app/exe/tinyyolov3_onnx/deploy.params deleted file mode 100644 index 1011def..0000000 Binary files a/how-to/sample_app/exe/tinyyolov3_onnx/deploy.params and /dev/null differ diff --git a/how-to/sample_app/exe/tinyyolov3_onnx/deploy.so b/how-to/sample_app/exe/tinyyolov3_onnx/deploy.so deleted file mode 100644 index 827d91d..0000000 
Binary files a/how-to/sample_app/exe/tinyyolov3_onnx/deploy.so and /dev/null differ diff --git a/how-to/sample_app/exe/ultraface_onnx/deploy.json b/how-to/sample_app/exe/ultraface_onnx/deploy.json deleted file mode 100644 index c235470..0000000 --- a/how-to/sample_app/exe/ultraface_onnx/deploy.json +++ /dev/null @@ -1,601 +0,0 @@ -{ - "nodes": [ - { - "op": "null", - "name": "input", - "inputs": [] - }, - { - "op": "tvm_op", - "name": "tvmgen_default_tvmgen_default_mera_drp_0", - "attrs": { - "num_outputs": "7", - "num_inputs": "1", - "flatten_data": "0", - "func_name": "tvmgen_default_tvmgen_default_mera_drp_0" - }, - "inputs": [ - [ - 0, - 0, - 0 - ] - ] - }, - { - "op": "null", - "name": "p205", - "inputs": [] - }, - { - "op": "null", - "name": "p206", - "inputs": [] - }, - { - "op": "tvm_op", - "name": "tvmgen_default_fused_nn_conv2d_add", - "attrs": { - "num_outputs": "1", - "num_inputs": "3", - "flatten_data": "0", - "func_name": "tvmgen_default_fused_nn_conv2d_add", - "out_layout": "", - "kernel_layout": "OIHW", - "data_layout": "NCHW", - "hash": "b95f84f24d058f30" - }, - "inputs": [ - [ - 1, - 3, - 0 - ], - [ - 2, - 0, - 0 - ], - [ - 3, - 0, - 0 - ] - ] - }, - { - "op": "tvm_op", - "name": "tvmgen_default_fused_transpose_reshape_transpose_reshape_transpose_reshape_transpose_reshape_co_7082943185696203540_", - "attrs": { - "num_outputs": "1", - "num_inputs": "8", - "flatten_data": "0", - "func_name": "tvmgen_default_fused_transpose_reshape_transpose_reshape_transpose_reshape_transpose_reshape_co_7082943185696203540_", - "hash": "90271b873a127f3e" - }, - "inputs": [ - [ - 1, - 0, - 0 - ], - [ - 1, - 1, - 0 - ], - [ - 1, - 2, - 0 - ], - [ - 1, - 3, - 0 - ], - [ - 1, - 4, - 0 - ], - [ - 1, - 5, - 0 - ], - [ - 1, - 6, - 0 - ], - [ - 4, - 0, - 0 - ] - ] - }, - { - "op": "tvm_op", - "name": "tvmgen_default_fused_max", - "attrs": { - "num_outputs": "1", - "num_inputs": "1", - "flatten_data": "0", - "func_name": "tvmgen_default_fused_max", - "hash": "abb0a1bb844b58d8" - }, - "inputs": [ - [ - 5, - 0, - 0 - ] - ] - }, - { - "op": "tvm_op", - "name": "tvmgen_default_fused_subtract_exp", - "attrs": { - "num_outputs": "1", - "num_inputs": "2", - "flatten_data": "0", - "func_name": "tvmgen_default_fused_subtract_exp", - "hash": "7002a19d00c08c59" - }, - "inputs": [ - [ - 5, - 0, - 0 - ], - [ - 6, - 0, - 0 - ] - ] - }, - { - "op": "tvm_op", - "name": "tvmgen_default_fused_sum", - "attrs": { - "num_outputs": "1", - "num_inputs": "1", - "flatten_data": "0", - "func_name": "tvmgen_default_fused_sum", - "hash": "d80ba26ba63ccc27" - }, - "inputs": [ - [ - 7, - 0, - 0 - ] - ] - }, - { - "op": "tvm_op", - "name": "tvmgen_default_fused_divide", - "attrs": { - "num_outputs": "1", - "num_inputs": "2", - "flatten_data": "0", - "func_name": "tvmgen_default_fused_divide", - "hash": "57ed172b6b136a04" - }, - "inputs": [ - [ - 7, - 0, - 0 - ], - [ - 8, - 0, - 0 - ] - ] - }, - { - "op": "null", - "name": "p207", - "inputs": [] - }, - { - "op": "null", - "name": "p208", - "inputs": [] - }, - { - "op": "tvm_op", - "name": "tvmgen_default_fused_nn_conv2d_add_1", - "attrs": { - "num_outputs": "1", - "num_inputs": "3", - "flatten_data": "0", - "func_name": "tvmgen_default_fused_nn_conv2d_add_1", - "out_layout": "", - "kernel_layout": "OIHW", - "data_layout": "NCHW", - "hash": "74e1a6cf1f792a73" - }, - "inputs": [ - [ - 1, - 3, - 0 - ], - [ - 10, - 0, - 0 - ], - [ - 11, - 0, - 0 - ] - ] - }, - { - "op": "null", - "name": "p209", - "inputs": [] - }, - { - "op": "null", - "name": "p210", - "inputs": [] - }, - { - "op": 
"null", - "name": "p211", - "inputs": [] - }, - { - "op": "null", - "name": "p212", - "inputs": [] - }, - { - "op": "null", - "name": "p213", - "inputs": [] - }, - { - "op": "tvm_op", - "name": "tvmgen_default_fused_transpose_reshape_transpose_reshape_transpose_reshape_transpose_reshape_co_6915023575542332385_", - "attrs": { - "num_outputs": "1", - "num_inputs": "13", - "flatten_data": "0", - "func_name": "tvmgen_default_fused_transpose_reshape_transpose_reshape_transpose_reshape_transpose_reshape_co_6915023575542332385_", - "hash": "3f552f2a73dca16c" - }, - "inputs": [ - [ - 1, - 0, - 0 - ], - [ - 1, - 1, - 0 - ], - [ - 1, - 2, - 0 - ], - [ - 1, - 3, - 0 - ], - [ - 1, - 4, - 0 - ], - [ - 1, - 5, - 0 - ], - [ - 1, - 6, - 0 - ], - [ - 12, - 0, - 0 - ], - [ - 13, - 0, - 0 - ], - [ - 14, - 0, - 0 - ], - [ - 15, - 0, - 0 - ], - [ - 16, - 0, - 0 - ], - [ - 17, - 0, - 0 - ] - ] - }, - { - "op": "tvm_op", - "name": "tvmgen_default_fused_strided_slice", - "attrs": { - "num_outputs": "1", - "num_inputs": "1", - "flatten_data": "0", - "func_name": "tvmgen_default_fused_strided_slice", - "hash": "cdc5ef2defa2c2ed" - }, - "inputs": [ - [ - 18, - 0, - 0 - ] - ] - }, - { - "op": "null", - "name": "p214", - "inputs": [] - }, - { - "op": "tvm_op", - "name": "tvmgen_default_fused_strided_slice_divide", - "attrs": { - "num_outputs": "1", - "num_inputs": "2", - "flatten_data": "0", - "func_name": "tvmgen_default_fused_strided_slice_divide", - "hash": "1b14862e338de6fb" - }, - "inputs": [ - [ - 18, - 0, - 0 - ], - [ - 20, - 0, - 0 - ] - ] - }, - { - "op": "tvm_op", - "name": "tvmgen_default_tvmgen_default_mera_drp_66", - "attrs": { - "num_outputs": "1", - "num_inputs": "2", - "flatten_data": "0", - "func_name": "tvmgen_default_tvmgen_default_mera_drp_66" - }, - "inputs": [ - [ - 19, - 0, - 0 - ], - [ - 21, - 0, - 0 - ] - ] - }, - { - "op": "tvm_op", - "name": "tvmgen_default_fused_subtract_concatenate", - "attrs": { - "num_outputs": "1", - "num_inputs": "3", - "flatten_data": "0", - "func_name": "tvmgen_default_fused_subtract_concatenate", - "hash": "e65843ac812d8a82" - }, - "inputs": [ - [ - 19, - 0, - 0 - ], - [ - 21, - 0, - 0 - ], - [ - 22, - 0, - 0 - ] - ] - } - ], - "arg_nodes": [ - 0, - 2, - 3, - 10, - 11, - 13, - 14, - 15, - 16, - 17, - 20 - ], - "heads": [ - [ - 9, - 0, - 0 - ], - [ - 23, - 0, - 0 - ] - ], - "attrs": { - "dltype": [ - "list_str", - [ - "float32", - "float16", - "float16", - "float16", - "float16", - "float16", - "float16", - "float16", - "float16", - "float16", - "float16", - "float16", - "float16", - "float16", - "float16", - "float16", - "float16", - "float16", - "float16", - "float16", - "float16", - "float16", - "float16", - "float16", - "float16", - "float16", - "float16", - "float16", - "float16", - "float16" - ] - ], - "storage_id": [ - "list_int", - [ - 0, - 1, - 2, - 3, - 4, - 5, - 6, - 7, - 8, - 9, - 10, - 11, - 12, - 13, - 12, - 11, - 14, - 15, - 10, - 16, - 17, - 18, - 19, - 20, - 13, - 5, - 21, - 1, - 13, - 4 - ] - ], - "shape": [ - "list_shape", - [ - [1, 3, 240, 320], - [1, 6, 30, 40], - [1, 4, 15, 20], - [1, 4, 8, 10], - [1, 256, 4, 5], - [1, 12, 30, 40], - [1, 8, 15, 20], - [1, 8, 8, 10], - [6, 256, 3, 3], - [1, 6, 1, 1], - [1, 6, 4, 5], - [1, 4420, 2], - [1, 4420, 1], - [1, 4420, 2], - [1, 4420, 1], - [1, 4420, 2], - [12, 256, 3, 3], - [1, 12, 1, 1], - [1, 12, 4, 5], - [], - [1, 4420, 2], - [1, 4420, 2], - [], - [1, 4420, 2], - [1, 4420, 4], - [1, 4420, 2], - [], - [1, 4420, 2], - [1, 4420, 2], - [1, 4420, 4] - ] - ] - }, - "node_row_ptr": [ - 0, - 1, - 8, - 9, - 10, 
- 11, - 12, - 13, - 14, - 15, - 16, - 17, - 18, - 19, - 20, - 21, - 22, - 23, - 24, - 25, - 26, - 27, - 28, - 29, - 30 - ] -} \ No newline at end of file diff --git a/how-to/sample_app/exe/ultraface_onnx/deploy.params b/how-to/sample_app/exe/ultraface_onnx/deploy.params deleted file mode 100644 index 877a655..0000000 Binary files a/how-to/sample_app/exe/ultraface_onnx/deploy.params and /dev/null differ diff --git a/how-to/sample_app/exe/ultraface_onnx/deploy.so b/how-to/sample_app/exe/ultraface_onnx/deploy.so deleted file mode 100644 index e858cdb..0000000 Binary files a/how-to/sample_app/exe/ultraface_onnx/deploy.so and /dev/null differ diff --git a/how-to/sample_app/exe/yolov2_onnx/deploy.json b/how-to/sample_app/exe/yolov2_onnx/deploy.json deleted file mode 100644 index b1e18a8..0000000 --- a/how-to/sample_app/exe/yolov2_onnx/deploy.json +++ /dev/null @@ -1,55 +0,0 @@ -{ - "nodes": [ - { - "op": "null", - "name": "input1", - "inputs": [] - }, - { - "op": "tvm_op", - "name": "tvmgen_default_tvmgen_default_mera_drp_0", - "attrs": { - "num_outputs": "1", - "num_inputs": "1", - "flatten_data": "0", - "func_name": "tvmgen_default_tvmgen_default_mera_drp_0" - }, - "inputs": [ - [ - 0, - 0, - 0 - ] - ] - } - ], - "arg_nodes": [0], - "heads": [ - [ - 1, - 0, - 0 - ] - ], - "attrs": { - "dltype": [ - "list_str", - [ - "float32", - "float16" - ] - ], - "storage_id": [ - "list_int", - [0, 1] - ], - "shape": [ - "list_shape", - [ - [1, 3, 416, 416], - [1, 125, 13, 13] - ] - ] - }, - "node_row_ptr": [0, 1, 2] -} \ No newline at end of file diff --git a/how-to/sample_app/exe/yolov2_onnx/deploy.params b/how-to/sample_app/exe/yolov2_onnx/deploy.params deleted file mode 100644 index 1011def..0000000 Binary files a/how-to/sample_app/exe/yolov2_onnx/deploy.params and /dev/null differ diff --git a/how-to/sample_app/exe/yolov2_onnx/deploy.so b/how-to/sample_app/exe/yolov2_onnx/deploy.so deleted file mode 100644 index 268d6ca..0000000 --- a/how-to/sample_app/exe/yolov2_onnx/deploy.so +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:adddc94ba26da324055c8a6813e2ebd0b0e2dc545736b368c32f60c83afa6186 -size 204023592 diff --git a/how-to/sample_app/exe/yolov3_onnx/deploy.json b/how-to/sample_app/exe/yolov3_onnx/deploy.json deleted file mode 100644 index e4bb697..0000000 --- a/how-to/sample_app/exe/yolov3_onnx/deploy.json +++ /dev/null @@ -1,69 +0,0 @@ -{ - "nodes": [ - { - "op": "null", - "name": "input1", - "inputs": [] - }, - { - "op": "tvm_op", - "name": "tvmgen_default_tvmgen_default_mera_drp_0", - "attrs": { - "num_outputs": "3", - "num_inputs": "1", - "flatten_data": "0", - "func_name": "tvmgen_default_tvmgen_default_mera_drp_0" - }, - "inputs": [ - [ - 0, - 0, - 0 - ] - ] - } - ], - "arg_nodes": [0], - "heads": [ - [ - 1, - 0, - 0 - ], - [ - 1, - 1, - 0 - ], - [ - 1, - 2, - 0 - ] - ], - "attrs": { - "dltype": [ - "list_str", - [ - "float32", - "float16", - "float16", - "float16" - ] - ], - "storage_id": [ - "list_int", - [0, 1, 2, 3] - ], - "shape": [ - "list_shape", - [ - [1, 3, 416, 416], - [1, 255, 13, 13], - [1, 255, 26, 26], - [1, 255, 52, 52] - ] - ] - }, - "node_row_ptr": [0, 1, 4] -} \ No newline at end of file diff --git a/how-to/sample_app/exe/yolov3_onnx/deploy.params b/how-to/sample_app/exe/yolov3_onnx/deploy.params deleted file mode 100644 index 1011def..0000000 Binary files a/how-to/sample_app/exe/yolov3_onnx/deploy.params and /dev/null differ diff --git a/how-to/sample_app/exe/yolov3_onnx/deploy.so b/how-to/sample_app/exe/yolov3_onnx/deploy.so deleted file 
mode 100644 index d421c5f..0000000 --- a/how-to/sample_app/exe/yolov3_onnx/deploy.so +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:929530440514592ebecc7201ee2f3f52202a5be102238577c4ef23919c7963ee -size 249124648 diff --git a/how-to/sample_app/src/camera/camera.cpp b/how-to/sample_app/src/camera/camera.cpp index 9b19cf1..7051f33 100644 --- a/how-to/sample_app/src/camera/camera.cpp +++ b/how-to/sample_app/src/camera/camera.cpp @@ -18,7 +18,7 @@ ***********************************************************************************************************************/ /*********************************************************************************************************************** * File Name : camera.cpp -* Version : 1.0.2 +* Version : 1.0.3 * Description : RZ/V2MA DRP-AI TVM[*1] Sample Application for USB Camera HTTP version * *1 DRP-AI TVM is powered by EdgeCortix MERA(TM) Compiler Framework. ***********************************************************************************************************************/ diff --git a/how-to/sample_app/src/camera/camera.h b/how-to/sample_app/src/camera/camera.h index 6f88d3c..20ad760 100644 --- a/how-to/sample_app/src/camera/camera.h +++ b/how-to/sample_app/src/camera/camera.h @@ -18,7 +18,7 @@ ***********************************************************************************************************************/ /*********************************************************************************************************************** * File Name : camera.h -* Version : 1.0.2 +* Version : 1.0.3 * Description : RZ/V2MA DRP-AI TVM[*1] Sample Application for USB Camera HTTP version * *1 DRP-AI TVM is powered by EdgeCortix MERA(TM) Compiler Framework. ***********************************************************************************************************************/ diff --git a/how-to/sample_app/src/camera/define.h b/how-to/sample_app/src/camera/define.h index 307cc20..946c510 100644 --- a/how-to/sample_app/src/camera/define.h +++ b/how-to/sample_app/src/camera/define.h @@ -18,7 +18,7 @@ ***********************************************************************************************************************/ /*********************************************************************************************************************** * File Name : define.h -* Version : 1.0.2 +* Version : 1.0.3 * Description : RZ/V2MA DRP-AI TVM[*1] Sample Application for USB Camera HTTP version * *1 DRP-AI TVM is powered by EdgeCortix MERA(TM) Compiler Framework. ***********************************************************************************************************************/ diff --git a/how-to/sample_app/src/command/app_message.h b/how-to/sample_app/src/command/app_message.h new file mode 100755 index 0000000..206314c --- /dev/null +++ b/how-to/sample_app/src/command/app_message.h @@ -0,0 +1,59 @@ +/*********************************************************************************************************************** +* DISCLAIMER +* This software is supplied by Renesas Electronics Corporation and is only intended for use with Renesas products. No +* other uses are authorized. This software is owned by Renesas Electronics Corporation and is protected under all +* applicable laws, including copyright laws. 
+* THIS SOFTWARE IS PROVIDED "AS IS" AND RENESAS MAKES NO WARRANTIES REGARDING
+* THIS SOFTWARE, WHETHER EXPRESS, IMPLIED OR STATUTORY, INCLUDING BUT NOT LIMITED TO WARRANTIES OF MERCHANTABILITY,
+* FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. ALL SUCH WARRANTIES ARE EXPRESSLY DISCLAIMED. TO THE MAXIMUM
+* EXTENT PERMITTED NOT PROHIBITED BY LAW, NEITHER RENESAS ELECTRONICS CORPORATION NOR ANY OF ITS AFFILIATED COMPANIES
+* SHALL BE LIABLE FOR ANY DIRECT, INDIRECT, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES FOR ANY REASON RELATED TO THIS
+* SOFTWARE, EVEN IF RENESAS OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
+* Renesas reserves the right, without notice, to make changes to this software and to discontinue the availability of
+* this software. By using this software, you agree to the additional terms and conditions found by accessing the
+* following link:
+* http://www.renesas.com/disclaimer
+*
+* Copyright (C) 2022 Renesas Electronics Corporation. All rights reserved.
+***********************************************************************************************************************/
+/***********************************************************************************************************************
+* File Name : app_message.h
+* Version : 1.0.3
+* Description : RZ/V2MA DRP-AI TVM[*1] Sample Application for USB Camera HTTP version
+* *1 DRP-AI TVM is powered by EdgeCortix MERA(TM) Compiler Framework.
+***********************************************************************************************************************/
+
+#pragma once
+
+#ifndef APPMESSAGE_H
+#define APPMESSAGE_H
+
+/*****************************************
+* Includes
+******************************************/
+
+#include "../includes.h"
+#include "command_base.h"
+using namespace std;
+class AppMessage :public CommandBase
+{
+public:
+    AppMessage() :CommandBase("app_message") {}
+    virtual ~AppMessage() {}
+    virtual string CreateRequest(void)
+    {
+        return CommandCreateHelper::SerializeCommandBody(*this);
+    }
+    template <class Archive>
+    void save(Archive& archive) const
+    {
+        // register params
+        archive(
+            CEREAL_NVP(message)
+        );
+    }
+public:
+    std::string message;
+};
+
+#endif
diff --git a/how-to/sample_app/src/command/bbox_t.h b/how-to/sample_app/src/command/bbox_t.h
index 9b44065..2091192 100644
--- a/how-to/sample_app/src/command/bbox_t.h
+++ b/how-to/sample_app/src/command/bbox_t.h
@@ -18,7 +18,7 @@
 ***********************************************************************************************************************/
 /***********************************************************************************************************************
 * File Name : bbox_t.h
-* Version : 1.0.2
+* Version : 1.0.3
 * Description : RZ/V2MA DRP-AI TVM[*1] Sample Application for USB Camera HTTP version
 * *1 DRP-AI TVM is powered by EdgeCortix MERA(TM) Compiler Framework.
***********************************************************************************************************************/ diff --git a/how-to/sample_app/src/command/camera_image.h b/how-to/sample_app/src/command/camera_image.h index ee38c72..047f20e 100644 --- a/how-to/sample_app/src/command/camera_image.h +++ b/how-to/sample_app/src/command/camera_image.h @@ -18,7 +18,7 @@ ***********************************************************************************************************************/ /*********************************************************************************************************************** * File Name : camera_image.h -* Version : 1.0.2 +* Version : 1.0.3 * Description : RZ/V2MA DRP-AI TVM[*1] Sample Application for USB Camera HTTP version * *1 DRP-AI TVM is powered by EdgeCortix MERA(TM) Compiler Framework. ***********************************************************************************************************************/ diff --git a/how-to/sample_app/src/command/change_model.h b/how-to/sample_app/src/command/change_model.h index 306b35e..f12c891 100644 --- a/how-to/sample_app/src/command/change_model.h +++ b/how-to/sample_app/src/command/change_model.h @@ -19,7 +19,7 @@ ***********************************************************************************************************************/ /*********************************************************************************************************************** * File Name : change_model.h -* Version : 1.0.2 +* Version : 1.0.3 * Description : RZ/V2MA DRP-AI TVM[*1] Sample Application for USB Camera HTTP version * *1 DRP-AI TVM is powered by EdgeCortix MERA(TM) Compiler Framework. ***********************************************************************************************************************/ diff --git a/how-to/sample_app/src/command/classification.h b/how-to/sample_app/src/command/classification.h new file mode 100755 index 0000000..50b90a5 --- /dev/null +++ b/how-to/sample_app/src/command/classification.h @@ -0,0 +1,70 @@ +/*********************************************************************************************************************** +* DISCLAIMER +* This software is supplied by Renesas Electronics Corporation and is only intended for use with Renesas products. No +* other uses are authorized. This software is owned by Renesas Electronics Corporation and is protected under all +* applicable laws, including copyright laws. +* THIS SOFTWARE IS PROVIDED "AS IS" AND RENESAS MAKES NO WARRANTIES REGARDING +* THIS SOFTWARE, WHETHER EXPRESS, IMPLIED OR STATUTORY, INCLUDING BUT NOT LIMITED TO WARRANTIES OF MERCHANTABILITY, +* FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. ALL SUCH WARRANTIES ARE EXPRESSLY DISCLAIMED. TO THE MAXIMUM +* EXTENT PERMITTED NOT PROHIBITED BY LAW, NEITHER RENESAS ELECTRONICS CORPORATION NOR ANY OF ITS AFFILIATED COMPANIES +* SHALL BE LIABLE FOR ANY DIRECT, INDIRECT, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES FOR ANY REASON RELATED TO THIS +* SOFTWARE, EVEN IF RENESAS OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. +* Renesas reserves the right, without notice, to make changes to this software and to discontinue the availability of +* this software. By using this software, you agree to the additional terms and conditions found by accessing the +* following link: +* http://www.renesas.com/disclaimer +* +* Copyright (C) 2022 Renesas Electronics Corporation. All rights reserved. 
+***********************************************************************************************************************/
+/***********************************************************************************************************************
+* File Name : classification.h
+* Version : 1.0.3
+* Description : RZ/V2MA DRP-AI TVM[*1] Sample Application for USB Camera HTTP version
+* *1 DRP-AI TVM is powered by EdgeCortix MERA(TM) Compiler Framework.
+***********************************************************************************************************************/
+
+#pragma once
+#ifndef CLASSIFICATION_H
+#define CLASSIFICATION_H
+/*****************************************
+* Includes
+******************************************/
+
+#include "predict_notify_base.h"
+#include "classify_t.h"
+using namespace std;
+class Classification :public PredictNotifyBase
+{
+public:
+    Classification() :PredictNotifyBase("classification")
+    {
+
+    }
+    virtual ~Classification() {}
+
+    virtual std::string CreateRequest(void)
+    {
+        return CommandCreateHelper::SerializeCommandBody(*this);
+    }
+    template <class Archive>
+    void save(Archive& archive) const
+    {
+        // register parameters
+        archive(
+            CEREAL_NVP(img),
+            CEREAL_NVP(img_org_w),
+            CEREAL_NVP(img_org_h),
+            CEREAL_NVP(predict),
+            CEREAL_NVP(drp_time),
+            CEREAL_NVP(post_time),
+            CEREAL_NVP(pre_time)
+        );
+    }
+
+
+
+public:
+    vector<classify_t> predict;
+};
+
+#endif
diff --git a/how-to/sample_app/src/command/classify_t.h b/how-to/sample_app/src/command/classify_t.h
new file mode 100755
index 0000000..7de28fc
--- /dev/null
+++ b/how-to/sample_app/src/command/classify_t.h
@@ -0,0 +1,53 @@
+/***********************************************************************************************************************
+* DISCLAIMER
+* This software is supplied by Renesas Electronics Corporation and is only intended for use with Renesas products. No
+* other uses are authorized. This software is owned by Renesas Electronics Corporation and is protected under all
+* applicable laws, including copyright laws.
+* THIS SOFTWARE IS PROVIDED "AS IS" AND RENESAS MAKES NO WARRANTIES REGARDING
+* THIS SOFTWARE, WHETHER EXPRESS, IMPLIED OR STATUTORY, INCLUDING BUT NOT LIMITED TO WARRANTIES OF MERCHANTABILITY,
+* FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. ALL SUCH WARRANTIES ARE EXPRESSLY DISCLAIMED. TO THE MAXIMUM
+* EXTENT PERMITTED NOT PROHIBITED BY LAW, NEITHER RENESAS ELECTRONICS CORPORATION NOR ANY OF ITS AFFILIATED COMPANIES
+* SHALL BE LIABLE FOR ANY DIRECT, INDIRECT, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES FOR ANY REASON RELATED TO THIS
+* SOFTWARE, EVEN IF RENESAS OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
+* Renesas reserves the right, without notice, to make changes to this software and to discontinue the availability of
+* this software. By using this software, you agree to the additional terms and conditions found by accessing the
+* following link:
+* http://www.renesas.com/disclaimer
+*
+* Copyright (C) 2022 Renesas Electronics Corporation. All rights reserved.
+***********************************************************************************************************************/ + +#ifndef CLASSIFY_H +#define CLASSIFY_H + +/***************************************** +* Includes +******************************************/ +#include +#include +#include + +#include +#include + +struct classify_t { + string names; + float pred; + + template + void serialize(Archive& archive) + { + archive( + CEREAL_NVP(names), + CEREAL_NVP(pred) + ); + } +}; + +#endif diff --git a/how-to/sample_app/src/command/command_base.h b/how-to/sample_app/src/command/command_base.h index e43a60d..d8e0471 100644 --- a/how-to/sample_app/src/command/command_base.h +++ b/how-to/sample_app/src/command/command_base.h @@ -18,7 +18,7 @@ ***********************************************************************************************************************/ /*********************************************************************************************************************** * File Name : command_base.h -* Version : 1.0.2 +* Version : 1.0.3 * Description : RZ/V2MA DRP-AI TVM[*1] Sample Application for USB Camera HTTP version * *1 DRP-AI TVM is powered by EdgeCortix MERA(TM) Compiler Framework. ***********************************************************************************************************************/ diff --git a/how-to/sample_app/src/command/command_create_helper.h b/how-to/sample_app/src/command/command_create_helper.h index 5038c15..88fd44e 100644 --- a/how-to/sample_app/src/command/command_create_helper.h +++ b/how-to/sample_app/src/command/command_create_helper.h @@ -18,7 +18,7 @@ ***********************************************************************************************************************/ /*********************************************************************************************************************** * File Name : command_create_helper.h -* Version : 1.0.2 +* Version : 1.0.3 * Description : RZ/V2MA DRP-AI TVM[*1] Sample Application for USB Camera HTTP version * *1 DRP-AI TVM is powered by EdgeCortix MERA(TM) Compiler Framework. ***********************************************************************************************************************/ diff --git a/how-to/sample_app/src/command/cpu_usage.h b/how-to/sample_app/src/command/cpu_usage.h index b710fd3..7610741 100644 --- a/how-to/sample_app/src/command/cpu_usage.h +++ b/how-to/sample_app/src/command/cpu_usage.h @@ -18,7 +18,7 @@ ***********************************************************************************************************************/ /*********************************************************************************************************************** * File Name : cpu_usage.h -* Version : 1.0.2 +* Version : 1.0.3 * Description : RZ/V2MA DRP-AI TVM[*1] Sample Application for USB Camera HTTP version * *1 DRP-AI TVM is powered by EdgeCortix MERA(TM) Compiler Framework. 
***********************************************************************************************************************/ diff --git a/how-to/sample_app/src/command/object_detection.h b/how-to/sample_app/src/command/object_detection.h index bddbee8..5fd92e3 100644 --- a/how-to/sample_app/src/command/object_detection.h +++ b/how-to/sample_app/src/command/object_detection.h @@ -18,7 +18,7 @@ ***********************************************************************************************************************/ /*********************************************************************************************************************** * File Name : object_detection.h -* Version : 1.0.2 +* Version : 1.0.3 * Description : RZ/V2MA DRP-AI TVM[*1] Sample Application for USB Camera HTTP version * *1 DRP-AI TVM is powered by EdgeCortix MERA(TM) Compiler Framework. ***********************************************************************************************************************/ diff --git a/how-to/sample_app/src/command/pose_detection.h b/how-to/sample_app/src/command/pose_detection.h index b9d65b3..7e87fbc 100644 --- a/how-to/sample_app/src/command/pose_detection.h +++ b/how-to/sample_app/src/command/pose_detection.h @@ -18,7 +18,7 @@ ***********************************************************************************************************************/ /*********************************************************************************************************************** * File Name : pose_detection.h -* Version : 1.0.2 +* Version : 1.0.3 * Description : RZ/V2MA DRP-AI TVM[*1] Sample Application for USB Camera HTTP version * *1 DRP-AI TVM is powered by EdgeCortix MERA(TM) Compiler Framework. ***********************************************************************************************************************/ diff --git a/how-to/sample_app/src/command/predict_notify_base.h b/how-to/sample_app/src/command/predict_notify_base.h index 710540b..58a5af7 100644 --- a/how-to/sample_app/src/command/predict_notify_base.h +++ b/how-to/sample_app/src/command/predict_notify_base.h @@ -18,7 +18,7 @@ ***********************************************************************************************************************/ /*********************************************************************************************************************** * File Name : predict_notify_base.h -* Version : 1.0.2 +* Version : 1.0.3 * Description : RZ/V2MA DRP-AI TVM[*1] Sample Application for USB Camera HTTP version * *1 DRP-AI TVM is powered by EdgeCortix MERA(TM) Compiler Framework. ***********************************************************************************************************************/ diff --git a/how-to/sample_app/src/image_converter.cpp b/how-to/sample_app/src/image_converter.cpp index 9fe2a47..964c0ed 100644 --- a/how-to/sample_app/src/image_converter.cpp +++ b/how-to/sample_app/src/image_converter.cpp @@ -18,7 +18,7 @@ ***********************************************************************************************************************/ /*********************************************************************************************************************** * File Name : image_converter.cpp -* Version : 1.0.2 +* Version : 1.0.3 * Description : RZ/V2MA DRP-AI TVM[*1] Sample Application for USB Camera HTTP version * *1 DRP-AI TVM is powered by EdgeCortix MERA(TM) Compiler Framework. 
***********************************************************************************************************************/ @@ -54,7 +54,7 @@ void ImageConverter::compress_jpeg_turbo(uint8_t* input, uint32_t h; uint32_t w = width * BYTE_PER_PIX; - /* YUV(4:2:0) Buffer */ + /* YUV(4:2:2) Buffer */ vector all_array((height * width) + ((height * width) / BYTE_PER_PIX) + ((height * width) / BYTE_PER_PIX)); int32_t yoff = 0; int32_t uoff = height * width; @@ -71,7 +71,7 @@ void ImageConverter::compress_jpeg_turbo(uint8_t* input, Measuretime mm("YUV extract time"); for (h = 0; h < height; h++) { - /* align to 4:2:0 + /* align to 4:2:2 YYYYYYYYY YYYYYYYYY ..... diff --git a/how-to/sample_app/src/image_converter.h b/how-to/sample_app/src/image_converter.h index b2e9a1f..f0a6d40 100644 --- a/how-to/sample_app/src/image_converter.h +++ b/how-to/sample_app/src/image_converter.h @@ -18,7 +18,7 @@ ***********************************************************************************************************************/ /*********************************************************************************************************************** * File Name : image_converter.h -* Version : 1.0.2 +* Version : 1.0.3 * Description : RZ/V2MA DRP-AI TVM[*1] Sample Application for USB Camera HTTP version * *1 DRP-AI TVM is powered by EdgeCortix MERA(TM) Compiler Framework. ***********************************************************************************************************************/ diff --git a/how-to/sample_app/src/includes.h b/how-to/sample_app/src/includes.h index 60476e5..3fd6eca 100644 --- a/how-to/sample_app/src/includes.h +++ b/how-to/sample_app/src/includes.h @@ -18,7 +18,7 @@ ***********************************************************************************************************************/ /*********************************************************************************************************************** * File Name : includes.h -* Version : 1.0.2 +* Version : 1.0.3 * Description : RZ/V2MA DRP-AI TVM[*1] Sample Application for USB Camera HTTP version * *1 DRP-AI TVM is powered by EdgeCortix MERA(TM) Compiler Framework. ***********************************************************************************************************************/ diff --git a/how-to/sample_app/src/main.cpp b/how-to/sample_app/src/main.cpp index 2320852..e8b133a 100644 --- a/how-to/sample_app/src/main.cpp +++ b/how-to/sample_app/src/main.cpp @@ -18,7 +18,7 @@ ***********************************************************************************************************************/ /*********************************************************************************************************************** * File Name : main.cpp -* Version : 1.0.2 +* Version : 1.0.3 * Description : RZ/V2MA DRP-AI TVM[*1] Sample Application for USB Camera HTTP version * *1 DRP-AI TVM is powered by EdgeCortix MERA(TM) Compiler Framework. 
***********************************************************************************************************************/ diff --git a/how-to/sample_app/src/recognize/common/box.cpp b/how-to/sample_app/src/recognize/common/box.cpp index 68b2fca..02f64c7 100644 --- a/how-to/sample_app/src/recognize/common/box.cpp +++ b/how-to/sample_app/src/recognize/common/box.cpp @@ -18,7 +18,7 @@ ***********************************************************************************************************************/ /*********************************************************************************************************************** * File Name : box.cpp -* Version : 1.0.2 +* Version : 1.0.3 * Description : RZ/V2MA DRP-AI TVM[*1] Sample Application for USB Camera HTTP version * *1 DRP-AI TVM is powered by EdgeCortix MERA(TM) Compiler Framework. ***********************************************************************************************************************/ diff --git a/how-to/sample_app/src/recognize/common/box.h b/how-to/sample_app/src/recognize/common/box.h index 13b1246..a18e127 100644 --- a/how-to/sample_app/src/recognize/common/box.h +++ b/how-to/sample_app/src/recognize/common/box.h @@ -18,7 +18,7 @@ ***********************************************************************************************************************/ /*********************************************************************************************************************** * File Name : box.h -* Version : 1.0.2 +* Version : 1.0.3 * Description : RZ/V2MA DRP-AI TVM[*1] Sample Application for USB Camera HTTP version * *1 DRP-AI TVM is powered by EdgeCortix MERA(TM) Compiler Framework. ***********************************************************************************************************************/ diff --git a/how-to/sample_app/src/recognize/common/functions.h b/how-to/sample_app/src/recognize/common/functions.h index c9a98fe..36d205f 100644 --- a/how-to/sample_app/src/recognize/common/functions.h +++ b/how-to/sample_app/src/recognize/common/functions.h @@ -18,7 +18,7 @@ ***********************************************************************************************************************/ /*********************************************************************************************************************** * File Name : functions.h -* Version : 1.0.2 +* Version : 1.0.3 * Description : RZ/V2MA DRP-AI TVM[*1] Sample Application for USB Camera HTTP version * *1 DRP-AI TVM is powered by EdgeCortix MERA(TM) Compiler Framework. ***********************************************************************************************************************/ @@ -30,7 +30,7 @@ * Includes ******************************************/ #include "../../includes.h" -using namespace std; + class CommonFunc { public : @@ -42,7 +42,7 @@ public : */ static double sigmoid(double x) { - return 1.0 / (1.0 + exp(-x)); + return 1.0 / (1.0 + std::exp(-x)); } /** @@ -63,7 +63,7 @@ public : for (i = 0; i < num_class; i++) { - val[i] = (float)exp(val[i] - max_num); + val[i] = (float)std::exp(val[i] - max_num); sum += val[i]; } @@ -73,6 +73,40 @@ public : } return; } + + + /** + * @brief load_label_file + * @details Load label list text file and return the label list that contains the label. + * @param label_file_name filename of label list. must be in txt format + * @return vector list contains labels. 
empty if error occured + */ + static std::vector load_label_file(std::string label_file_name) + { + std::vector list = {}; + std::vector empty = {}; + std::ifstream infile(label_file_name); + + if (!infile.is_open()) + { + std::cerr << "[ERROR] Failed to open label list txt : " << label_file_name << std::endl; + return list; + } + + + std::string line = ""; + while (getline(infile, line)) + { + list.push_back(line); + if (infile.fail()) + { + std::cerr << "[ERROR] Failed to read label list txt : " << label_file_name << std::endl; + return empty; + } + } + + return list; + } }; #endif diff --git a/how-to/sample_app/src/recognize/common/yolo_common.h b/how-to/sample_app/src/recognize/common/object_detection.h old mode 100644 new mode 100755 similarity index 83% rename from how-to/sample_app/src/recognize/common/yolo_common.h rename to how-to/sample_app/src/recognize/common/object_detection.h index 0b1b1f8..ae97155 --- a/how-to/sample_app/src/recognize/common/yolo_common.h +++ b/how-to/sample_app/src/recognize/common/object_detection.h @@ -17,22 +17,22 @@ * Copyright (C) 2022 Renesas Electronics Corporation. All rights reserved. ***********************************************************************************************************************/ /*********************************************************************************************************************** -* File Name : yolo_common.h -* Version : 1.0.2 +* File Name : object_detection.h +* Version : 1.0.3 * Description : RZ/V2MA DRP-AI TVM[*1] Sample Application for USB Camera HTTP version * *1 DRP-AI TVM is powered by EdgeCortix MERA(TM) Compiler Framework. ***********************************************************************************************************************/ #pragma once -#ifndef YOLO_COMMON_H -#define YOLO_COMMON_H +#ifndef COMMON_OBJECT_DETECTION_H +#define COMMON_OBJECT_DETECTION_H /***************************************** * Includes ******************************************/ #include "../../includes.h" -class YoloCommon +class ObjectDetectionFunc { public: /** @@ -74,37 +74,6 @@ class YoloCommon return prev_layer_num + b * (numClass + 5) * num * num + y * num + x; } - /** - * @brief load_label_file - * @details Load label list text file and return the label list that contains the label. - * @param label_file_name filename of label list. must be in txt format - * @return vector list contains labels. empty if error occured - */ - static vector load_label_file(string label_file_name) - { - vector list = {}; - vector empty = {}; - ifstream infile(label_file_name); - - if (!infile.is_open()) - { - return list; - } - std::cout << "Yolo Label list file opened! 
: " << label_file_name << std::endl; - - string line = ""; - while (getline(infile, line)) - { - list.push_back(line); - if (infile.fail()) - { - return empty; - } - } - - return list; - } - /** * @brief print_boxes * @details Function to printout details of detected bounding boxes to standard output @@ -148,4 +117,4 @@ class YoloCommon printf(" Bounding Box Count : %d\n", real_count); } }; -#endif // !YOLO_COMMON_H +#endif // !COMMON_OBJECT_DETECTION_H diff --git a/how-to/sample_app/src/recognize/common/pos.h b/how-to/sample_app/src/recognize/common/pos.h index 31f9697..f116fd9 100644 --- a/how-to/sample_app/src/recognize/common/pos.h +++ b/how-to/sample_app/src/recognize/common/pos.h @@ -18,7 +18,7 @@ ***********************************************************************************************************************/ /*********************************************************************************************************************** * File Name : pos.h -* Version : 1.0.2 +* Version : 1.0.3 * Description : RZ/V2MA DRP-AI TVM[*1] Sample Application for USB Camera HTTP version * *1 DRP-AI TVM is powered by EdgeCortix MERA(TM) Compiler Framework. ***********************************************************************************************************************/ diff --git a/how-to/sample_app/src/recognize/common/recognize_define.h b/how-to/sample_app/src/recognize/common/recognize_define.h index e1bd99b..f70bbc5 100644 --- a/how-to/sample_app/src/recognize/common/recognize_define.h +++ b/how-to/sample_app/src/recognize/common/recognize_define.h @@ -18,7 +18,7 @@ ***********************************************************************************************************************/ /*********************************************************************************************************************** * File Name : recognize_define.h -* Version : 1.0.2 +* Version : 1.0.3 * Description : RZ/V2MA DRP-AI TVM[*1] Sample Application for USB Camera HTTP version * *1 DRP-AI TVM is powered by EdgeCortix MERA(TM) Compiler Framework. ***********************************************************************************************************************/ @@ -36,6 +36,7 @@ #define TENTATIVE /*Will be deleted in the future */ /*Define Mode*/ +#define MODE_TVM_UNKNOWN (0b00000000) /*For DRP-AI TVM, value must be more than or equal to 0b10000000(128) */ #define MODE_TVM_MIN (0b10000000) /*For DRP-AI TVM DRP-AI mode, LSB must be 0. 
*/ @@ -54,5 +55,11 @@ #define MODE_TVM_HRNET_CPU (0b10001011) #define MODE_TVM_ULTRAFACE_DRPAI (0b10001100) #define MODE_TVM_ULTRAFACE_CPU (0b10001101) +#define MODE_TVM_HRNETV2_DRPAI (0b10001110) +#define MODE_TVM_HRNETV2_CPU (0b10001111) +#define MODE_TVM_GOOGLENET_DRPAI (0b10010000) +#define MODE_TVM_GOOGLENET_CPU (0b10010001) +#define MODE_TVM_EMOTIONFP_DRPAI (0b10010010) +#define MODE_TVM_EMOTIONFP_CPU (0b10010011) #endif // !RECOGNIE_DEFINE_H diff --git a/how-to/sample_app/src/recognize/deeppose/tvm_cpu_deeppose.cpp b/how-to/sample_app/src/recognize/deeppose/tvm_cpu_deeppose.cpp index e0ad861..bae868e 100644 --- a/how-to/sample_app/src/recognize/deeppose/tvm_cpu_deeppose.cpp +++ b/how-to/sample_app/src/recognize/deeppose/tvm_cpu_deeppose.cpp @@ -18,7 +18,7 @@ ***********************************************************************************************************************/ /*********************************************************************************************************************** * File Name : tvm_cpu_deeppose.cpp -* Version : 1.0.2 +* Version : 1.0.3 * Description : RZ/V2MA DRP-AI TVM[*1] Sample Application for USB Camera HTTP version * *1 DRP-AI TVM is powered by EdgeCortix MERA(TM) Compiler Framework. ***********************************************************************************************************************/ @@ -39,17 +39,33 @@ TVM_DeepPose_CPU::TVM_DeepPose_CPU() : image_resize.create(TVM_MODEL_IN_H, TVM_MODEL_IN_W, CV_8UC3); image_float.create(TVM_MODEL_IN_H, TVM_MODEL_IN_W, CV_32FC3); } - /** - * @brief inf_pre_process_cpu - * @details Run pre-processing using CPU + * @brief inf_pre_process + * @details Run pre-processing. + * @details For CPU input, use input_data for input data. + * @details For DRP-AI input, use addr for input data stored address * @param input_data Input data pointer - * @param output_buf Output data buffer pointer holder + * @param width new input data width. + * @param height new input data width. 
+ * @param addr Physical address of input data buffer + * @param out output_buf Output data buffer pointer holder + * @param out buf_size Output data buffer size holder * @return int32_t success:0 error: != 0 */ -int32_t TVM_DeepPose_CPU::inf_pre_process_cpu(uint8_t* input_data, float** output_buf) +int32_t TVM_DeepPose_CPU:: inf_pre_process(uint8_t* input_data, uint32_t width, uint32_t height, uint32_t addr, float** arg, uint32_t* buf_size) { - pre_process_cpu(input_data, output_buf); + /*Update width and height*/ + if ((width != _capture_w) || (height != _capture_h)) + { + _capture_w = width; + _capture_h = height; + image.release(); + image.create(_capture_h, _capture_w, CV_8UC2); + image_rgb.release(); + image_rgb.create(_capture_h, _capture_w, CV_8UC3); + } + + pre_process_cpu(input_data, arg); return 0; } /** @@ -83,7 +99,7 @@ int32_t TVM_DeepPose_CPU::print_result() /** * @brief get_command * @details Prepare the command to send via HTTP - * @return shared_ptr Pose detection print_result + * @return shared_ptr result data */ shared_ptr TVM_DeepPose_CPU::get_command() { diff --git a/how-to/sample_app/src/recognize/deeppose/tvm_cpu_deeppose.h b/how-to/sample_app/src/recognize/deeppose/tvm_cpu_deeppose.h index aabd6dd..4c23ce8 100644 --- a/how-to/sample_app/src/recognize/deeppose/tvm_cpu_deeppose.h +++ b/how-to/sample_app/src/recognize/deeppose/tvm_cpu_deeppose.h @@ -18,7 +18,7 @@ ***********************************************************************************************************************/ /*********************************************************************************************************************** * File Name : tvm_cpu_deeppose.h -* Version : 1.0.2 +* Version : 1.0.3 * Description : RZ/V2MA DRP-AI TVM[*1] Sample Application for USB Camera HTTP version * *1 DRP-AI TVM is powered by EdgeCortix MERA(TM) Compiler Framework. 
***********************************************************************************************************************/ @@ -39,17 +39,9 @@ class TVM_DeepPose_CPU : public IRecognizeModel { private: - -#ifdef MODEL_VGA constexpr static string_view TVM_MODEL_DIR = "face_deeppose_cpu"; constexpr static int32_t TVM_DRPAI_IN_WIDTH = (640); constexpr static int32_t TVM_DRPAI_IN_HEIGHT = (480); -#else - constexpr static string_view TVM_MODEL_DIR = "face_deeppose_cpu_fhd"; - constexpr static int32_t TVM_DRPAI_IN_WIDTH = (1920); - constexpr static int32_t TVM_DRPAI_IN_HEIGHT = (1080); - -#endif /*DeepPose Related*/ constexpr static string_view MODEL_NAME = "DRP-AI TVM DeepPose (CPU)"; @@ -64,7 +56,7 @@ class TVM_DeepPose_CPU : public IRecognizeModel public: TVM_DeepPose_CPU(); - virtual int32_t inf_pre_process_cpu(uint8_t* input_data, float** output_buf); + virtual int32_t inf_pre_process(uint8_t* input_data, uint32_t width, uint32_t height, uint32_t addr, float** arg, uint32_t* buf_size); virtual int32_t inf_post_process(float* arg); virtual shared_ptr get_command(); virtual int32_t print_result(); diff --git a/how-to/sample_app/src/recognize/deeppose/tvm_drpai_deeppose.cpp b/how-to/sample_app/src/recognize/deeppose/tvm_drpai_deeppose.cpp index 90cc59d..7d3082c 100644 --- a/how-to/sample_app/src/recognize/deeppose/tvm_drpai_deeppose.cpp +++ b/how-to/sample_app/src/recognize/deeppose/tvm_drpai_deeppose.cpp @@ -18,7 +18,7 @@ ***********************************************************************************************************************/ /*********************************************************************************************************************** * File Name : tvm_drpai_deeppose.cpp -* Version : 1.0.2 +* Version : 1.0.3 * Description : RZ/V2MA DRP-AI TVM[*1] Sample Application for USB Camera HTTP version * *1 DRP-AI TVM is powered by EdgeCortix MERA(TM) Compiler Framework. ***********************************************************************************************************************/ @@ -50,32 +50,33 @@ TVM_DeepPose_DRPAI::TVM_DeepPose_DRPAI() : in_param.cof_mul[1]= 1/(stdev[1]*255);//0.017507; in_param.cof_mul[2]= 1/(stdev[2]*255);//0.01742919; } - /** - * @brief inf_pre_process_drpai - * @details Run pre-processing using Pre-processing Runtime (DRP-AI) + * @brief inf_pre_process + * @details Run pre-processing. + * @details For CPU input, use input_data for input data. + * @details For DRP-AI input, use addr for input data stored address + * @param input_data Input data pointer + * @param width new input data width. + * @param height new input data width. 
* @param addr Physical address of input data buffer * @param out output_buf Output data buffer pointer holder * @param out buf_size Output data buffer size holder * @return int32_t success:0 error: != 0 */ -int32_t TVM_DeepPose_DRPAI::inf_pre_process_drpai(uint32_t addr, float** arg, uint32_t* buf_size) +int32_t TVM_DeepPose_DRPAI:: inf_pre_process(uint8_t* input_data, uint32_t width, uint32_t height, uint32_t addr, float** arg, uint32_t* buf_size) { + /*Update width and height*/ + if ((width != _capture_w) || (height != _capture_h)) + { + _capture_w = width; + _capture_h = height; + in_param.pre_in_shape_w = _capture_w; + in_param.pre_in_shape_h = _capture_h; + } + pre_process_drpai(addr, arg, buf_size); return 0; } -/** - * @brief inf_pre_process_cpu - * @details Run pre-processing using CPU - * @param input_data Input data pointer - * @param out output_buf Output data buffer pointer holder - * @return int32_t success:0 error: != 0 - */ -int32_t TVM_DeepPose_DRPAI:: inf_pre_process_cpu(uint8_t* input_data, float** output_buf) -{ - /*Do nothing*/ - return 0; -} /** * @brief inf_post_process * @details Run post-processing @@ -107,7 +108,7 @@ int32_t TVM_DeepPose_DRPAI::print_result() /** * @brief get_command * @details Prepare the command to send via HTTP - * @return shared_ptr Pose detection result data + * @return shared_ptr result data */ shared_ptr TVM_DeepPose_DRPAI::get_command() { diff --git a/how-to/sample_app/src/recognize/deeppose/tvm_drpai_deeppose.h b/how-to/sample_app/src/recognize/deeppose/tvm_drpai_deeppose.h index b7172b5..47db909 100644 --- a/how-to/sample_app/src/recognize/deeppose/tvm_drpai_deeppose.h +++ b/how-to/sample_app/src/recognize/deeppose/tvm_drpai_deeppose.h @@ -18,7 +18,7 @@ ***********************************************************************************************************************/ /*********************************************************************************************************************** * File Name : tvm_drpai_deeppose.h -* Version : 1.0.2 +* Version : 1.0.3 * Description : RZ/V2MA DRP-AI TVM[*1] Sample Application for USB Camera HTTP version * *1 DRP-AI TVM is powered by EdgeCortix MERA(TM) Compiler Framework. 
***********************************************************************************************************************/ @@ -39,16 +39,9 @@ class TVM_DeepPose_DRPAI : public IRecognizeModel { private: - -#ifdef MODEL_VGA constexpr static string_view TVM_MODEL_DIR = "face_deeppose_pt"; constexpr static int32_t TVM_DRPAI_IN_WIDTH = (640); constexpr static int32_t TVM_DRPAI_IN_HEIGHT = (480); -#else - constexpr static string_view TVM_MODEL_DIR = "face_deeppose_pt_fhd"; - constexpr static int32_t TVM_DRPAI_IN_WIDTH = (1920); - constexpr static int32_t TVM_DRPAI_IN_HEIGHT = (1080); -#endif /*DeepPose Related*/ constexpr static string_view MODEL_NAME = "DRP-AI TVM DeepPose (DRP-AI)"; @@ -63,8 +56,8 @@ class TVM_DeepPose_DRPAI : public IRecognizeModel public: TVM_DeepPose_DRPAI(); - virtual int32_t inf_pre_process_drpai(uint32_t addr, float** arg, uint32_t* buf_size); - virtual int32_t inf_pre_process_cpu(uint8_t* input_data, float** output_buf); + virtual int32_t inf_pre_process + (uint8_t* input_data, uint32_t width, uint32_t height, uint32_t addr, float** arg, uint32_t* buf_size); virtual int32_t inf_post_process(float* arg); virtual shared_ptr get_command(); virtual int32_t print_result(); diff --git a/how-to/sample_app/src/recognize/emotionfp/tvm_drpai_emotionfp.cpp b/how-to/sample_app/src/recognize/emotionfp/tvm_drpai_emotionfp.cpp new file mode 100755 index 0000000..cb7c108 --- /dev/null +++ b/how-to/sample_app/src/recognize/emotionfp/tvm_drpai_emotionfp.cpp @@ -0,0 +1,208 @@ +/*********************************************************************************************************************** +* DISCLAIMER +* This software is supplied by Renesas Electronics Corporation and is only intended for use with Renesas products. No +* other uses are authorized. This software is owned by Renesas Electronics Corporation and is protected under all +* applicable laws, including copyright laws. +* THIS SOFTWARE IS PROVIDED "AS IS" AND RENESAS MAKES NO WARRANTIES REGARDING +* THIS SOFTWARE, WHETHER EXPRESS, IMPLIED OR STATUTORY, INCLUDING BUT NOT LIMITED TO WARRANTIES OF MERCHANTABILITY, +* FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. ALL SUCH WARRANTIES ARE EXPRESSLY DISCLAIMED. TO THE MAXIMUM +* EXTENT PERMITTED NOT PROHIBITED BY LAW, NEITHER RENESAS ELECTRONICS CORPORATION NOR ANY OF ITS AFFILIATED COMPANIES +* SHALL BE LIABLE FOR ANY DIRECT, INDIRECT, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES FOR ANY REASON RELATED TO THIS +* SOFTWARE, EVEN IF RENESAS OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. +* Renesas reserves the right, without notice, to make changes to this software and to discontinue the availability of +* this software. By using this software, you agree to the additional terms and conditions found by accessing the +* following link: +* http://www.renesas.com/disclaimer +* +* Copyright (C) 2022 Renesas Electronics Corporation. All rights reserved. +***********************************************************************************************************************/ +/*********************************************************************************************************************** +* File Name : tvm_drpai_emotionfp.cpp +* Version : 1.0.3 +* Description : RZ/V2MA DRP-AI TVM[*1] Sample Application for USB Camera HTTP version +* *1 DRP-AI TVM is powered by EdgeCortix MERA(TM) Compiler Framework. 
+***********************************************************************************************************************/ + +/***************************************** +* Includes +******************************************/ +#include "tvm_drpai_emotionfp.h" +TVM_EmotionFP_DRPAI::TVM_EmotionFP_DRPAI() : + IRecognizeModel(TVM_MODEL_OUT_C, + TVM_MODEL_DIR.data(), MODEL_NAME.data(), + TVM_DRPAI_IN_WIDTH, TVM_DRPAI_IN_HEIGHT, TVM_DRPAI_IN_CHANNEL, + TVM_MODEL_IN_W, TVM_MODEL_IN_H, TVM_MODEL_IN_C, MODE_TVM_EMOTIONFP_DRPAI) +{ + /*Initialize opencv container*/ + image.create(TVM_DRPAI_IN_HEIGHT, TVM_DRPAI_IN_WIDTH, CV_8UC2); + image_gray.create(TVM_DRPAI_IN_HEIGHT, TVM_DRPAI_IN_WIDTH, CV_8UC1); + image_resize.create(TVM_MODEL_IN_H, TVM_MODEL_IN_W, CV_8UC1); +} + +/** + * @brief inf_pre_process + * @details Run pre-processing. + * @details For CPU input, use input_data for input data. + * @details For DRP-AI input, use addr for input data stored address + * @param input_data Input data pointer + * @param width new input data width. + * @param height new input data width. + * @param addr Physical address of input data buffer + * @param out output_buf Output data buffer pointer holder + * @param out buf_size Output data buffer size holder + * @return int32_t success:0 error: != 0 + */ +int32_t TVM_EmotionFP_DRPAI:: inf_pre_process(uint8_t* input_data, uint32_t width, uint32_t height, uint32_t addr, float** arg, uint32_t* buf_size) +{ + /*Update width and height*/ + if ((width != _capture_w) || (height != _capture_h)) + { + _capture_w = width; + _capture_h = height; + image.release(); + image.create(_capture_h, _capture_w, CV_8UC2); + image_gray.release(); + image_gray.create(_capture_h, _capture_w, CV_8UC1); + } + + pre_process_cpu(input_data, arg, buf_size); + return 0; +} +/** + * @brief inf_post_process + * @details Run post-processing + * @param arg Inference output data pointer + * @return int32_t success:0 error: != 0 + */ +int32_t TVM_EmotionFP_DRPAI::inf_post_process(float* arg) +{ + postproc_result.clear(); + post_process(postproc_result, arg); + return 0; +} +/** + * @brief print_result + * @details Print AI result on console + * @return int32_t success:0 error: != 0 + */ +int32_t TVM_EmotionFP_DRPAI::print_result() +{ + float x, y, w, h; + uint32_t i = 0; + detection det; + std::string name; + for (i = 0;i Result data + */ +shared_ptr TVM_EmotionFP_DRPAI::get_command() +{ + ObjectDetection* ret = new ObjectDetection(); + detection det; + bbox_t dat; + /*Prepare the command*/ + for (int32_t i = 0;ipredict.push_back(dat); + } + /*Clear overall vectors.*/ + overall_result_prob.clear(); + overall_result_class.clear(); + return shared_ptr(move(ret)); +} + +/** + * @brief pre_process_cpu + * @details implementation pre process for OpenCV + * @param input_data input data buffer + * @param out output_buf output data buffer pointer holder + * @param out buf_size Output data buffer size holder + * @return int8_t success:0 error: != 0 + */ +int8_t TVM_EmotionFP_DRPAI::pre_process_cpu(uint8_t* input_data, float** output_buf, uint32_t* buf_size) +{ + float val = 0; + chw.clear(); + /*Loop variant*/ + int32_t c = 0; + int32_t y = 0; + int32_t x = 0; + + /*Load input image to opencv container*/ + image = cv::Mat(_capture_h, _capture_w, CV_8UC2, (void*)input_data); + /*Color conversion*/ + cv::cvtColor(image, image_gray, cv::COLOR_YUV2GRAY_YUYV); + /*Resize*/ + cv::resize(image_gray, image_resize, cv::Size(_model_w, _model_h), 0, 0, cv::INTER_AREA); + /*Cast to float and Transpose*/ + for (c = 0; c < 
_model_c ; c++) + { + for (y = 0; y < _model_h ; y++) + { + for (x = 0; x < _model_w ; x++) + { + val = (float) image_resize.at(y, x); + chw.push_back(val); + } + } + } + /*Copy output pointer to output_buf*/ + *output_buf = chw.data(); + return 0; +} + +/** + * @brief post_process + * @details implementation post process + * @param result reference to store the classification result + * @param floatarr DRP-AI result + * @return int8_t success:0 error: != 0 + */ +int8_t TVM_EmotionFP_DRPAI::post_process(std::map& result, float* floatarr) +{ + int32_t i = 0; + /* Post-processing */ + CommonFunc::softmax(floatarr, num_class); + /* Sort the score */ + for (i = 0; i < num_class; i++) + { + result[floatarr[i]] = i; + } + + /*Store the top result into overall vectors.*/ + i = 0; + for (reverse_iterator it = result.rbegin(); it != result.rend(); it++) + { + if (i >= TOP_NUM) break; + overall_result_prob.push_back(it->first); + overall_result_class.push_back(it->second); + i++; + } + return 0; +} + diff --git a/how-to/sample_app/src/recognize/emotionfp/tvm_drpai_emotionfp.h b/how-to/sample_app/src/recognize/emotionfp/tvm_drpai_emotionfp.h new file mode 100755 index 0000000..2e81a75 --- /dev/null +++ b/how-to/sample_app/src/recognize/emotionfp/tvm_drpai_emotionfp.h @@ -0,0 +1,96 @@ +/*********************************************************************************************************************** +* DISCLAIMER +* This software is supplied by Renesas Electronics Corporation and is only intended for use with Renesas products. No +* other uses are authorized. This software is owned by Renesas Electronics Corporation and is protected under all +* applicable laws, including copyright laws. +* THIS SOFTWARE IS PROVIDED "AS IS" AND RENESAS MAKES NO WARRANTIES REGARDING +* THIS SOFTWARE, WHETHER EXPRESS, IMPLIED OR STATUTORY, INCLUDING BUT NOT LIMITED TO WARRANTIES OF MERCHANTABILITY, +* FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. ALL SUCH WARRANTIES ARE EXPRESSLY DISCLAIMED. TO THE MAXIMUM +* EXTENT PERMITTED NOT PROHIBITED BY LAW, NEITHER RENESAS ELECTRONICS CORPORATION NOR ANY OF ITS AFFILIATED COMPANIES +* SHALL BE LIABLE FOR ANY DIRECT, INDIRECT, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES FOR ANY REASON RELATED TO THIS +* SOFTWARE, EVEN IF RENESAS OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. +* Renesas reserves the right, without notice, to make changes to this software and to discontinue the availability of +* this software. By using this software, you agree to the additional terms and conditions found by accessing the +* following link: +* http://www.renesas.com/disclaimer +* +* Copyright (C) 2022 Renesas Electronics Corporation. All rights reserved. +***********************************************************************************************************************/ +/*********************************************************************************************************************** +* File Name : tvm_drpai_emotionfp.h +* Version : 1.0.3 +* Description : RZ/V2MA DRP-AI TVM[*1] Sample Application for USB Camera HTTP version +* *1 DRP-AI TVM is powered by EdgeCortix MERA(TM) Compiler Framework. 
+***********************************************************************************************************************/
+
+#pragma once
+
+#ifndef DRP_TVM_MODEL_EMOTIONFP_H
+#define DRP_TVM_MODEL_EMOTIONFP_H
+
+/*****************************************
+* Includes
+******************************************/
+#include "../irecognize_model.h"
+#include "../../includes.h"
+#include "../common/functions.h"
+#include "../command/classification.h"
+#include "../command/object_detection.h"
+#include "opencv2/opencv.hpp"
+
+class TVM_EmotionFP_DRPAI : public IRecognizeModel
+{
+private:
+    constexpr static string_view TVM_MODEL_DIR = "emotion_fp_onnx";
+    constexpr static int32_t TVM_DRPAI_IN_WIDTH = (640);
+    constexpr static int32_t TVM_DRPAI_IN_HEIGHT = (480);
+    constexpr static int32_t TVM_DRPAI_IN_CHANNEL = (2);
+
+    /*Emotion FER Plus Related*/
+    constexpr static string_view MODEL_NAME = "DRP-AI TVM Emotion FER Plus (DRP-AI)";
+    /*DRP-AI Input image information*/
+    constexpr static int32_t TVM_MODEL_IN_C = (1);
+    constexpr static int32_t TVM_MODEL_IN_W = (64);
+    constexpr static int32_t TVM_MODEL_IN_H = (64);
+    /*DRP-AI Output information*/
+    constexpr static int32_t TVM_MODEL_OUT_C = (8);
+    constexpr static int32_t TOP_NUM = (1);
+
+public:
+    TVM_EmotionFP_DRPAI();
+    virtual int32_t inf_pre_process
+        (uint8_t* input_data, uint32_t width, uint32_t height, uint32_t addr, float** arg, uint32_t* buf_size);
+    virtual int32_t inf_post_process(float* arg);
+    virtual shared_ptr get_command();
+    virtual int32_t print_result();
+
+private:
+    int8_t pre_process_cpu(uint8_t* input_data, float** output_buf, uint32_t* buf_size);
+    int8_t post_process(std::map<float, int32_t>& result, float* floatarr);
+
+private:
+    int32_t num_class = TVM_MODEL_OUT_C;
+    std::vector<std::string> emotion_table =
+    {
+        "neutral",
+        "happiness",
+        "surprise",
+        "sadness",
+        "anger",
+        "disgust",
+        "fear",
+        "contempt"
+    };
+    /* OpenCV Mat for pre-processing */
+    cv::Mat image;
+    cv::Mat image_gray;
+    cv::Mat image_resize;
+    /* Variables required for Transpose&Normalize in pre-processing */
+    std::vector<float> chw;
+
+    /* Post-processing result */
+    std::map<float, int32_t> postproc_result;
+    std::vector<float> overall_result_prob;
+    std::vector<int32_t> overall_result_class;
+};
+#endif //DRP_TVM_MODEL_EMOTIONFP_H
diff --git a/how-to/sample_app/src/recognize/googlenet/tvm_drpai_googlenet.cpp b/how-to/sample_app/src/recognize/googlenet/tvm_drpai_googlenet.cpp
new file mode 100755
index 0000000..b16a92d
--- /dev/null
+++ b/how-to/sample_app/src/recognize/googlenet/tvm_drpai_googlenet.cpp
@@ -0,0 +1,200 @@
+/***********************************************************************************************************************
+* DISCLAIMER
+* This software is supplied by Renesas Electronics Corporation and is only intended for use with Renesas products. No
+* other uses are authorized. This software is owned by Renesas Electronics Corporation and is protected under all
+* applicable laws, including copyright laws.
+* THIS SOFTWARE IS PROVIDED "AS IS" AND RENESAS MAKES NO WARRANTIES REGARDING
+* THIS SOFTWARE, WHETHER EXPRESS, IMPLIED OR STATUTORY, INCLUDING BUT NOT LIMITED TO WARRANTIES OF MERCHANTABILITY,
+* FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. ALL SUCH WARRANTIES ARE EXPRESSLY DISCLAIMED.
TO THE MAXIMUM +* EXTENT PERMITTED NOT PROHIBITED BY LAW, NEITHER RENESAS ELECTRONICS CORPORATION NOR ANY OF ITS AFFILIATED COMPANIES +* SHALL BE LIABLE FOR ANY DIRECT, INDIRECT, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES FOR ANY REASON RELATED TO THIS +* SOFTWARE, EVEN IF RENESAS OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. +* Renesas reserves the right, without notice, to make changes to this software and to discontinue the availability of +* this software. By using this software, you agree to the additional terms and conditions found by accessing the +* following link: +* http://www.renesas.com/disclaimer +* +* Copyright (C) 2022 Renesas Electronics Corporation. All rights reserved. +***********************************************************************************************************************/ +/*********************************************************************************************************************** +* File Name : tvm_drpai_googlenet.cpp +* Version : 1.0.3 +* Description : RZ/V2MA DRP-AI TVM[*1] Sample Application for USB Camera HTTP version +* *1 DRP-AI TVM is powered by EdgeCortix MERA(TM) Compiler Framework. +***********************************************************************************************************************/ + +/***************************************** +* Includes +******************************************/ +#include "tvm_drpai_googlenet.h" +TVM_GoogleNet_DRPAI::TVM_GoogleNet_DRPAI() : + IRecognizeModel(0, + TVM_MODEL_DIR.data(), MODEL_NAME.data(), + TVM_DRPAI_IN_WIDTH, TVM_DRPAI_IN_HEIGHT, TVM_DRPAI_IN_CHANNEL, + TVM_MODEL_IN_W, TVM_MODEL_IN_H, TVM_MODEL_IN_C, MODE_TVM_GOOGLENET_DRPAI) +{ + preruntime.Load(pre_dir); + /*Define pre-processing parameter*/ + in_param.pre_in_shape_w = TVM_DRPAI_IN_WIDTH; + in_param.pre_in_shape_h = TVM_DRPAI_IN_HEIGHT; + in_param.pre_in_format = INPUT_YUYV; + in_param.resize_w = TVM_MODEL_IN_W; + in_param.resize_h = TVM_MODEL_IN_H; + in_param.resize_alg = ALG_BILINEAR; + /*Compute normalize coefficient, cof_add/cof_mul for DRP-AI from mean/scale */ + in_param.cof_add[0]= mean[0]; + in_param.cof_add[1]= mean[1]; + in_param.cof_add[2]= mean[2]; + in_param.cof_mul[0]= scale[0]; + in_param.cof_mul[1]= scale[1]; + in_param.cof_mul[2]= scale[2]; + + /*Load label for GoogleNet */ + label_file_map = CommonFunc::load_label_file(LABEL_LIST.data()); + if (label_file_map.empty()) + { + std::cerr << "[ERROR] Failed to load label file: "< TOP_NUM) break; + printf(" Top %d [%5.1f%%] : [%s]\n", result_cnt, it->first * 100, label_file_map[it->second].c_str()); + } + return 0; +} +/** + * @brief get_command + * @details Prepare the command to send via HTTP + * @return shared_ptr Result data + */ +shared_ptr TVM_GoogleNet_DRPAI::get_command() +{ + Classification* ret = new Classification(); + int32_t cnt=0; + classify_t dat; + for (reverse_iterator it = postproc_result.rbegin(); it != postproc_result.rend(); it++) + { + if (cnt == TOP_NUM)break; + cnt++; + dat.names = label_file_map[it->second]; + dat.pred = it->first * 100; + ret->predict.push_back(dat); + } + + return shared_ptr(move(ret)); +} + +/** + * @brief pre_process_drpai + * @details implementation pre process using Pre-processing Runtime. 
+ * @param addr Physical address of input data buffer + * @param out output_buf Output data buffer pointer holder + * @param out buf_size Output data buffer size holder + * @return int8_t success:0 error: != 0 + */ +int8_t TVM_GoogleNet_DRPAI::pre_process_drpai(uint32_t addr, float** output_buf, uint32_t* buf_size) +{ + in_param.pre_in_addr = (uintptr_t) addr; + /*Run pre-processing*/ + preruntime.Pre(&in_param, output_buf, buf_size); + +#ifdef TENTATIVE + /*RGB to BGR*/ + /*Run by CPU since currently DRP-AI Pre-processing Runtime does not support BGR output*/ + int32_t i = 0; + int32_t j = 0; + int32_t w = TVM_MODEL_IN_W; + int32_t h = TVM_MODEL_IN_H; + int32_t c = TVM_MODEL_IN_C; + float tmp_val = 0; + for (i = 0; i < h; i++) + { + for (j = 0; j < w; j++) + { + /*Store R value in tmp_val*/ + tmp_val = (*output_buf)[i*w + j]; + /*R -> B value*/ + (*output_buf)[i*w + j] = (*output_buf)[2*h*w + i*w + j]; + /*B -> R value in tmp_val*/ + (*output_buf)[2*h*w + i*w + j] = tmp_val; + } + } +#endif + return 0; +} + + +/** + * @brief post_process + * @details implementation post process + * @param result reference to store the classification result + * @param floatarr DRP-AI result + * @return int8_t success:0 error: != 0 + */ +int8_t TVM_GoogleNet_DRPAI::post_process(std::map& result, float* floatarr) +{ + int32_t i = 0; + + /* Post-processing */ + /* Note that softmax has been done in ONNX inference. */ + /* Sort the score */ + for (i = 0; i < num_class; i++) + { + result[floatarr[i]] = i; + } + + return 0; +} + diff --git a/how-to/sample_app/src/recognize/googlenet/tvm_drpai_googlenet.h b/how-to/sample_app/src/recognize/googlenet/tvm_drpai_googlenet.h new file mode 100755 index 0000000..1f15557 --- /dev/null +++ b/how-to/sample_app/src/recognize/googlenet/tvm_drpai_googlenet.h @@ -0,0 +1,82 @@ +/*********************************************************************************************************************** +* DISCLAIMER +* This software is supplied by Renesas Electronics Corporation and is only intended for use with Renesas products. No +* other uses are authorized. This software is owned by Renesas Electronics Corporation and is protected under all +* applicable laws, including copyright laws. +* THIS SOFTWARE IS PROVIDED "AS IS" AND RENESAS MAKES NO WARRANTIES REGARDING +* THIS SOFTWARE, WHETHER EXPRESS, IMPLIED OR STATUTORY, INCLUDING BUT NOT LIMITED TO WARRANTIES OF MERCHANTABILITY, +* FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. ALL SUCH WARRANTIES ARE EXPRESSLY DISCLAIMED. TO THE MAXIMUM +* EXTENT PERMITTED NOT PROHIBITED BY LAW, NEITHER RENESAS ELECTRONICS CORPORATION NOR ANY OF ITS AFFILIATED COMPANIES +* SHALL BE LIABLE FOR ANY DIRECT, INDIRECT, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES FOR ANY REASON RELATED TO THIS +* SOFTWARE, EVEN IF RENESAS OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. +* Renesas reserves the right, without notice, to make changes to this software and to discontinue the availability of +* this software. By using this software, you agree to the additional terms and conditions found by accessing the +* following link: +* http://www.renesas.com/disclaimer +* +* Copyright (C) 2022 Renesas Electronics Corporation. All rights reserved. 
+***********************************************************************************************************************/ +/*********************************************************************************************************************** +* File Name : tvm_drpai_googlenet.h +* Version : 1.0.3 +* Description : RZ/V2MA DRP-AI TVM[*1] Sample Application for USB Camera HTTP version +* *1 DRP-AI TVM is powered by EdgeCortix MERA(TM) Compiler Framework. +***********************************************************************************************************************/ + +#pragma once + +#ifndef DRP_TVM_MODEL_GOOGLENET_H +#define DRP_TVM_MODEL_GOOGLENET_H + +/***************************************** +* Includes +******************************************/ +#include "../irecognize_model.h" +#include "../../includes.h" +#include "../common/functions.h" +#include "../common/PreRuntime.h" +#include "../command/classification.h" + +class TVM_GoogleNet_DRPAI : public IRecognizeModel +{ +private: + constexpr static string_view TVM_MODEL_DIR = "googlenet_onnx"; + constexpr static int32_t TVM_DRPAI_IN_WIDTH = (640); + constexpr static int32_t TVM_DRPAI_IN_HEIGHT = (480); + constexpr static int32_t TVM_DRPAI_IN_CHANNEL = (2); + + /*GoogleNet Related*/ + constexpr static string_view MODEL_NAME = "DRP-AI TVM GoogleNet (DRP-AI)"; + constexpr static string_view LABEL_LIST = "synset_words_imagenet.txt"; + /*DRP-AI Input image information*/ + constexpr static int32_t TVM_MODEL_IN_C = (3); + constexpr static int32_t TVM_MODEL_IN_W = (224); + constexpr static int32_t TVM_MODEL_IN_H = (224); + constexpr static int32_t TOP_NUM = (5); + +public: + TVM_GoogleNet_DRPAI(); + virtual int32_t inf_pre_process + (uint8_t* input_data, uint32_t width, uint32_t height, uint32_t addr, float** arg, uint32_t* buf_size); + virtual int32_t inf_post_process(float* arg); + virtual shared_ptr get_command(); + virtual int32_t print_result(); + +private: + int8_t pre_process_drpai(uint32_t addr, float** output_buf, uint32_t* buf_size); + int8_t post_process(std::map& result, float* floatarr); + +private: + /* Pre-processing Runtime variables for pre-processing */ + PreRuntime preruntime; + s_preproc_param_t in_param; + const std::string pre_dir = "preprocess_tvm_v2ma"; + float mean[3] = { -123.68, -116.779, -103.939 }; + float scale[3] = { 1.0, 1.0, 1.0 }; + std::vector label_file_map; + int32_t num_class; + + /* Post-processing result */ + std::map postproc_result; +}; +#endif //DRP_TVM_MODEL_GOOGLENET_H diff --git a/how-to/sample_app/src/recognize/hrnet/tvm_drpai_hrnet.cpp b/how-to/sample_app/src/recognize/hrnet/tvm_drpai_hrnet.cpp index 216a63c..d12195d 100644 --- a/how-to/sample_app/src/recognize/hrnet/tvm_drpai_hrnet.cpp +++ b/how-to/sample_app/src/recognize/hrnet/tvm_drpai_hrnet.cpp @@ -18,7 +18,7 @@ ***********************************************************************************************************************/ /*********************************************************************************************************************** * File Name : tvm_drpai_hrnet.cpp -* Version : 1.0.2 +* Version : 1.0.3 * Description : RZ/V2MA DRP-AI TVM[*1] Sample Application for USB Camera HTTP version * *1 DRP-AI TVM is powered by EdgeCortix MERA(TM) Compiler Framework. 
***********************************************************************************************************************/ @@ -27,20 +27,14 @@ * Includes ******************************************/ #include "tvm_drpai_hrnet.h" -TVM_HRNET_DRPAI::TVM_HRNET_DRPAI() : - IRecognizeModel(NUM_INF_OUT*sizeof(float), +TVM_HRNET_DRPAI::TVM_HRNET_DRPAI(uint8_t id) : + IRecognizeModel(0, TVM_MODEL_DIR.data(), MODEL_NAME.data(), TVM_DRPAI_IN_WIDTH, TVM_DRPAI_IN_HEIGHT, TVM_DRPAI_IN_CHANNEL, - TVM_MODEL_IN_W, TVM_MODEL_IN_H, TVM_MODEL_IN_C, MODE_TVM_HRNET_DRPAI) + TVM_MODEL_IN_W, TVM_MODEL_IN_H, TVM_MODEL_IN_C, id) { preruntime.Load(pre_dir); - /*Define pre-processing parameter*/ - in_param.pre_in_shape_w = PRE_CROPPED_IMAGE_WIDTH; - in_param.pre_in_shape_h = PRE_CROPPED_IMAGE_HEIGHT; - in_param.pre_in_format = INPUT_YUYV; - in_param.resize_w = TVM_MODEL_IN_W; - in_param.resize_h = TVM_MODEL_IN_H; - in_param.resize_alg = ALG_BILINEAR; + /*Compute normalize coefficient, cof_add/cof_mul for DRP-AI from mean/std */ in_param.cof_add[0]= -255*mean[0];//-123.675; in_param.cof_add[1]= -255*mean[1];//-116.28; @@ -49,6 +43,81 @@ TVM_HRNET_DRPAI::TVM_HRNET_DRPAI() : in_param.cof_mul[1]= 1/(stdev[1]*255);//0.017507; in_param.cof_mul[2]= 1/(stdev[2]*255);//0.01742919; + if (id == MODE_TVM_HRNET_DRPAI) + { + model_dir = TVM_MODEL_DIR; + model_name = MODEL_NAME; + _model_w = TVM_MODEL_IN_W; + _model_h = TVM_MODEL_IN_H; + _model_c = TVM_MODEL_IN_C; + std::cout << "DRP-AI TVM HRNet model" << std::endl; + + pre_cropped_image_left = CROPPED_IMAGE_LEFT; + pre_cropped_image_top = CROPPED_IMAGE_TOP; + pre_cropped_image_width = CROPPED_IMAGE_WIDTH; + pre_cropped_image_height = CROPPED_IMAGE_HEIGHT; + + /*Define pre-processing parameter*/ + in_param.pre_in_shape_w = pre_cropped_image_width; + in_param.pre_in_shape_h = pre_cropped_image_height; + in_param.pre_in_format = INPUT_YUYV; + in_param.resize_w = TVM_MODEL_IN_W; + in_param.resize_h = TVM_MODEL_IN_H; + in_param.resize_alg = ALG_BILINEAR; + + num_inf_out = NUM_OUTPUT_W * NUM_OUTPUT_H * NUM_OUTPUT_C; + + hrnet_num_output_c = NUM_OUTPUT_C; + hrnet_num_output_w = NUM_OUTPUT_W; + hrnet_num_output_h = NUM_OUTPUT_H; + hrnet_th_kpt = TH_KPT; + + hrnet_output_width = OUTPUT_WIDTH; + hrnet_output_left = OUTPUT_LEFT; + hrnet_output_adj_x = OUTPUT_ADJ_X; + hrnet_output_height = OUTPUT_HEIGHT; + hrnet_output_top = OUTPUT_TOP; + hrnet_output_adj_y = OUTPUT_ADJ_Y; + } + else if (id == MODE_TVM_HRNETV2_DRPAI) + { + model_dir = TVM_MODEL_DIR_V2; + model_name = MODEL_NAME_V2; + _model_w = TVM_MODEL_IN_W_V2; + _model_h = TVM_MODEL_IN_H_V2; + _model_c = TVM_MODEL_IN_C_V2; + std::cout << "DRP-AI TVM HRNetv2 model" << std::endl; + + pre_cropped_image_left = CROPPED_IMAGE_LEFT_V2; + pre_cropped_image_top = CROPPED_IMAGE_TOP_V2; + pre_cropped_image_width = CROPPED_IMAGE_WIDTH_V2; + pre_cropped_image_height = CROPPED_IMAGE_HEIGHT_V2; + + /*Define pre-processing parameter*/ + in_param.pre_in_shape_w = pre_cropped_image_width; + in_param.pre_in_shape_h = pre_cropped_image_height; + in_param.pre_in_format = INPUT_YUYV; + in_param.resize_w = TVM_MODEL_IN_W_V2; + in_param.resize_h = TVM_MODEL_IN_H_V2; + in_param.resize_alg = ALG_BILINEAR; + + num_inf_out = NUM_OUTPUT_W_V2 * NUM_OUTPUT_H_V2 * NUM_OUTPUT_C_V2; + + hrnet_num_output_c = NUM_OUTPUT_C_V2; + hrnet_num_output_w = NUM_OUTPUT_W_V2; + hrnet_num_output_h = NUM_OUTPUT_H_V2; + hrnet_th_kpt = TH_KPT_V2; + + hrnet_output_width = OUTPUT_WIDTH_V2; + hrnet_output_left = OUTPUT_LEFT_V2; + hrnet_output_adj_x = OUTPUT_ADJ_X_V2; + hrnet_output_height = 
OUTPUT_HEIGHT_V2; + hrnet_output_top = OUTPUT_TOP_V2; + hrnet_output_adj_y = OUTPUT_ADJ_Y_V2; + } + + outBuffSize = num_inf_out; +#ifdef TENTATIVE /* Obtain udmabuf memory area starting address */ int8_t fd = 0; char addr[1024]; @@ -73,7 +142,7 @@ TVM_HRNET_DRPAI::TVM_HRNET_DRPAI() : udmabuf_crop_addr &= 0xFFFFFFFF; /*Add capture buffer offset to udmabuf_crop_addr*/ udmabuf_crop_addr += TVM_DRPAI_IN_WIDTH*TVM_DRPAI_IN_HEIGHT*TVM_DRPAI_IN_CHANNEL*4; - size = PRE_CROPPED_IMAGE_WIDTH * PRE_CROPPED_IMAGE_HEIGHT * TVM_DRPAI_IN_CHANNEL; + size = pre_cropped_image_width * pre_cropped_image_height * TVM_DRPAI_IN_CHANNEL; /*Mmap udmabuf for cropped image*/ udmabuf_fd = open("/dev/udmabuf0", O_RDWR ); @@ -99,6 +168,7 @@ TVM_HRNET_DRPAI::TVM_HRNET_DRPAI() : crop_out_ptr[i] = 0; } } +#endif } #ifdef TENTATIVE TVM_HRNET_DRPAI::~TVM_HRNET_DRPAI() @@ -111,44 +181,30 @@ TVM_HRNET_DRPAI::~TVM_HRNET_DRPAI() } #endif /** - * @brief inf_pre_process_drpai - * @details Run pre-processing using Pre-processing Runtime (DRP-AI) - * @param addr Physical address of input data buffer - * @param out output_buf Output data buffer pointer holder - * @param out buf_size Output data buffer size holder - * @return int32_t success:0 error: != 0 - */ -int32_t TVM_HRNET_DRPAI::inf_pre_process_drpai(uint32_t addr, float** arg, uint32_t* buf_size) -{ - pre_process_drpai(addr, arg, buf_size); - return 0; -} -#ifdef TENTATIVE -/** - * @brief inf_pre_process_hrnet - * @details Run pre-processing using Pre-processing Runtime (DRP-AI) and CPU. Will be deleted in the future. + * @brief inf_pre_process + * @details Run pre-processing. + * @details For CPU input, use input_data for input data. + * @details For DRP-AI input, use addr for input data stored address * @param input_data Input data pointer + * @param width new input data width. + * @param height new input data width. * @param addr Physical address of input data buffer * @param out output_buf Output data buffer pointer holder * @param out buf_size Output data buffer size holder * @return int32_t success:0 error: != 0 */ -int32_t TVM_HRNET_DRPAI::inf_pre_process_hrnet(uint8_t* input_data, uint32_t addr, float** arg, uint32_t* buf_size) +int32_t TVM_HRNET_DRPAI::inf_pre_process(uint8_t* input_data, uint32_t width, uint32_t height, uint32_t addr, float** arg, uint32_t* buf_size) { - pre_process_hrnet(input_data, addr, arg, buf_size); - return 0; -} -#endif -/** - * @brief inf_pre_process_cpu - * @details Run pre-processing using CPU - * @param input_data Input data pointer - * @param out output_buf Output data buffer pointer holder - * @return int32_t success:0 error: != 0 - */ -int32_t TVM_HRNET_DRPAI:: inf_pre_process_cpu(uint8_t* input_data, float** output_buf) -{ - /*Do nothing*/ + /*Update width and height*/ + if ((width != _capture_w) || (height != _capture_h)) + { + _capture_w = width; + _capture_h = height; + in_param.pre_in_shape_w = _capture_w; + in_param.pre_in_shape_h = _capture_h; + } + + pre_process(input_data, addr, arg, buf_size); return 0; } /** @@ -182,14 +238,14 @@ int32_t TVM_HRNET_DRPAI::print_result() /** * @brief hrnet_offset * @details Get the offset number to access the HRNet attributes -* @param b = Number to indicate which region [0~17] +* @param b = Number to indicate which region [0~17] (HRNet) [0~21] (HRNetv2) * @param y = Number to indicate which region [0~64] -* @param x = Number to indicate which region [0~48] +* @param x = Number to indicate which region [0~48] (HRNet) [0~64] (HRNetv2) * @return int32_t offset to access the HRNet attributes. 
*/ int32_t TVM_HRNET_DRPAI::hrnet_offset(int32_t b, int32_t y, int32_t x) { - return b * HRNET_NUM_OUTPUT_W * HRNET_NUM_OUTPUT_H + y * HRNET_NUM_OUTPUT_W + x; + return b * hrnet_num_output_w * hrnet_num_output_h + y * hrnet_num_output_w + x; } /** * @brief get_command @@ -206,29 +262,7 @@ shared_ptr TVM_HRNET_DRPAI::get_command() return shared_ptr(move(ret)); } /** - * @brief pre_process_drpai - * @details implementation pre process using Pre-processing Runtime. - * @param addr Physical address of input data buffer - * @param out output_buf Output data buffer pointer holder - * @param out buf_size Output data buffer size holder - * @return int8_t success:0 error: != 0 - */ -int8_t TVM_HRNET_DRPAI::pre_process_drpai(uint32_t addr, float** output_buf, uint32_t* buf_size) -{ - in_param.pre_in_addr = (uintptr_t) addr; - /*Run pre-processing*/ - preruntime.Pre(&in_param, output_buf, buf_size); - return 0; -} - -int8_t TVM_HRNET_DRPAI::pre_process_cpu(uint8_t* input_data, float** output_buf) -{ - return 0; -} - -#ifdef TENTATIVE -/** - * @brief pre_process_hrnet + * @brief pre_process * @details implementation pre process using Pre-processing Runtime and CPU. * @param input_data Input data pointer * @param addr Physical address of input data buffer @@ -236,15 +270,16 @@ int8_t TVM_HRNET_DRPAI::pre_process_cpu(uint8_t* input_data, float** output_buf) * @param out buf_size Output data buffer size holder * @return int8_t success:0 error: != 0 */ -int8_t TVM_HRNET_DRPAI::pre_process_hrnet(uint8_t* input_data, uint32_t addr, float** arg, uint32_t* buf_size) +int8_t TVM_HRNET_DRPAI::pre_process(uint8_t* input_data, uint32_t addr, float** arg, uint32_t* buf_size) { +#ifdef TENTATIVE uint8_t err_crop = 0; uint32_t x; uint32_t y; - uint32_t top = PRE_CROPPED_IMAGE_TOP; - uint32_t bottom = top + PRE_CROPPED_IMAGE_HEIGHT; - uint32_t left = PRE_CROPPED_IMAGE_LEFT / 2; - uint32_t right = left + PRE_CROPPED_IMAGE_WIDTH / 2; + uint32_t top = pre_cropped_image_top; + uint32_t bottom = top + pre_cropped_image_height; + uint32_t left = pre_cropped_image_left / 2; + uint32_t right = left + pre_cropped_image_width / 2; uint32_t index = 0; drpai_data_t drpai_data; for (y = top; y < bottom; y++) @@ -255,13 +290,12 @@ int8_t TVM_HRNET_DRPAI::pre_process_hrnet(uint8_t* input_data, uint32_t addr, fl index += YUY2_NUM_DATA; } } - +#endif in_param.pre_in_addr = (uintptr_t) udmabuf_crop_addr; /*Run pre-processing*/ preruntime.Pre(&in_param, arg, buf_size); return 0; } -#endif /** * @brief sign * @details Get the sign of the input value @@ -278,23 +312,23 @@ int8_t TVM_HRNET_DRPAI::sign(int32_t x) * @param result reference to store result * @param preds postproce result */ -void TVM_HRNET_DRPAI::coord_convert(vector &result, float preds[][3]) +void TVM_HRNET_DRPAI::coord_convert(vector &result, vector> &preds) { /* Render skeleton on image and print their details */ int32_t posx = 0; int32_t posy = 0; int8_t i = 0; result.clear(); - for (i = 0; i < HRNET_NUM_OUTPUT_C; i++) + for (i = 0; i < hrnet_num_output_c; i++) { /* 0.5 is round */ - posx = (int32_t)(preds[i][0] / HRNET_CROPPED_IMAGE_WIDTH * HRNET_OUTPUT_WIDTH + 0.5) + HRNET_OUTPUT_LEFT + HRNET_OUTPUT_ADJ_X; - posy = (int32_t)(preds[i][1] / HRNET_CROPPED_IMAGE_HEIGHT * HRNET_OUTPUT_HEIGHT + 0.5) + HRNET_OUTPUT_TOP + HRNET_OUTPUT_ADJ_Y; + posx = (int32_t)(preds.at(i).at(0) / HRNET_CROPPED_IMAGE_WIDTH * hrnet_output_width + 0.5) + hrnet_output_left + hrnet_output_adj_x; + posy = (int32_t)(preds.at(i).at(1) / HRNET_CROPPED_IMAGE_HEIGHT * hrnet_output_height + 0.5) + 
hrnet_output_top + hrnet_output_adj_y; pos_t p; p.X = posx; p.Y = posy; - p.preds = preds[i][2] * 100; + p.preds = preds.at(i).at(2) * 100; result.push_back(p); } return; @@ -322,17 +356,17 @@ int8_t TVM_HRNET_DRPAI::post_process(vector &result,float* floatarr) int8_t ind_y = -1; float max_val = -1; float scale_x, scale_y, coords_x, coords_y; - float hrnet_preds[HRNET_NUM_OUTPUT_C][3]; + vector> hrnet_preds(hrnet_num_output_c, vector(3)); - for (b = 0; b < HRNET_NUM_OUTPUT_C; b++) + for (b = 0; b < hrnet_num_output_c; b++) { float scale[] = { HRNET_CROPPED_IMAGE_WIDTH / 200.0, HRNET_CROPPED_IMAGE_HEIGHT / 200.0 }; ind_x = -1; ind_y = -1; max_val = -1; - for (y = 0; y < HRNET_NUM_OUTPUT_H; y++) + for (y = 0; y < hrnet_num_output_h; y++) { - for (x = 0; x < HRNET_NUM_OUTPUT_W; x++) + for (x = 0; x < hrnet_num_output_w; x++) { offs = hrnet_offset(b, y, x); if (floatarr[offs] > max_val) @@ -351,18 +385,18 @@ int8_t TVM_HRNET_DRPAI::post_process(vector &result,float* floatarr) lowest_kpt_score = 0; return -1 ; } - hrnet_preds[b][0] = float(ind_x); - hrnet_preds[b][1] = float(ind_y); - hrnet_preds[b][2] = max_val; + hrnet_preds.at(b).at(0) = float(ind_x); + hrnet_preds.at(b).at(1) = float(ind_y); + hrnet_preds.at(b).at(2) = max_val; offs = hrnet_offset(b, ind_y, ind_x); - if (ind_y > 1 && ind_y < HRNET_NUM_OUTPUT_H - 1) + if (ind_y > 1 && ind_y < hrnet_num_output_h - 1) { - if (ind_x > 1 && ind_x < HRNET_NUM_OUTPUT_W - 1) + if (ind_x > 1 && ind_x < hrnet_num_output_w - 1) { float diff_x = floatarr[offs + 1] - floatarr[offs - 1]; - float diff_y = floatarr[offs + HRNET_NUM_OUTPUT_W] - floatarr[offs - HRNET_NUM_OUTPUT_W]; - hrnet_preds[b][0] += sign(diff_x) * 0.25; - hrnet_preds[b][1] += sign(diff_y) * 0.25; + float diff_y = floatarr[offs + hrnet_num_output_w] - floatarr[offs - hrnet_num_output_w]; + hrnet_preds.at(b).at(0) += sign(diff_x) * 0.25; + hrnet_preds.at(b).at(1) += sign(diff_y) * 0.25; } } @@ -370,28 +404,28 @@ int8_t TVM_HRNET_DRPAI::post_process(vector &result,float* floatarr) scale[0] *= 200; scale[1] *= 200; /* udp (Unbiased Data Processing) = False */ - scale_x = scale[0] / (HRNET_NUM_OUTPUT_W); - scale_y = scale[1] / (HRNET_NUM_OUTPUT_H); - coords_x = hrnet_preds[b][0]; - coords_y = hrnet_preds[b][1]; - hrnet_preds[b][0] = coords_x * scale_x + center[0] - scale[0] * 0.5; - hrnet_preds[b][1] = coords_y * scale_y + center[1] - scale[1] * 0.5; + scale_x = scale[0] / (hrnet_num_output_w); + scale_y = scale[1] / (hrnet_num_output_h); + coords_x = hrnet_preds.at(b).at(0); + coords_y = hrnet_preds.at(b).at(1); + hrnet_preds.at(b).at(0) = coords_x * scale_x + center[0] - scale[0] * 0.5; + hrnet_preds.at(b).at(1) = coords_y * scale_y + center[1] - scale[1] * 0.5; } /* Clear the score in preparation for the update. */ lowest_kpt_score = 0; score = 1; - for (i = 0; i < HRNET_NUM_OUTPUT_C; i++) + for (i = 0; i < hrnet_num_output_c; i++) { /* Adopt the lowest score. */ - if (hrnet_preds[i][2] < score) + if (hrnet_preds.at(i).at(2) < score) { - score = hrnet_preds[i][2]; + score = hrnet_preds.at(i).at(2); } } /* Update the score for display thread. 
*/ lowest_kpt_score = score; - if (HRNET_TH_KPT < lowest_kpt_score) + if (hrnet_th_kpt < lowest_kpt_score) { coord_convert(result, hrnet_preds); } diff --git a/how-to/sample_app/src/recognize/hrnet/tvm_drpai_hrnet.h b/how-to/sample_app/src/recognize/hrnet/tvm_drpai_hrnet.h index 97c63f2..45a8161 100644 --- a/how-to/sample_app/src/recognize/hrnet/tvm_drpai_hrnet.h +++ b/how-to/sample_app/src/recognize/hrnet/tvm_drpai_hrnet.h @@ -18,7 +18,7 @@ ***********************************************************************************************************************/ /*********************************************************************************************************************** * File Name : tvm_drpai_hrnet.h -* Version : 1.0.2 +* Version : 1.0.3 * Description : RZ/V2MA DRP-AI TVM[*1] Sample Application for USB Camera HTTP version * *1 DRP-AI TVM is powered by EdgeCortix MERA(TM) Compiler Framework. ***********************************************************************************************************************/ @@ -43,53 +43,80 @@ class TVM_HRNET_DRPAI : public IRecognizeModel { private: - constexpr static string_view TVM_MODEL_DIR = "hrnet_onnx"; + /*Common*/ constexpr static int32_t TVM_DRPAI_IN_WIDTH = (640); constexpr static int32_t TVM_DRPAI_IN_HEIGHT = (480); /*HRNet Related*/ + constexpr static string_view TVM_MODEL_DIR = "hrnet_onnx"; constexpr static string_view MODEL_NAME = "DRP-AI TVM HRNet (DRP-AI)"; - constexpr static int32_t HRNET_NUM_OUTPUT_C = (17); - constexpr static int32_t HRNET_NUM_OUTPUT_W = (48); - constexpr static int32_t HRNET_NUM_OUTPUT_H = (64); - constexpr static int32_t NUM_INF_OUT = HRNET_NUM_OUTPUT_W * HRNET_NUM_OUTPUT_H * HRNET_NUM_OUTPUT_C; - constexpr static float HRNET_TH_KPT = (0.1f); + constexpr static int32_t NUM_OUTPUT_C = (17); + constexpr static int32_t NUM_OUTPUT_W = (48); + constexpr static int32_t NUM_OUTPUT_H = (64); + constexpr static float TH_KPT = (0.1f); + + /*HRNetV2 Related*/ + constexpr static string_view TVM_MODEL_DIR_V2 = "hrnetv2_pt"; + constexpr static string_view MODEL_NAME_V2 = "DRP-AI TVM HRNetV2 (DRP-AI)"; + constexpr static int32_t NUM_OUTPUT_C_V2 = (21); + constexpr static int32_t NUM_OUTPUT_W_V2 = (64); + constexpr static int32_t NUM_OUTPUT_H_V2 = (64); + constexpr static float TH_KPT_V2 = (0.15f); /*DRP-AI Input image information*/ + /*Common*/ constexpr static int32_t TVM_DRPAI_IN_CHANNEL = (2); + /*HRNet*/ constexpr static int32_t TVM_MODEL_IN_C = (3); constexpr static int32_t TVM_MODEL_IN_W = (192); constexpr static int32_t TVM_MODEL_IN_H = (256); + /*HRNetv2*/ + constexpr static int32_t TVM_MODEL_IN_C_V2 = (3); + constexpr static int32_t TVM_MODEL_IN_W_V2 = (256); + constexpr static int32_t TVM_MODEL_IN_H_V2 = (256); /*Cropping Image Related*/ - constexpr static float HRNET_CROPPED_IMAGE_WIDTH = (TVM_DRPAI_IN_WIDTH); - constexpr static float HRNET_CROPPED_IMAGE_HEIGHT = (TVM_DRPAI_IN_HEIGHT); - constexpr static int32_t PRE_CROPPED_IMAGE_LEFT = (184); //Only Even numbers can be set - constexpr static int32_t PRE_CROPPED_IMAGE_TOP = (0); - constexpr static int32_t PRE_CROPPED_IMAGE_WIDTH = (270); - constexpr static int32_t PRE_CROPPED_IMAGE_HEIGHT = (TVM_DRPAI_IN_HEIGHT); + /*Common*/ constexpr static int32_t YUY2_NUM_CHANNEL = (2); constexpr static int32_t YUY2_NUM_DATA = (4); - - /*HRNet Post Processing & Drawing Related*/ - constexpr static float HRNET_OUTPUT_LEFT = (276 * (TVM_DRPAI_IN_WIDTH / 960.0f)); - constexpr static float HRNET_OUTPUT_TOP = (0); - constexpr static float HRNET_OUTPUT_WIDTH = (405 * 
(TVM_DRPAI_IN_WIDTH / 960.0f)); - constexpr static float HRNET_OUTPUT_HEIGHT = (TVM_DRPAI_IN_HEIGHT); - constexpr static float HRNET_OUTPUT_ADJ_X = (2); - constexpr static float HRNET_OUTPUT_ADJ_Y = (0); + constexpr static float HRNET_CROPPED_IMAGE_WIDTH = (TVM_DRPAI_IN_WIDTH); + constexpr static float HRNET_CROPPED_IMAGE_HEIGHT = (TVM_DRPAI_IN_HEIGHT); + /*HRNet*/ + constexpr static int32_t CROPPED_IMAGE_LEFT = (184); //Only Even numbers can be set + constexpr static int32_t CROPPED_IMAGE_TOP = (0); + constexpr static int32_t CROPPED_IMAGE_WIDTH = (270); + constexpr static int32_t CROPPED_IMAGE_HEIGHT = (TVM_DRPAI_IN_HEIGHT); + /*HRNetv2*/ + constexpr static int32_t CROPPED_IMAGE_LEFT_V2 = (80); //Only Even numbers can be set + constexpr static int32_t CROPPED_IMAGE_TOP_V2 = (0); + constexpr static int32_t CROPPED_IMAGE_WIDTH_V2 = (480); + constexpr static int32_t CROPPED_IMAGE_HEIGHT_V2 = (TVM_DRPAI_IN_HEIGHT); + + /*Post Processing & Drawing Related*/ + /*HRNet*/ + constexpr static float OUTPUT_LEFT = (276 * (TVM_DRPAI_IN_WIDTH / 960.0f)); + constexpr static float OUTPUT_TOP = (0); + constexpr static float OUTPUT_WIDTH = (405 * (TVM_DRPAI_IN_WIDTH / 960.0f)); + constexpr static float OUTPUT_HEIGHT = (TVM_DRPAI_IN_HEIGHT); + constexpr static float OUTPUT_ADJ_X = (2); + constexpr static float OUTPUT_ADJ_Y = (0); + /*HRNetv2*/ + constexpr static float OUTPUT_LEFT_V2 = (120 * (TVM_DRPAI_IN_WIDTH / 960.0f)); + constexpr static float OUTPUT_TOP_V2 = (0); + constexpr static float OUTPUT_WIDTH_V2 = (720 * (TVM_DRPAI_IN_WIDTH / 960.0f)); + constexpr static float OUTPUT_HEIGHT_V2 = (TVM_DRPAI_IN_HEIGHT); + constexpr static float OUTPUT_ADJ_X_V2 = (2); + constexpr static float OUTPUT_ADJ_Y_V2 = (0); public: TVM_HRNET_DRPAI(); + TVM_HRNET_DRPAI(uint8_t id); #ifdef TENTATIVE ~TVM_HRNET_DRPAI(); #endif - virtual int32_t inf_pre_process_drpai(uint32_t addr, float** arg, uint32_t* buf_size); - virtual int32_t inf_pre_process_cpu(uint8_t* input_data, float** output_buf); -#ifdef TENTATIVE - virtual int32_t inf_pre_process_hrnet(uint8_t* input_data, uint32_t addr, float** arg, uint32_t* buf_size); -#endif + virtual int32_t inf_pre_process + (uint8_t* input_data, uint32_t width, uint32_t height, uint32_t addr, float** arg, uint32_t* buf_size); virtual int32_t inf_post_process(float* arg); virtual shared_ptr get_command(); virtual int32_t print_result(); @@ -97,11 +124,7 @@ class TVM_HRNET_DRPAI : public IRecognizeModel int32_t hrnet_offset(int32_t b, int32_t y, int32_t x); private: - int8_t pre_process_drpai(uint32_t addr, float** output_buf, uint32_t* buf_size); - int8_t pre_process_cpu(uint8_t* input_data, float** output_buf); -#ifdef TENTATIVE - int8_t pre_process_hrnet(uint8_t* input_data, uint32_t addr, float** arg, uint32_t* buf_size); -#endif + int8_t pre_process(uint8_t* input_data, uint32_t addr, float** output_buf, uint32_t* buf_size); int8_t post_process(vector& result, float* floatarr); private: @@ -113,7 +136,7 @@ class TVM_HRNET_DRPAI : public IRecognizeModel float stdev[3] = { 0.229, 0.224, 0.225 }; int8_t sign(int32_t x); - void coord_convert(vector& result, float preds[][3]); + void coord_convert(vector& result, vector>& preds); #ifdef TENTATIVE int8_t udmabuf_fd = 0; uint8_t * crop_out_ptr; @@ -122,7 +145,25 @@ class TVM_HRNET_DRPAI : public IRecognizeModel #endif /* Post-processing result */ vector postproc_result; - + /* Number of DRP-AI output */ + uint32_t num_inf_out; + + int32_t pre_cropped_image_left; + int32_t pre_cropped_image_top; + int32_t pre_cropped_image_width; + int32_t 
pre_cropped_image_height; + + int32_t hrnet_num_output_c; + int32_t hrnet_num_output_w; + int32_t hrnet_num_output_h; + float hrnet_th_kpt; + + float hrnet_output_width; + float hrnet_output_left; + float hrnet_output_adj_x; + float hrnet_output_height; + float hrnet_output_top; + float hrnet_output_adj_y; }; #endif //DRP_TVM_MODEL_HRNET_H diff --git a/how-to/sample_app/src/recognize/irecognize_model.h b/how-to/sample_app/src/recognize/irecognize_model.h index cc4af45..6d45f17 100644 --- a/how-to/sample_app/src/recognize/irecognize_model.h +++ b/how-to/sample_app/src/recognize/irecognize_model.h @@ -18,7 +18,7 @@ ***********************************************************************************************************************/ /*********************************************************************************************************************** * File Name : irecognize_model.h -* Version : 1.0.2 +* Version : 1.0.3 * Description : RZ/V2MA DRP-AI TVM[*1] Sample Application for USB Camera HTTP version * *1 DRP-AI TVM is powered by EdgeCortix MERA(TM) Compiler Framework. ***********************************************************************************************************************/ @@ -33,6 +33,9 @@ #include "../includes.h" #include "../command/predict_notify_base.h" #include "common/recognize_define.h" + +#include "common/box.h" + using namespace std; class IRecognizeModel @@ -55,11 +58,7 @@ class IRecognizeModel std::cout << drpprefix << std::endl; } virtual ~IRecognizeModel() {} - virtual int32_t inf_pre_process_drpai(uint32_t addr, float** arg, uint32_t* buf_size) { return 0; } - virtual int32_t inf_pre_process_cpu(uint8_t* input_data, float** output_buf) { return 0; } -#ifdef TENTATIVE - virtual int32_t inf_pre_process_hrnet(uint8_t* input_data, uint32_t addr, float** arg, uint32_t* buf_size) { return 0; } -#endif + virtual int32_t inf_pre_process(uint8_t* input_data, uint32_t width, uint32_t height, uint32_t addr, float** arg, uint32_t* buf_size) { return 0; } virtual int32_t inf_post_process(float* arg) { return 0; } virtual int32_t print_result() { return 0; } virtual shared_ptr get_command() { return NULL; } @@ -75,5 +74,7 @@ class IRecognizeModel int32_t _model_h; int32_t _model_c; uint8_t _id; + /* Only for pre face detection. 
post-processing result */ + std::vector detected_data; }; #endif diff --git a/how-to/sample_app/src/recognize/recognize_base.cpp b/how-to/sample_app/src/recognize/recognize_base.cpp index a0b2744..483d633 100644 --- a/how-to/sample_app/src/recognize/recognize_base.cpp +++ b/how-to/sample_app/src/recognize/recognize_base.cpp @@ -42,7 +42,7 @@ ***********************************************************************************************************************/ /*********************************************************************************************************************** * File Name : recognize_base.cpp -* Version : 1.0.2 +* Version : 1.0.3 * Description : RZ/V2MA DRP-AI TVM[*1] Sample Application for USB Camera HTTP version ***********************************************************************************************************************/ @@ -96,6 +96,20 @@ void print_measure_log(string item, string log) printf("[MeasLog],%s,%s\n", item.c_str(), log.c_str()); } +/** + * @brief send_app_message + * @details send message to web client + * @param message to send + */ +void RecognizeBase::send_app_message(string message) +{ + AppMessage app_mes; + app_mes.message = message.c_str(); + printf("Send application message to web client.[%s]\n",message.c_str()); + _server->send_command(app_mes.CreateRequest()); +} + + /** * @brief float16_to_float32 * @details Function by Edgecortex. Convert uin16_t number into float value. @@ -132,10 +146,60 @@ int32_t RecognizeBase::initialize(IRecognizeModel* model) model_h = _model->_model_h; model_c = _model->_model_c; mode = _model->_id; + mode_2 = MODE_TVM_UNKNOWN; return 0; } +/** + * @brief initialize + * @details Initialization for recognize process that uses two models. + * @param model Model to be run first + * @param model2 Model to be run second + * @return int32_t success:0 error: != 0 + */ +int32_t RecognizeBase::initialize(IRecognizeModel* model, IRecognizeModel* model_2) +{ + std::cout << "############ INIT ############" << std::endl; + std::cout << "############ MODEL 1 ############" << std::endl; + _model = shared_ptr(move(model)); + + std::cout << "[INFO] Model :" << _model->model_name << std::endl; + std::cout << "[INFO] Directory :" << _model->model_dir << std::endl; + std::cout << "[INFO] outbuff :" << _model->outBuffSize << std::endl; + + _outBuffSize = _model->outBuffSize; + dir = _model->model_dir + "/"; + + cap_w = _model->_capture_w; + cap_h = _model->_capture_h; + cap_c = _model->_capture_c; + model_w = _model->_model_w; + model_h = _model->_model_h; + model_c = _model->_model_c; + mode = _model->_id; + + std::cout << "############ MODEL 2 ############" << std::endl; + _model_2 = shared_ptr(move(model_2)); + + + std::cout << "[INFO] Second Model :" << _model_2->model_name << std::endl; + std::cout << "[INFO] Second Model Directory :" << _model_2->model_dir << std::endl; + std::cout << "[INFO] Second Model outbuff :" << _model_2->outBuffSize << std::endl; + + _outBuffSize_2 = _model_2->outBuffSize; + dir_2 = _model_2->model_dir + "/"; + + cap_w_2 = _model_2->_capture_w; + cap_h_2 = _model_2->_capture_h; + cap_c_2 = _model_2->_capture_c; + model_w_2 = _model_2->_model_w; + model_h_2 = _model_2->_model_h; + model_c_2 = _model_2->_model_c; + mode_2 = _model_2->_id; + + return 0; +} /** * @brief recognize_start * @details Start recognition @@ -157,6 +221,7 @@ int32_t RecognizeBase::recognize_start() if (0 != ret) { fprintf(stderr, "[ERROR] Failed to initialize USB Camera.\n"); + send_app_message("Failed to initialize USB 
Camera.\nCheck the camera connection."); return -1; } @@ -168,7 +233,8 @@ int32_t RecognizeBase::recognize_start() int32_t create_thread_cap = pthread_create(&_pthread_capture, NULL, capture_thread, this); if (0 != create_thread_cap) { - fprintf(stderr, "[ERROR] Failed to create AI Inference Thread.\n"); + fprintf(stderr, "[ERROR] Failed to create Capture Thread.\n"); + send_app_message("Failed to create Capture Thread.\nRestart the application."); return -1; } @@ -184,6 +250,7 @@ int32_t RecognizeBase::recognize_start() if (0 != create_thread_ai) { fprintf(stderr, "[ERROR] Failed to create AI Inference Thread.\n"); + send_app_message("Failed to create AI Inference Thread.\nRestart the application."); return -1; } @@ -195,6 +262,7 @@ int32_t RecognizeBase::recognize_start() if (0 != framerate_thread_cap) { fprintf(stderr, "[ERROR] Failed to create Framerate Thread.\n"); + send_app_message("Failed to create Framerate Thread.\nRestart the application."); return -1; } @@ -284,10 +352,10 @@ void* RecognizeBase::capture_thread(void* arg) #endif me->_camera_frame_count.store(me->_camera_frame_count.load() + 1); - if (0 == capture_addr) { fprintf(stderr, "[ERROR] Failed to _capture image from camera.\n"); + me->send_app_message("Failed to _capture image from camera.\nRestart the application."); break; } else @@ -334,6 +402,7 @@ void* RecognizeBase::capture_thread(void* arg) if (0 != ret) { fprintf(stderr, "[ERROR] Failed to enqueue _capture buffer.\n"); + me->send_app_message("Failed to enqueue _capture buffer.\nRestart the application."); break; } } /*End of Loop*/ @@ -370,9 +439,12 @@ void* RecognizeBase::tvm_inference_thread(void* arg) float preproc_time = 0; /* DRP-AI TVM[*1] Runtime object */ MeraDrpRuntimeWrapper runtime; + MeraDrpRuntimeWrapper runtime_2; /*Pre-processing output buffer pointer (DRP-AI TVM[*1] input data)*/ float* pre_output_ptr; uint32_t out_size; + float* pre_output_ptr_2; + uint32_t out_size_2; /*Inference Variables*/ int32_t inf_cnt = -1; @@ -381,15 +453,37 @@ void* RecognizeBase::tvm_inference_thread(void* arg) /*Inference output buffer*/ shared_ptr drpai_output_buf; recognizeData_t data; + InOutDataType input_data_type; + InOutDataType input_data_type_2; printf("Inference Thread Starting\n"); + if (!me->model_exist(me->dir)) + { + fprintf(stderr, "Please prepare the Model Object according to the GitHub (https://github.com/renesas-rz/rzv_drp-ai_tvm)\n"); + me->send_app_message("Failed to load Model Object : "+me->dir+"\nPrepare the Model Object according to the GitHub (https://github.com/renesas-rz/rzv_drp-ai_tvm)"); + return 0; + } + /*DRP-AI TVM[*1]::Load model_dir structure and its weight to runtime object */ runtime.LoadModel(me->dir); /*DRP-AI TVM[*1]::Get input data type*/ - auto input_data_type = runtime.GetInputDataType(0); + input_data_type = runtime.GetInputDataType(0); + if (MODE_TVM_UNKNOWN != me->mode_2) + { + if (!me->model_exist(me->dir_2)) + { + fprintf(stderr, "Please prepare the Model Object according to the GitHub (https://github.com/renesas-rz/rzv_drp-ai_tvm)\n"); + me->send_app_message("Failed to load Model Object : "+me->dir_2+"\nPrepare the Model Object according to the GitHub (https://github.com/renesas-rz/rzv_drp-ai_tvm)"); + return 0; + } + /*DRP-AI TVM[*1]::Load model_dir structure and its weight to runtime object */ + runtime_2.LoadModel(me->dir_2); + /*DRP-AI TVM[*1]::Get input data type*/ + input_data_type_2 = runtime_2.GetInputDataType(0); + } /*Inference Loop Start*/ while (me->_inf_running) { @@ -404,7 +498,7 @@ void* 
RecognizeBase::tvm_inference_thread(void* arg) } /*Pre-process*/ me->get_time(start_time); - me->inference_preprocess(arg, &pre_output_ptr, &out_size); + me->inference_preprocess(arg, me->mode, me->cap_w, me->cap_h, &pre_output_ptr, &out_size); me->get_time(end_time); preproc_time = (float)((me->timedifference_msec(start_time, end_time))); print_measure_log("AI preprocess Time", preproc_time, "ms"); @@ -419,7 +513,8 @@ void* RecognizeBase::tvm_inference_thread(void* arg) else { std::cerr << "[ERROR] Input data type : not FP32." << std::endl; - return 0; + me->send_app_message("Unsupported Input data type: not FP32."); + break; } /**DRP-AI TVM[*1]::Start Inference*/ @@ -440,18 +535,19 @@ void* RecognizeBase::tvm_inference_thread(void* arg) auto output_num = runtime.GetNumOutput(); drpai_output_buf.reset(new float[me->_outBuffSize], std::default_delete()); size_count = 0; + /*GetOutput loop*/ for (int i = 0;i(output_buffer). */ - int64_t out_size = std::get<2>(output_buffer); + int64_t output_size = std::get<2>(output_buffer); /*Output Data Type = std::get<0>(output_buffer)*/ if (InOutDataType::FLOAT16 == std::get<0>(output_buffer)) { /*Output Data = std::get<1>(output_buffer)*/ uint16_t* data_ptr = reinterpret_cast(std::get<1>(output_buffer)); - for (int j = 0; j(output_buffer)*/ float* data_ptr = reinterpret_cast(std::get<1>(output_buffer)); - for (int j = 0; jsend_app_message("Unsupported Output data type: not floating point."); + ret = -1; break; } - size_count += out_size; + size_count += output_size; + } + /*Error check in the GetOutput loop*/ + if (0 != ret) + { + break; } /*Fill AI Inference result structure*/ data.predict_image = me->input_data; data.predict_result = move(drpai_output_buf); - data.drp_time_ms = ai_time; + data.inf_time_ms = ai_time; data.preproc_time_ms = preproc_time; + /*Post-process start (AI inference result postprocess + image compress + JSON data sending)*/ - me->inference_postprocess(arg, data); + me->inference_postprocess(arg, me->mode, data); + + /*Second model processing starts*/ + uint32_t second_inf_cnt = 0; + if (MODE_TVM_UNKNOWN != me->mode_2) + { + float ai_time_2 = 0; + float preproc_time_2 = preproc_time + ai_time + data.postproc_time_ms; + /*Delete the Model 1 postprocessing time */ + data.postproc_time_ms = 0; + int32_t x = 0; + int32_t y = 0; + int32_t w = 0; + int32_t h = 0; + int32_t top = 0; + int32_t bottom = 0; + int32_t left = 0; + int32_t right = 0; + int32_t index = 0; + uint8_t first_input_data[me->cap_w*me->cap_h*me->cap_c]; + /*Copy the capture data to temporary buffer*/ + std::memcpy(first_input_data, me->input_data, me->cap_w*me->cap_h*me->cap_c*sizeof(uint8_t)); + + /*Get face detection result*/ + std::vector res = me->_model->detected_data; + me->_model_2->detected_data.clear(); + + /*Detection loop*/ + for (detection detected : res) + { + if (0 == detected.prob) continue; + + me->get_time(start_time); + + /*Copy the detected data to second model object*/ + me->_model_2->detected_data.push_back(detected); + + /*Crop detected area*/ + x = (int32_t) detected.bbox.x; + y = (int32_t) detected.bbox.y; + w = (int32_t) detected.bbox.w; + h = (int32_t) detected.bbox.h; + /*CPU YUYV only supports even number width.*/ + if (0!= (w % 2)) + { + if (0 > (w-1)) w -= 1; + else w += 1; + } + if (0 > x) x = 0; + if (me->cap_w - w <= x) x = me->cap_w - w - 1; + if (0 > y) y = 0; + if (me->cap_h - h <= y) y = me->cap_h - h - 1; + + uint8_t crop_out_ptr[w*h*me->cap_c]; + top = y; + bottom = top + h; + left = x / 2; + right = left + w / 2; + index = 0; 
+ for (int j = top; j < bottom; j++) + { + for (int k = left; k < right; k++) + { + *((uint32_t *)&crop_out_ptr[index]) = *((uint32_t *)&first_input_data[j * me->cap_w * YUY2_NUM_CHANNEL + k * YUY2_NUM_DATA]); + index += YUY2_NUM_DATA; + } + } + + /*Second model pre-processing*/ + std::memcpy(me->input_data, crop_out_ptr, w*h*me->cap_c*sizeof(uint8_t)); + me->inference_preprocess(arg, me->mode_2, w, h, &pre_output_ptr_2, &out_size_2); + me->get_time(end_time); + preproc_time_2 +=(float)((me->timedifference_msec(start_time, end_time))); + + if (InOutDataType::FLOAT32 == input_data_type_2) + { + runtime_2.SetInput(0, pre_output_ptr_2); + } + else + { + std::cerr << "[ERROR] Second Model Input data type : not FP32." << std::endl; + me->send_app_message("For Second Model\nUnsupported Input data type: not FP32."); + ret = -1; + break; + } + errno = 0; + printf("For each detection ----------- No. %d\n", (second_inf_cnt++ + 1)); + /*Gets inference starting time*/ + me->get_time(start_time); + /*DRP-AI TVM[*1]::Second model Run inference*/ + runtime_2.Run(); + /*Gets AI Inference End Time*/ + me->get_time(end_time); + /*Inference End Time */ + ai_time_2 += (float)((me->timedifference_msec(start_time, end_time))); + print_measure_log("Cumulative Second AI Inference Time", ai_time_2, "ms"); + + /*Process to read the DRP-AI output data.*/ + /* DRP-AI TVM[*1]::Get the number of output of the target model. */ + auto output_num = runtime_2.GetNumOutput(); + drpai_output_buf.reset(new float[me->_outBuffSize_2], std::default_delete()); + size_count = 0; + /*GetOutput loop*/ + for (int i = 0;i(output_buffer). */ + int64_t output_size = std::get<2>(output_buffer); + /*Output Data Type = std::get<0>(output_buffer)*/ + if (InOutDataType::FLOAT16 == std::get<0>(output_buffer)) + { + /*Output Data = std::get<1>(output_buffer)*/ + uint16_t* data_ptr = reinterpret_cast(std::get<1>(output_buffer)); + for (int j = 0; j(output_buffer)) + { + /*Output Data = std::get<1>(output_buffer)*/ + float* data_ptr = reinterpret_cast(std::get<1>(output_buffer)); + for (int j = 0; jsend_app_message("For Second Model\nUnsupported Output data type: not floating point."); + ret = -1; + break; + } + size_count += output_size; + } + /*Error check in the GetOutput loop*/ + if (0 != ret) + { + break; + } + /*Fill AI Inference result structure*/ + data.predict_image = first_input_data; + data.predict_result = move(drpai_output_buf); + data.inf_time_ms = ai_time_2; + data.preproc_time_ms = preproc_time_2; + /*Post-process start (AI inference result postprocess + image compress + JSON data sending)*/ + me->inference_postprocess(arg, me->mode_2, data); + } + /*Error check in the Detection loop*/ + if (0 != ret) + { + break; + } + /*Image compress + JSON data sending*/ + me->send_result(arg, me->mode_2, data); + } + else + { + /*Image compress + JSON data sending*/ + me->send_result(arg, me->mode, data); + } Measuretime m("Deque inference_capture_qbuf buf time"); ret = capture->inference_capture_qbuf(); if (0 != ret) { fprintf(stderr, "[ERROR] Failed to enqueue _capture buffer.\n"); + me->send_app_message("Failed to enqueue _capture buffer.\nRestart the application."); break; } me->_ai_frame_count.store(me->_ai_frame_count.load() + 1); @@ -549,40 +817,37 @@ void* RecognizeBase::framerate_thread(void* arg) * @brief inference_preprocess * @details Preprocess * @param arg pointer to itself - * @param data inference result data + * @param model_id ID for model process to be run + * @param width new width of input data.
+ * @param height new height of input data. + * @param out out_ptr pre-processing result data + * @param out out_size size of out_ptr */ -void RecognizeBase::inference_preprocess(void* arg, float** out_ptr, uint32_t* out_size) +void RecognizeBase::inference_preprocess(void* arg,uint8_t model_id, uint32_t width, uint32_t height, float** out_ptr, uint32_t* out_size) { timespec start_time; timespec end_time; RecognizeBase* me = (RecognizeBase*)arg; Measuretime m("Pre process time"); - /*Select DRP-AI or CPU to run pre-processing */ - /*If mode is even number, DRP-AI. If it is odd number, CPU is selected. */ - if (0 == (me->mode & 1)) + if (me->mode != model_id) { - if (me->mode == MODE_TVM_HRNET_DRPAI) - { - _model->inf_pre_process_hrnet(me->input_data, me->capture_address, out_ptr, out_size); - } else - { - _model->inf_pre_process_drpai(me->capture_address, out_ptr, out_size); - } + _model_2->inf_pre_process(me->input_data, width, height, me->capture_address, out_ptr, out_size); } - else /*CPU Pre-processing*/ - { - _model->inf_pre_process_cpu(me->input_data, out_ptr); + else + { + _model->inf_pre_process(me->input_data, width, height, me->capture_address, out_ptr, out_size); } } /** * @brief inference_postprocess - * @details Postprocess and send command + * @details Postprocess * @param arg pointer to itself + * @param model_id ID for model process to be run * @param data inference result data */ -void RecognizeBase::inference_postprocess(void* arg, recognizeData_t& data) +void RecognizeBase::inference_postprocess(void* arg, uint8_t model_id, recognizeData_t& data) { timespec start_time; timespec end_time; @@ -592,10 +857,30 @@ void RecognizeBase::inference_postprocess(void* arg, recognizeData_t& data) me->get_time(start_time); { Measuretime m("Post process time"); - _model->inf_post_process(data.predict_result.get()); + if (me->mode != model_id) + { + _model_2->inf_post_process(data.predict_result.get()); + } + else + { + _model->inf_post_process(data.predict_result.get()); + } } me->get_time(end_time); - float post_time = (float)((me->timedifference_msec(start_time, end_time))); + data.postproc_time_ms += (float)((me->timedifference_msec(start_time, end_time))); + data.predict_result.reset(); +} + +/** + * @brief send_result + * @details Send command via http + * @param arg pointer to itself + * @param model_id ID for model processing to be run + * @param data inference result data + */ +void RecognizeBase::send_result(void* arg, uint8_t model_id, recognizeData_t& data) +{ + RecognizeBase* me = (RecognizeBase*)arg; string b64; shared_ptr notify; @@ -606,11 +891,26 @@ void RecognizeBase::inference_postprocess(void* arg, recognizeData_t& data) #endif #ifdef COUT_INFERENCE_RESULT_ON - _model->print_result(); + if (me->mode != model_id) + { + _model_2->print_result(); + } + else + { + _model->print_result(); + } #endif + { Measuretime m("Create predict result time"); - notify = _model->get_command(); + if (me->mode != model_id) + { + notify = _model_2->get_command(); + } + else + { + notify = _model->get_command(); + } } { @@ -619,16 +919,16 @@ void RecognizeBase::inference_postprocess(void* arg, recognizeData_t& data) notify->img = b64; notify->img_org_w = _model->_capture_w; notify->img_org_h = _model->_capture_h; - notify->drp_time = data.drp_time_ms; + notify->drp_time = data.inf_time_ms; notify->pre_time = data.preproc_time_ms; - notify->post_time = post_time; + notify->post_time = data.postproc_time_ms; /* Send websocket coomand*/ me->_server->send_command(notify->CreateRequest()); - + 
/*Reset postproc_time_ms*/ + data.postproc_time_ms = 0; } - data.predict_result.reset(); } /** @@ -772,3 +1072,45 @@ string RecognizeBase::get_send_image(uint8_t* image) } return b64; } + +/** + * @brief model_exist + * @details Check whether the Model Object files exist or not. + * @param dir path to directory of Model Object to be checked. + * @return int8_t non-zero if files exist + */ +int8_t RecognizeBase::model_exist(std::string dir) +{ + if (!file_exist(dir)) + { + fprintf(stderr, "[ERROR] Directory does not exist : dirname=%s\n", dir.c_str()); + return 0; + } + for (int i = 0;i #include "../includes.h" -// #include "../recognize/tvm/MeraDrpRuntimeWrapper.h" -// #include "../recognize/tvm/PreRuntime.h" #include "../camera/camera.h" #include "../util/system_analyzer.h" #include "../ws_server.h" #include "common/recognize_define.h" #include "common/MeraDrpRuntimeWrapper.h" -// #include "common/PreRuntime.h" #include "irecognize_model.h" #include "recognize_data.h" +#include "../command/app_message.h" +/*For two models processing*/ +#include "../command/object_detection.h" + #define WAIT_TIME (1000) /* microseconds */ /*Timer Related*/ @@ -81,23 +82,30 @@ class RecognizeBase ~RecognizeBase() {} int32_t initialize(IRecognizeModel* model); + /*For running two models in 1 loop.*/ + int32_t initialize(IRecognizeModel* model, IRecognizeModel* model_2); + virtual int32_t recognize_start(); virtual void recognize_end(); - private: static void* capture_thread(void* arg); static void* tvm_inference_thread(void* arg); static void* framerate_thread(void* arg); - void inference_preprocess(void* arg, float** out_ptr, uint32_t* out_size); - void inference_postprocess(void* arg, recognizeData_t& data); + void inference_preprocess(void* arg, uint8_t model_id, uint32_t width, uint32_t height, float** out_ptr, uint32_t* out_size); + void inference_postprocess(void* arg, uint8_t model_id, recognizeData_t& data); + void send_result(void* arg, uint8_t model_id, recognizeData_t& data); int32_t end_all_threads(); void close_camera(); int8_t wait_join(pthread_t* p_join_thread, uint32_t join_time); double timedifference_msec(struct timespec t0, struct timespec t1); - int32_t get_time(timespec& time_t); + int32_t get_time(timespec& time_t); string get_send_image(uint8_t* image); + void send_app_message(string message); + + int8_t file_exist(std::string filename); + int8_t model_exist(std::string dir); private: @@ -142,6 +150,31 @@ class RecognizeBase LinuxSystemAnalyzer _analyzer; + /*For two model application*/ + shared_ptr _model_2; + + std::string dir_2; + int32_t cap_w_2; + int32_t cap_h_2; + int32_t cap_c_2; + int32_t model_w_2; + int32_t model_h_2; + int32_t model_c_2; + uint8_t mode_2 = MODE_TVM_UNKNOWN; + int32_t _outBuffSize_2; + + constexpr static int32_t YUY2_NUM_CHANNEL = (2); + constexpr static int32_t YUY2_NUM_DATA = (4); + + constexpr static int8_t MODEL_OBJ_NUM = (3); + std::string model_obj_names[MODEL_OBJ_NUM] = + { + "deploy.json", + "deploy.params", + "deploy.so" + }; + + }; #endif diff --git a/how-to/sample_app/src/recognize/recognize_data.h b/how-to/sample_app/src/recognize/recognize_data.h index 2498765..f783b54 100644 --- a/how-to/sample_app/src/recognize/recognize_data.h +++ b/how-to/sample_app/src/recognize/recognize_data.h @@ -18,7 +18,7 @@ ***********************************************************************************************************************/ /*********************************************************************************************************************** * File 
Name : recognize_data.h -* Version : 1.0.2 +* Version : 1.0.3 * Description : RZ/V2MA DRP-AI TVM[*1] Sample Application for USB Camera HTTP version * *1 DRP-AI TVM is powered by EdgeCortix MERA(TM) Compiler Framework. ***********************************************************************************************************************/ @@ -39,14 +39,19 @@ struct recognizeData_t */ shared_ptr predict_result; /** - * @brief drp processig time * */ - float drp_time_ms; + * @brief inference processing time + * + */ + float inf_time_ms; /** - * @brief drp pre-processig time + * @brief pre-processing time * */ float preproc_time_ms; + /** + * @brief post-processing time + * + */ + float postproc_time_ms; }; #endif //RECOGNIZE_DATA_H diff --git a/how-to/sample_app/src/recognize/ultraface/tvm_drpai_ultraface.cpp b/how-to/sample_app/src/recognize/ultraface/tvm_drpai_ultraface.cpp index e982759..b0b732d 100644 --- a/how-to/sample_app/src/recognize/ultraface/tvm_drpai_ultraface.cpp +++ b/how-to/sample_app/src/recognize/ultraface/tvm_drpai_ultraface.cpp @@ -18,7 +18,7 @@ ***********************************************************************************************************************/ /*********************************************************************************************************************** * File Name : tvm_drpai_ultraface.cpp -* Version : 1.0.2 +* Version : 1.0.3 * Description : RZ/V2MA DRP-AI TVM[*1] Sample Application for USB Camera HTTP version * *1 DRP-AI TVM is powered by EdgeCortix MERA(TM) Compiler Framework. ***********************************************************************************************************************/ @@ -48,32 +48,33 @@ TVM_UltraFace_DRPAI::TVM_UltraFace_DRPAI() : IRecognizeModel(TVM_MODEL_OUT_SIZE, model_dir = TVM_MODEL_DIR; std::cout << "DRP-AI TVM UltraFace model" << std::endl; } - /** - * @brief inf_pre_process_drpai - * @details Run pre-processing using Pre-processing Runtime (DRP-AI) + * @brief inf_pre_process + * @details Run pre-processing. + * @details For CPU input, use input_data for input data. + * @details For DRP-AI input, use addr for input data stored address + * @param input_data Input data pointer + * @param width new input data width. + * @param height new input data height.
* @param addr Physical address of input data buffer * @param out output_buf Output data buffer pointer holder * @param out buf_size Output data buffer size holder * @return int32_t success:0 error: != 0 */ -int32_t TVM_UltraFace_DRPAI::inf_pre_process_drpai(uint32_t addr, float** arg, uint32_t* buf_size) +int32_t TVM_UltraFace_DRPAI:: inf_pre_process(uint8_t* input_data, uint32_t width, uint32_t height, uint32_t addr, float** arg, uint32_t* buf_size) { + /*Update width and height*/ + if ((width != _capture_w) || (height != _capture_h)) + { + _capture_w = width; + _capture_h = height; + in_param.pre_in_shape_w = _capture_w; + in_param.pre_in_shape_h = _capture_h; + } + pre_process_drpai(addr, arg, buf_size); return 0; } -/** - * @brief inf_pre_process_cpu - * @details Run pre-processing using CPU - * @param input_data Input data pointer - * @param out output_buf Output data buffer pointer holder - * @return int32_t success:0 error: != 0 - */ -int32_t TVM_UltraFace_DRPAI:: inf_pre_process_cpu(uint8_t* input_data, float** output_buf) -{ - /*Do nothing*/ - return 0; -} /** * @brief inf_post_process * @details implementation post process @@ -94,13 +95,13 @@ int32_t TVM_UltraFace_DRPAI::inf_post_process(float* arg) */ int32_t TVM_UltraFace_DRPAI::print_result() { - YoloCommon::print_boxes(postproc_data, label_file_map); + ObjectDetectionFunc::print_boxes(postproc_data, label_file_map); return 0; } /** * @brief get_command * @details Prepare the command to send via HTTP - * @return shared_ptr Pose detection result data + * @return shared_ptr Result data */ shared_ptr TVM_UltraFace_DRPAI::get_command() { @@ -181,5 +182,11 @@ int8_t TVM_UltraFace_DRPAI::post_process(std::vector& det, float* flo } /* Non-Maximum Supression filter */ filter_boxes_nms(det, det.size(), ULTRAFACE_TH_NMS); + + /*For pre face detection */ + /*Note that for running single UltraFace, this process is not required. */ + detected_data.resize(det.size()); + std::copy(det.begin(), det.end(), detected_data.begin()); + return 0; } diff --git a/how-to/sample_app/src/recognize/ultraface/tvm_drpai_ultraface.h b/how-to/sample_app/src/recognize/ultraface/tvm_drpai_ultraface.h index 9da7bd6..e9e4959 100644 --- a/how-to/sample_app/src/recognize/ultraface/tvm_drpai_ultraface.h +++ b/how-to/sample_app/src/recognize/ultraface/tvm_drpai_ultraface.h @@ -18,7 +18,7 @@ ***********************************************************************************************************************/ /*********************************************************************************************************************** * File Name : tvm_drpai_ultraface.h -* Version : 1.0.2 +* Version : 1.0.3 * Description : RZ/V2MA DRP-AI TVM[*1] Sample Application for USB Camera HTTP version * *1 DRP-AI TVM is powered by EdgeCortix MERA(TM) Compiler Framework. 
***********************************************************************************************************************/ @@ -35,9 +35,9 @@ #include "../../includes.h" #include "../common/box.h" #include "../common/functions.h" -#include "../common/yolo_common.h" -#include "../command/object_detection.h" +#include "../common/object_detection.h" #include "../common/PreRuntime.h" +#include "../command/object_detection.h" class TVM_UltraFace_DRPAI : public IRecognizeModel { @@ -63,8 +63,8 @@ class TVM_UltraFace_DRPAI : public IRecognizeModel constexpr static float ULTRAFACE_TH_NMS = (0.5);//from ONNX Model Zoo public: TVM_UltraFace_DRPAI(); - virtual int32_t inf_pre_process_drpai(uint32_t addr, float** arg, uint32_t* buf_size); - virtual int32_t inf_pre_process_cpu(uint8_t* input_data, float** output_buf); + virtual int32_t inf_pre_process + (uint8_t* input_data, uint32_t width, uint32_t height, uint32_t addr, float** arg, uint32_t* buf_size); virtual int32_t inf_post_process(float* arg); virtual shared_ptr get_command(); virtual int32_t print_result(); diff --git a/how-to/sample_app/src/recognize/yolo/tvm_drpai_yolo.cpp b/how-to/sample_app/src/recognize/yolo/tvm_drpai_yolo.cpp index 404caf7..6cdc65d 100644 --- a/how-to/sample_app/src/recognize/yolo/tvm_drpai_yolo.cpp +++ b/how-to/sample_app/src/recognize/yolo/tvm_drpai_yolo.cpp @@ -18,7 +18,7 @@ ***********************************************************************************************************************/ /*********************************************************************************************************************** * File Name : tvm_drpai_yolo.cpp -* Version : 1.0.2 +* Version : 1.0.3 * Description : RZ/V2MA DRP-AI TVM[*1] Sample Application for USB Camera HTTP version * *1 DRP-AI TVM is powered by EdgeCortix MERA(TM) Compiler Framework. ***********************************************************************************************************************/ @@ -48,7 +48,7 @@ TVM_YOLO_DRPAI::TVM_YOLO_DRPAI(uint8_t id) : IRecognizeModel(0, TVM_MODEL_DIR_YO /*Load label list for YOLOv3/TinyYOLOv3 */ if (id == MODE_TVM_YOLOV3_DRPAI || id == MODE_TVM_TINYYOLOV3_DRPAI ) { - label_file_map = YoloCommon::load_label_file(LABEL_LIST.data()); + label_file_map = CommonFunc::load_label_file(LABEL_LIST.data()); } num_class = label_file_map.size(); @@ -139,37 +139,38 @@ TVM_YOLO_DRPAI::TVM_YOLO_DRPAI(uint8_t id) : IRecognizeModel(0, TVM_MODEL_DIR_YO outBuffSize = num_inf_out; } - /** - * @brief inf_pre_process_drpai - * @details Run pre-processing using Pre-processing Runtime (DRP-AI) + * @brief inf_pre_process + * @details Run pre-processing. + * @details For CPU input, use input_data for input data. + * @details For DRP-AI input, use addr for input data stored address + * @param input_data Input data pointer + * @param width new input data width. + * @param height new input data height.
* @param addr Physical address of input data buffer * @param out output_buf Output data buffer pointer holder * @param out buf_size Output data buffer size holder * @return int32_t success:0 error: != 0 */ -int32_t TVM_YOLO_DRPAI::inf_pre_process_drpai(uint32_t addr, float** arg, uint32_t* buf_size) +int32_t TVM_YOLO_DRPAI:: inf_pre_process(uint8_t* input_data, uint32_t width, uint32_t height, uint32_t addr, float** arg, uint32_t* buf_size) { + /*Update width and height*/ + if ((width != _capture_w) || (height != _capture_h)) + { + _capture_w = width; + _capture_h = height; + in_param.pre_in_shape_w = _capture_w; + in_param.pre_in_shape_h = _capture_h; + } + pre_process_drpai(addr, arg, buf_size); return 0; } -/** - * @brief inf_pre_process_cpu - * @details Run pre-processing using CPU - * @param input_data Input data pointer - * @param out output_buf Output data buffer pointer holder - * @return int32_t success:0 error: != 0 - */ -int32_t TVM_YOLO_DRPAI:: inf_pre_process_cpu(uint8_t* input_data, float** output_buf) -{ - /*Do nothing*/ - return 0; -} /** * @brief inf_post_process - * @details implementation post process - * @param arg - * @return int32_t + * @details Run post-processing + * @param arg Inference output data pointer + * @return int32_t success:0 error: != 0 */ int32_t TVM_YOLO_DRPAI::inf_post_process(float* arg) { @@ -185,13 +186,13 @@ int32_t TVM_YOLO_DRPAI::inf_post_process(float* arg) */ int32_t TVM_YOLO_DRPAI::print_result() { - YoloCommon::print_boxes(postproc_data, label_file_map); + ObjectDetectionFunc::print_boxes(postproc_data, label_file_map); return 0; } /** * @brief get_command * @details Prepare the command to send via HTTP - * @return shared_ptr Pose detection result data + * @return shared_ptr Result data */ shared_ptr TVM_YOLO_DRPAI::get_command() { @@ -295,12 +296,12 @@ int8_t TVM_YOLO_DRPAI::post_process(std::vector& det, float* floatarr { for (x = 0; x < num_grid; x++) { - offs = YoloCommon::yolo_offset(n, b, y, x, num_grids.data(), num_bb, label_file_map.size()); + offs = ObjectDetectionFunc::yolo_offset(n, b, y, x, num_grids.data(), num_bb, label_file_map.size()); tx = floatarr[offs]; - ty = floatarr[YoloCommon::yolo_index(num_grid, offs, 1)]; - tw = floatarr[YoloCommon::yolo_index(num_grid, offs, 2)]; - th = floatarr[YoloCommon::yolo_index(num_grid, offs, 3)]; - tc = floatarr[YoloCommon::yolo_index(num_grid, offs, 4)]; + ty = floatarr[ObjectDetectionFunc::yolo_index(num_grid, offs, 1)]; + tw = floatarr[ObjectDetectionFunc::yolo_index(num_grid, offs, 2)]; + th = floatarr[ObjectDetectionFunc::yolo_index(num_grid, offs, 3)]; + tc = floatarr[ObjectDetectionFunc::yolo_index(num_grid, offs, 4)]; /* Compute the bounding box */ /*get_yolo_box/get_region_box in paper implementation*/ @@ -338,11 +339,11 @@ int8_t TVM_YOLO_DRPAI::post_process(std::vector& det, float* floatarr { if (_id == MODE_TVM_YOLOV3_DRPAI ||_id == MODE_TVM_TINYYOLOV3_DRPAI ) { - classes[i] = CommonFunc::sigmoid(floatarr[YoloCommon::yolo_index(num_grid, offs, 5 + i)]); + classes[i] = CommonFunc::sigmoid(floatarr[ObjectDetectionFunc::yolo_index(num_grid, offs, 5 + i)]); } else // For YOLOv2/TinyYOLOv2 { - classes[i] = floatarr[YoloCommon::yolo_index(num_grid, offs, 5 + i)]; + classes[i] = floatarr[ObjectDetectionFunc::yolo_index(num_grid, offs, 5 + i)]; } } } diff --git a/how-to/sample_app/src/recognize/yolo/tvm_drpai_yolo.h b/how-to/sample_app/src/recognize/yolo/tvm_drpai_yolo.h index 1e4cf6f..3b3e8fd 100644 --- a/how-to/sample_app/src/recognize/yolo/tvm_drpai_yolo.h +++ 
b/how-to/sample_app/src/recognize/yolo/tvm_drpai_yolo.h @@ -18,7 +18,7 @@ ***********************************************************************************************************************/ /*********************************************************************************************************************** * File Name : tvm_drpai_yolo.h -* Version : 1.0.2 +* Version : 1.0.3 * Description : RZ/V2MA DRP-AI TVM[*1] Sample Application for USB Camera HTTP version * *1 DRP-AI TVM is powered by EdgeCortix MERA(TM) Compiler Framework. ***********************************************************************************************************************/ @@ -35,9 +35,9 @@ #include "../../includes.h" #include "../common/box.h" #include "../common/functions.h" -#include "../common/yolo_common.h" -#include "../command/object_detection.h" +#include "../common/object_detection.h" #include "../common/PreRuntime.h" +#include "../command/object_detection.h" class TVM_YOLO_DRPAI : public IRecognizeModel { @@ -73,8 +73,8 @@ class TVM_YOLO_DRPAI : public IRecognizeModel public: TVM_YOLO_DRPAI(); TVM_YOLO_DRPAI(uint8_t id); - virtual int32_t inf_pre_process_drpai(uint32_t addr, float** arg, uint32_t* buf_size); - virtual int32_t inf_pre_process_cpu(uint8_t* input_data, float** output_buf); + virtual int32_t inf_pre_process + (uint8_t* input_data, uint32_t width, uint32_t height, uint32_t addr, float** arg, uint32_t* buf_size); virtual int32_t inf_post_process(float* arg); virtual shared_ptr get_command(); virtual int32_t print_result(); diff --git a/how-to/sample_app/src/recognize_proc.cpp b/how-to/sample_app/src/recognize_proc.cpp index 96cffec..203dfac 100644 --- a/how-to/sample_app/src/recognize_proc.cpp +++ b/how-to/sample_app/src/recognize_proc.cpp @@ -18,7 +18,7 @@ ***********************************************************************************************************************/ /*********************************************************************************************************************** * File Name : recognize_proc.cpp -* Version : 1.0.2 +* Version : 1.0.3 * Description : RZ/V2MA DRP-AI TVM[*1] Sample Application for USB Camera HTTP version * *1 DRP-AI TVM is powered by EdgeCortix MERA(TM) Compiler Framework. 
***********************************************************************************************************************/ @@ -85,7 +85,20 @@ void RecognizeProc::switch_model(std::string model) } else if ("TVM_DRPAI_HRNET" == model) { - p_recog_base->initialize(new TVM_HRNET_DRPAI()); + p_recog_base->initialize(new TVM_HRNET_DRPAI(MODE_TVM_HRNET_DRPAI)); + } + else if ("TVM_DRPAI_HRNETV2" == model) + { + p_recog_base->initialize(new TVM_HRNET_DRPAI(MODE_TVM_HRNETV2_DRPAI)); + } + else if ("TVM_DRPAI_GOOGLENET" == model) + { + p_recog_base->initialize(new TVM_GoogleNet_DRPAI()); + } + else if ("TVM_DRPAI_EMOTIONFP" == model) + { + /* Run face detection and then run emotion estimation */ + p_recog_base->initialize(new TVM_UltraFace_DRPAI(), new TVM_EmotionFP_DRPAI()); } p_recog_base->recognize_start(); diff --git a/how-to/sample_app/src/recognize_proc.h b/how-to/sample_app/src/recognize_proc.h index 9d8ac1a..e4d11fb 100644 --- a/how-to/sample_app/src/recognize_proc.h +++ b/how-to/sample_app/src/recognize_proc.h @@ -18,7 +18,7 @@ ***********************************************************************************************************************/ /*********************************************************************************************************************** * File Name : recognize_proc.h -* Version : 1.0.2 +* Version : 1.0.3 * Description : RZ/V2MA DRP-AI TVM[*1] Sample Application for USB Camera HTTP version * *1 DRP-AI TVM is powered by EdgeCortix MERA(TM) Compiler Framework. ***********************************************************************************************************************/ @@ -37,6 +37,8 @@ #include "recognize/yolo/tvm_drpai_yolo.h" #include "recognize/ultraface/tvm_drpai_ultraface.h" #include "recognize/hrnet/tvm_drpai_hrnet.h" +#include "recognize/googlenet/tvm_drpai_googlenet.h" +#include "recognize/emotionfp/tvm_drpai_emotionfp.h" using namespace std; class RecognizeProc diff --git a/how-to/sample_app/src/util/measure_time.h b/how-to/sample_app/src/util/measure_time.h index 82e3c1a..143c7a6 100644 --- a/how-to/sample_app/src/util/measure_time.h +++ b/how-to/sample_app/src/util/measure_time.h @@ -18,7 +18,7 @@ ***********************************************************************************************************************/ /*********************************************************************************************************************** * File Name : measure_time.h -* Version : 1.0.2 +* Version : 1.0.3 * Description : RZ/V2MA DRP-AI TVM[*1] Sample Application for USB Camera HTTP version * *1 DRP-AI TVM is powered by EdgeCortix MERA(TM) Compiler Framework. ***********************************************************************************************************************/ diff --git a/how-to/sample_app/src/util/string_formatter.h b/how-to/sample_app/src/util/string_formatter.h index 3c5f7a4..4c8a878 100644 --- a/how-to/sample_app/src/util/string_formatter.h +++ b/how-to/sample_app/src/util/string_formatter.h @@ -18,7 +18,7 @@ ***********************************************************************************************************************/ /*********************************************************************************************************************** * File Name : string_formatter.h -* Version : 1.0.2 +* Version : 1.0.3 * Description : RZ/V2MA DRP-AI TVM[*1] Sample Application for USB Camera HTTP version * *1 DRP-AI TVM is powered by EdgeCortix MERA(TM) Compiler Framework. 
***********************************************************************************************************************/ diff --git a/how-to/sample_app/src/util/system_analyzer.cpp b/how-to/sample_app/src/util/system_analyzer.cpp index 2e6d4b3..901cdf1 100644 --- a/how-to/sample_app/src/util/system_analyzer.cpp +++ b/how-to/sample_app/src/util/system_analyzer.cpp @@ -18,7 +18,7 @@ ***********************************************************************************************************************/ /*********************************************************************************************************************** * File Name : system_analyzer.cpp -* Version : 1.0.2 +* Version : 1.0.3 * Description : RZ/V2MA DRP-AI TVM[*1] Sample Application for USB Camera HTTP version * *1 DRP-AI TVM is powered by EdgeCortix MERA(TM) Compiler Framework. ***********************************************************************************************************************/ diff --git a/how-to/sample_app/src/util/system_analyzer.h b/how-to/sample_app/src/util/system_analyzer.h index c41a443..7509afd 100644 --- a/how-to/sample_app/src/util/system_analyzer.h +++ b/how-to/sample_app/src/util/system_analyzer.h @@ -18,7 +18,7 @@ ***********************************************************************************************************************/ /*********************************************************************************************************************** * File Name : system_analyzer.h -* Version : 1.0.2 +* Version : 1.0.3 * Description : RZ/V2MA DRP-AI TVM[*1] Sample Application for USB Camera HTTP version * *1 DRP-AI TVM is powered by EdgeCortix MERA(TM) Compiler Framework. ***********************************************************************************************************************/ diff --git a/how-to/sample_app/src/ws_server.cpp b/how-to/sample_app/src/ws_server.cpp index c8c578a..38b8b1b 100644 --- a/how-to/sample_app/src/ws_server.cpp +++ b/how-to/sample_app/src/ws_server.cpp @@ -18,7 +18,7 @@ ***********************************************************************************************************************/ /*********************************************************************************************************************** * File Name : ws_server.cpp -* Version : 1.0.2 +* Version : 1.0.3 * Description : RZ/V2MA DRP-AI TVM[*1] Sample Application for USB Camera HTTP version * *1 DRP-AI TVM is powered by EdgeCortix MERA(TM) Compiler Framework. ***********************************************************************************************************************/ diff --git a/how-to/sample_app/src/ws_server.h b/how-to/sample_app/src/ws_server.h index 87af953..7afe232 100644 --- a/how-to/sample_app/src/ws_server.h +++ b/how-to/sample_app/src/ws_server.h @@ -18,7 +18,7 @@ ***********************************************************************************************************************/ /*********************************************************************************************************************** * File Name : ws_server.h -* Version : 1.0.2 +* Version : 1.0.3 * Description : RZ/V2MA DRP-AI TVM[*1] Sample Application for USB Camera HTTP version * *1 DRP-AI TVM is powered by EdgeCortix MERA(TM) Compiler Framework. 
***********************************************************************************************************************/ diff --git a/img/sample_app_page.png b/img/sample_app_page.png index f8d94e4..1538c3c 100644 Binary files a/img/sample_app_page.png and b/img/sample_app_page.png differ diff --git a/setup/README.md b/setup/README.md index 8a93740..7933dbf 100644 --- a/setup/README.md +++ b/setup/README.md @@ -53,16 +53,13 @@ pip3 install --upgrade pip apt-get -y install unzip vim pip3 install decorator attrs scipy numpy pytest onnx==1.9.0 pip3 install torch==1.8.0 torchvision==0.9.0 -``` -Installing ONNX Runtime Library from precompiled release package. -```sh # Install onnx runtime wget https://github.com/microsoft/onnxruntime/releases/download/v1.8.1/onnxruntime-linux-x64-1.8.1.tgz -O /tmp/onnxruntime.tar.gz tar -xvzf /tmp/onnxruntime.tar.gz -C /tmp/ mv /tmp/onnxruntime-linux-x64-1.8.1/ /opt/ ``` -Setup DRP-AI TVM[^1] environment. +### 4. Setup DRP-AI TVM[^1] environment. ```sh cd <.../drp-ai_tvm> bash setup/make_drp_env.sh
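
For readers following the tvm_drpai_yolo changes above, the sketch below shows one hypothetical way a caller could drive the unified `inf_pre_process()` entry point that replaces the separate `inf_pre_process_drpai`/`inf_pre_process_cpu` methods. Only the class name, the method signatures, and the `MODE_TVM_YOLOV3_DRPAI` identifier come from this patch; the include path, frame dimensions, physical address, and the inference step are placeholder assumptions, and in the sample application this wiring is performed inside the recognize framework rather than by user code.

```cpp
/* Illustrative sketch only - not part of this patch.                        */
#include <cstdint>

#include "recognize/yolo/tvm_drpai_yolo.h"   /* assumed include path; MODE_* IDs
                                                assumed visible via app headers */

int32_t process_one_frame(uint8_t* frame, uint32_t frame_w, uint32_t frame_h,
                          uint32_t frame_phys_addr)
{
    TVM_YOLO_DRPAI yolo(MODE_TVM_YOLOV3_DRPAI);

    float*   pre_out  = nullptr;  /* pre-processed tensor, filled by callee */
    uint32_t pre_size = 0;        /* its size, filled by callee             */

    /* Single entry point for both CPU and DRP-AI input: the CPU buffer, the
     * current capture size, and the DRP-AI physical address are passed in one
     * call; a changed width/height updates the pre-processing input shape.  */
    if (yolo.inf_pre_process(frame, frame_w, frame_h, frame_phys_addr,
                             &pre_out, &pre_size) != 0)
    {
        return -1;
    }

    /* Inference itself (TVM runtime) is omitted here; assume it produced a
     * float buffer laid out as the YOLO output tensor.                      */
    float* inference_out = pre_out;  /* placeholder, not a real result       */

    if (yolo.inf_post_process(inference_out) != 0)
    {
        return -1;
    }
    return yolo.print_result();
}
```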