@@ -3,27 +3,21 @@ for some high-level background about deployment.

This directory contains the following examples:

-1. An example script `export_model.py` (previously called `caffe2_converter.py`)
+1. An example script `export_model.py`
that exports a detectron2 model for deployment using different methods and formats.

-2. A few C++ examples that run inference with Mask R-CNN model in Caffe2/TorchScript format.
+2. A C++ example that runs inference with a Mask R-CNN model in TorchScript format.

## Build
-All C++ examples depend on libtorch and OpenCV. Some require more dependencies:
-
-* Running caffe2-format models requires:
-  * libtorch built with caffe2 inside
-  * gflags, glog
-  * protobuf library that matches the version used by PyTorch (version defined in `include/caffe2/proto/caffe2.pb.h` of your PyTorch installation)
-  * MKL headers if caffe2 is built with MKL
-* Running TorchScript-format models produced by `--export-method=caffe2_tracing` requires no other dependencies.
-* Running TorchScript-format models produced by `--export-method=tracing` requires libtorchvision (C++ library of torchvision).
-
-We build all examples with one `CMakeLists.txt` that requires all the above dependencies.
-Adjust it if you only need one example.
-As a reference,
-we provide a [Dockerfile](../../docker/deploy.Dockerfile) that
-installs all the above dependencies and builds the C++ examples.
+Deployment depends on libtorch and OpenCV. Some export methods require more dependencies:
+
+* Running TorchScript-format models produced by `--export-method=caffe2_tracing` requires libtorch
+  to be built with caffe2 enabled.
+* Running TorchScript-format models produced by `--export-method=tracing/scripting` requires libtorchvision (the C++ library of torchvision).
+
+All export methods are supported in one C++ file that requires all the above dependencies.
+Adjust it and remove the code you don't need.
+As a reference, we provide a [Dockerfile](../../docker/deploy.Dockerfile) that installs all the above dependencies and builds the C++ example.

## Use

@@ -59,26 +53,14 @@ We show a few example commands to export and execute a Mask R-CNN model in C++.
```


-* `export-method=caffe2_tracing, format=caffe2` (caffe2 format will be deprecated):
-```
-./export_model.py --config-file ../../configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml \
-    --output ./output --export-method caffe2_tracing --format caffe2 \
-    MODEL.WEIGHTS detectron2://COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x/137849600/model_final_f10217.pkl \
-    MODEL.DEVICE cpu
-
-./build/caffe2_mask_rcnn --predict_net=output/model.pb --init_net=output/model_init.pb --input=input.jpg
-```
-
-
## Notes:

1. Tracing/Caffe2-tracing requires valid weights & sample inputs.
Therefore the above commands require pre-trained models and the [COCO dataset](https://detectron2.readthedocs.io/tutorials/builtin_datasets.html).
You can modify the script to obtain sample inputs in other ways instead of from COCO.

-2. `--run-eval` is implemented only for certain modes
-(caffe2_tracing with caffe2 format, or tracing with torchscript format)
+2. `--run-eval` is implemented only for tracing mode
to evaluate the exported model using the dataset in the config.
It's recommended to always verify the accuracy in case the conversion is not successful.
Evaluation can be slow if the model is exported to CPU or the dataset is too large ("coco_2017_val_100" is a small subset of COCO useful for evaluation).
-Caffe2 accuracy may be slightly different (within 0.1 AP) from original model due to numerical precisions between different runtime.
+`caffe2_tracing` accuracy may be slightly different (within 0.1 AP) from the original model due to numerical precision differences between runtimes.
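
For reference, the surviving `tracing` path can be exercised with commands along these lines. This is a sketch modeled on the removed caffe2 example above: the config file, weights URL, and `export_model.py` flags are taken from that example, while the output filename (`output/model.ts`) and the C++ binary name (`torchscript_mask_rcnn`) are assumptions, not taken from this diff.

```shell
# Export a COCO-pretrained Mask R-CNN to TorchScript via tracing
# (flags mirror the removed caffe2 example, with method/format swapped).
./export_model.py --config-file ../../configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml \
    --output ./output --export-method tracing --format torchscript \
    MODEL.WEIGHTS detectron2://COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x/137849600/model_final_f10217.pkl \
    MODEL.DEVICE cpu

# Run the C++ example on the exported model
# (binary name and argument order are hypothetical).
./build/torchscript_mask_rcnn output/model.ts input.jpg
```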