Model | Download | Download (with sample test data) | ONNX version | Opset version | Top-1 accuracy (%) |
---|---|---|---|---|---|
DenseNet-121 | 32 MB | 33 MB | 1.1 | 3 | |
DenseNet-121 | 32 MB | 33 MB | 1.1.2 | 6 | |
DenseNet-121 | 32 MB | 33 MB | 1.2 | 7 | |
DenseNet-121 | 32 MB | 33 MB | 1.3 | 8 | |
DenseNet-121 | 32 MB | 33 MB | 1.4 | 9 | |
DenseNet-121-12 | 32 MB | 30 MB | 1.9 | 12 | 60.96 |
DenseNet-121-12-int8 | 9 MB | 6 MB | 1.9 | 12 | 60.20 |
Compared with DenseNet-121-12, DenseNet-121-12-int8's Top-1 accuracy drop ratio is 1.25% and its performance improvement is 1.18x.
Note that performance depends on the test hardware. The performance data here was collected on an Intel® Xeon® Platinum 8280 Processor (1 socket, 4 cores per instance) running CentOS Linux 8.3, with batch size 1.
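As a rough way to reproduce a latency comparison like this, the sketch below times single-batch inference for the fp32 and int8 models with onnxruntime. The local file names `densenet-12.onnx` and `densenet-12-int8.onnx` are assumptions, and absolute numbers will vary with hardware as noted above.

```python
# Hedged latency-comparison sketch; the two file names below are assumptions.
import time
import numpy as np
import onnxruntime as ort

def avg_latency_ms(model_path, runs=100):
    sess = ort.InferenceSession(model_path, providers=["CPUExecutionProvider"])
    x = np.random.rand(1, 3, 224, 224).astype(np.float32)  # batch size 1, as above
    name = sess.get_inputs()[0].name
    for _ in range(10):           # warm-up runs, excluded from timing
        sess.run(None, {name: x})
    start = time.perf_counter()
    for _ in range(runs):
        sess.run(None, {name: x})
    return (time.perf_counter() - start) / runs * 1000

fp32 = avg_latency_ms("densenet-12.onnx")
int8 = avg_latency_ms("densenet-12-int8.onnx")
print(f"fp32: {fp32:.2f} ms  int8: {int8:.2f} ms  speedup: {fp32 / int8:.2f}x")
```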
DenseNet-121 is a convolutional neural network for image classification, described in the paper Densely Connected Convolutional Networks.
Source: Caffe2 DenseNet-121 ==> ONNX DenseNet
- Input: `data_0: float[1, 3, 224, 224]`
- Output: `fc6_1: float[1, 1000, 1, 1]`
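For illustration, here is a minimal onnxruntime inference sketch against these tensor names and shapes. The local file name `densenet-12.onnx` is an assumption, and the ImageNet-style normalization constants are a common convention rather than something this README specifies.

```python
import numpy as np
import onnxruntime as ort

# Stand-in for a real image already resized/cropped to 224x224 (HWC, float32).
img = np.random.rand(224, 224, 3).astype(np.float32)

# Generic ImageNet-style per-channel normalization (assumed, not from this README).
mean = np.array([0.485, 0.456, 0.406], dtype=np.float32)
std = np.array([0.229, 0.224, 0.225], dtype=np.float32)
x = ((img - mean) / std).transpose(2, 0, 1)[np.newaxis]  # -> float[1, 3, 224, 224]

sess = ort.InferenceSession("densenet-12.onnx")          # assumed local file name
(scores,) = sess.run(["fc6_1"], {"data_0": x})           # -> float[1, 1000, 1, 1]
print("top-1 class index:", int(scores.reshape(-1).argmax()))
```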
Randomly generated sample test data:
- test_data_0.npz
- test_data_1.npz
- test_data_2.npz
- test_data_set_0
- test_data_set_1
- test_data_set_2
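A hedged sketch of how such archives are typically read: the `.npz` files hold named NumPy arrays, while the `test_data_set_*` directories usually follow the ONNX model zoo layout of serialized `TensorProto` files (`input_0.pb` / `output_0.pb`); that layout is an assumption here.

```python
import numpy as np
import onnx
from onnx import numpy_helper

# .npz archives: named numpy arrays (inspect the keys to see what is stored).
data = np.load("test_data_0.npz")
print(data.files)

# test_data_set_* directories: serialized onnx.TensorProto files
# (input_0.pb / output_0.pb is the usual model-zoo convention, assumed here).
tensor = onnx.TensorProto()
with open("test_data_set_0/input_0.pb", "rb") as f:
    tensor.ParseFromString(f.read())
x = numpy_helper.to_array(tensor)
print(x.shape)  # expected: (1, 3, 224, 224)
```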
DenseNet-121-12-int8 is obtained by quantizing the fp32 DenseNet-121-12 model. We use Intel® Neural Compressor with the onnxruntime backend to perform quantization. View the instructions to understand how to use Intel® Neural Compressor for quantization.
Environment: onnx 1.9.0, onnxruntime 1.10.0
wget https://github.com/onnx/models/raw/main/vision/classification/densenet-121/model/densenet-12.onnx
bash run_tuning.sh --input_model=path/to/model \
                   --config=densenet.yaml \
                   --output_model=path/to/save  # --input_model is the model path as *.onnx
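After tuning, one simple sanity check is comparing the int8 model's predictions against the fp32 model on the same input; a minimal sketch, assuming `densenet-12.onnx` is the fp32 model and `densenet-12-int8.onnx` is the saved output of `run_tuning.sh`:

```python
import numpy as np
import onnxruntime as ort

x = np.random.rand(1, 3, 224, 224).astype(np.float32)

fp32 = ort.InferenceSession("densenet-12.onnx")       # assumed fp32 file name
int8 = ort.InferenceSession("densenet-12-int8.onnx")  # assumed quantized output

ref = fp32.run(None, {fp32.get_inputs()[0].name: x})[0]
out = int8.run(None, {int8.get_inputs()[0].name: x})[0]

print("top-1 agrees:", int(ref.argmax()) == int(out.argmax()))
print("max abs diff:", float(np.abs(ref - out).max()))
```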
- mengniwang95 (Intel)
- airMeng (Intel)
- ftian1 (Intel)
- hshen14 (Intel)
License: MIT