Commit 7916a85 ("code release")
Parent: 4621842

File tree: 103 files changed (+11254, -13 lines)

Note: only part of this large commit's content is rendered below.


.gitignore (+2, -4)

````diff
@@ -1,6 +1,4 @@
-.flake8
-*.bak
-*.pyc
+efficientvit_l3_300d/
 __pycache__
+*.pyc
 .vscode
-.DS_Store*
````

LICENSE (+629): large diff not rendered by default.

README.md (+68, -9)

````diff
@@ -1,6 +1,6 @@
 ## EfficientFormer<br><sub>Vision Transformers at MobileNet Speed</sub>
 
-[arXiv](https://arxiv.org/abs/2206.01191)
+[arXiv](https://arxiv.org/abs/2206.01191) | [PDF](https://arxiv.org/pdf/2206.01191.pdf)
 
 
 <p align="center">
@@ -19,25 +19,25 @@
 <summary>
 <font size="+1">Abstract</font>
 </summary>
-Vision Transformers (ViT) have shown rapid progress in computer vision tasks, achieving promising results on various benchmarks. However, due to the massive number of parameters and model design, e.g., attention mechanism, ViT-based models are generally times slower than lightweight convolutional networks. Therefore, the deployment of ViT for real-time applications is particularly challenging, especially on resource-constrained hardware such as mobile devices. Recent efforts try to reduce the computation complexity of ViT through network architecture search or hybrid design with MobileNet block, yet the inference speed is still unsatisfactory. This leads to an important question: can transformers run as fast as MobileNet while obtaining high performance? To answer this, we first revisit the network architecture and operators used in ViT-based models and identify inefficient designs. Then we introduce a dimension-consistent pure transformer (without MobileNet blocks) as design paradigm. Finally, we perform latency-driven slimming to get a series of final models dubbed EfficientFormer. Extensive experiments show the superiority of EfficientFormer in performance and speed on mobile devices. Our fastest model, EfficientFormer-L1, achieves 79.2% top-1 accuracy on ImageNet-1K with only 1.6 ms inference latency on iPhone 12 (compiled with CoreML), which is even a bit faster than MobileNetV2 (1.7 ms, 71.8% top-1), and our largest model, EfficientFormer-L7, obtains 83.3% accuracy with only 7.0 ms latency. Our work proves that properly designed transformers can reach extremely low latency on mobile devices while maintaining high performance.
+Vision Transformers (ViT) have shown rapid progress in computer vision tasks, achieving promising results on various benchmarks. However, due to the massive number of parameters and model design, e.g., attention mechanism, ViT-based models are generally times slower than lightweight convolutional networks. Therefore, the deployment of ViT for real-time applications is particularly challenging, especially on resource-constrained hardware such as mobile devices. Recent efforts try to reduce the computation complexity of ViT through network architecture search or hybrid design with MobileNet block, yet the inference speed is still unsatisfactory. This leads to an important question: can transformers run as fast as MobileNet while obtaining high performance? To answer this, we first revisit the network architecture and operators used in ViT-based models and identify inefficient designs. Then we introduce a dimension-consistent pure transformer (without MobileNet blocks) as a design paradigm. Finally, we perform latency-driven slimming to get a series of final models dubbed EfficientFormer. Extensive experiments show the superiority of EfficientFormer in performance and speed on mobile devices. Our fastest model, EfficientFormer-L1, achieves 79.2% top-1 accuracy on ImageNet-1K with only 1.6 ms inference latency on iPhone 12 (compiled with CoreML), which runs as fast as MobileNetV2x1.4 (1.6 ms, 74.7% top-1), and our largest model, EfficientFormer-L7, obtains 83.3% accuracy with only 7.0 ms latency. Our work proves that properly designed transformers can reach extremely low latency on mobile devices while maintaining high performance.
 </details>
 
 
 <br>
-Code coming soon.
 
 
 
-## EfficientFormer Model Zoo
+## Classification on ImageNet-1K
 
-### ImageNet-1K
+### Models
 | Model | Top-1 Acc.| Latency on iPhone 12 (ms) | Pytorch Checkpoint | CoreML | ONNX |
 | :--- | :---: | :---: | :---: |:---: | :---: |
-| EfficientFormer-L1 | 79.2 (80.2) | 1.6| [L1-300](https://drive.google.com/file/d/1wtEmkshLFEYFsX5YhBttBOGYaRvDR7nu/view?usp=sharing) ([L1-1000](https://drive.google.com/file/d/11SbX-3cfqTOc247xKYubrAjBiUmr818y/view?usp=sharing)) | [L1-mlmodel](https://drive.google.com/file/d/1MEDcyeKCBmrgVGrHX8wew3l4ge2CWdok/view?usp=sharing) | [L1](https://drive.google.com/file/d/10NMPW8SLLiTa2jwTTuILDQRUzMvehmUM/view?usp=sharing) |
-| EfficientFormer-L3 | 82.4 | 3.0| [L3](https://drive.google.com/file/d/1OyyjKKxDyMj-BcfInp4GlDdwLu3hc30m/view?usp=sharing) | [L3-mlmodel](https://drive.google.com/file/d/12xb0_6pPAy0OWdW39seL9TStIqKyguEj/view?usp=sharing) | [L3](https://drive.google.com/file/d/1DEbsOEzP4ljS6-ka86BtwQWiVxkylCaX/view?usp=sharing) |
-| EfficientFormer-L7 | 83.3 | 7.0| [L7](https://drive.google.com/file/d/1cVw-pctJwgvGafeouynqWWCwgkcoFMM5/view?usp=sharing) | [L7-mlmodel](https://drive.google.com/file/d/1CnhAyfylpvvebT9Yn3qF8vrUFjZjuO3F/view?usp=sharing) | [L7](https://drive.google.com/file/d/1u6But9JQ9Wd7vlaFTGcYm5FiGnQ8y9eS/view?usp=sharing) |
+| EfficientFormer-L1 | 79.2 (80.2) | 1.6| [L1-300](https://drive.google.com/file/d/1wtEmkshLFEYFsX5YhBttBOGYaRvDR7nu/view?usp=sharing) ([L1-1000](https://drive.google.com/file/d/11SbX-3cfqTOc247xKYubrAjBiUmr818y/view?usp=sharing)) | [L1](https://drive.google.com/file/d/1MEDcyeKCBmrgVGrHX8wew3l4ge2CWdok/view?usp=sharing) | [L1](https://drive.google.com/file/d/10NMPW8SLLiTa2jwTTuILDQRUzMvehmUM/view?usp=sharing) |
+| EfficientFormer-L3 | 82.4 | 3.0| [L3](https://drive.google.com/file/d/1OyyjKKxDyMj-BcfInp4GlDdwLu3hc30m/view?usp=sharing) | [L3](https://drive.google.com/file/d/12xb0_6pPAy0OWdW39seL9TStIqKyguEj/view?usp=sharing) | [L3](https://drive.google.com/file/d/1DEbsOEzP4ljS6-ka86BtwQWiVxkylCaX/view?usp=sharing) |
+| EfficientFormer-L7 | 83.3 | 7.0| [L7](https://drive.google.com/file/d/1cVw-pctJwgvGafeouynqWWCwgkcoFMM5/view?usp=sharing) | [L7](https://drive.google.com/file/d/1CnhAyfylpvvebT9Yn3qF8vrUFjZjuO3F/view?usp=sharing) | [L7](https://drive.google.com/file/d/1u6But9JQ9Wd7vlaFTGcYm5FiGnQ8y9eS/view?usp=sharing) |
 
-### Latency Measurement
+
+## Latency Measurement
 
 The latency reported is based on the open-source [CoreMLTools](https://github.com/apple/coremltools).
 
@@ -46,6 +46,63 @@ The latency reported is based on the open-source [CoreMLTools](https://github.co
 *Tips*: MacOS+XCode and a mobile device (iPhone 12) are needed to reproduce the reported speed.
 
 
+
+
+
+
+
+## ImageNet
+### Data preparation
+
+Download and extract ImageNet train and val images from http://image-net.org/. The training and validation data are expected to be in the `train` folder and `val` folder respectively:
+```
+|-- /path/to/imagenet/
+    |-- train
+    |-- val
+```
+
+### Single machine multi-GPU training
+
+We provide an example training script `dist_train.sh` using PyTorch distributed data parallel (DDP).
+
+To train EfficientFormer-L1 on an 8-GPU machine:
+
+```
+sh dist_train.sh efficientformer_l1 8
+```
+
+Tips: specify your data path and experiment name in the script!
+
+### Multi-node training
+
+On a Slurm-managed cluster, multi-node training can be launched through [submitit](https://github.com/facebookincubator/submitit), for example:
+
+```
+sh slurm_train.sh efficientformer_l1
+```
+
+Tips: specify GPUs/CPUs/memory per node in the script based on your resources!
+
+### Testing
+
+We provide an example test script `dist_test.sh` using PyTorch distributed data parallel (DDP).
+For example, to test EfficientFormer-L1 on an 8-GPU machine:
+
+```
+sh dist_test.sh efficientformer_l1 8 weights/efficientformer_l1_300d.pth
+```
+
+## Using EfficientFormer as backbone
+[Object Detection and Instance Segmentation](detection/README.md)<br>
+[Semantic Segmentation](segmentation/README.md)
+## Acknowledgement
+
+Classification (ImageNet) code base is partly built with [LeViT](https://github.com/facebookresearch/LeViT) and [PoolFormer](https://github.com/sail-sg/poolformer).
+
+The detection and segmentation pipeline is from [MMCV](https://github.com/open-mmlab/mmcv) ([MMDetection](https://github.com/open-mmlab/mmdetection) and [MMSegmentation](https://github.com/open-mmlab/mmsegmentation)).
+
+Thanks for the great implementations!
+
 ## Citation
 
 If our code or models help your work, please cite our [paper](https://arxiv.org/abs/2206.01191):
@@ -57,3 +114,5 @@ If our code or models help your work, please cite our [paper](https://arxiv.org/
 year={2022}
 }
 ```
+
+
````

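The model zoo above distributes each network as a PyTorch checkpoint, a CoreML model, and an ONNX model, and the reported latency is measured from the CoreML model on an iPhone 12. A minimal export sketch follows for orientation; it is not the conversion script shipped in this commit, and the `models.efficientformer.efficientformer_l1` import path and the checkpoint key layout are assumptions.

```python
# Generic export sketch (not this commit's script): PyTorch -> CoreML / ONNX.
import torch
import coremltools as ct

# Assumption: the repo exposes a constructor like this; adjust to the actual module path.
from models.efficientformer import efficientformer_l1

model = efficientformer_l1()
ckpt = torch.load("weights/efficientformer_l1_300d.pth", map_location="cpu")
# Assumption: weights may sit under a "model" key; otherwise treat ckpt as a raw state dict.
model.load_state_dict(ckpt.get("model", ckpt))
model.eval()

# Trace with a fixed 224x224 input, matching the ImageNet evaluation resolution.
example = torch.rand(1, 3, 224, 224)
traced = torch.jit.trace(model, example)

# CoreML: convert the traced graph to an ML program and save it as an .mlpackage.
mlmodel = ct.convert(
    traced,
    inputs=[ct.TensorType(name="input", shape=(1, 3, 224, 224))],
    convert_to="mlprogram",
)
mlmodel.save("efficientformer_l1.mlpackage")
# On-device latency is then read from Xcode's performance report on the target iPhone.

# ONNX: export the same model for other runtimes.
torch.onnx.export(
    model, example, "efficientformer_l1.onnx",
    input_names=["input"], output_names=["logits"], opset_version=13,
)
```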
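The multi-node instructions above hand the job to submitit on a Slurm cluster. Below is a rough sketch of what such a launch looks like through submitit's Python API, with a placeholder `train()` entry point; the actual `slurm_train.sh` added in this commit may wire things differently.

```python
# Hedged sketch of a multi-node Slurm launch via submitit (placeholder entry point).
import submitit

def train(model_name: str) -> None:
    # Placeholder: build the model/dataloaders and run the DDP training loop here.
    print(f"training {model_name}")

executor = submitit.AutoExecutor(folder="slurm_logs")
executor.update_parameters(
    slurm_partition="your-partition",  # set to your cluster's partition
    nodes=2,                           # 2 nodes x 8 GPUs = 16 GPUs total
    gpus_per_node=8,
    tasks_per_node=8,                  # one process per GPU for DDP
    cpus_per_task=10,
    timeout_min=60 * 24,
)
job = executor.submit(train, "efficientformer_l1")
print(job.job_id)
```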
detection/README.md (+72)

````diff
@@ -0,0 +1,72 @@
+# Object Detection and Instance Segmentation
+
+Detection and instance segmentation on MS COCO 2017 is implemented based on [MMDetection](https://github.com/open-mmlab/mmdetection). We follow the settings and hyper-parameters of [PVT](https://github.com/whai362/PVT/tree/v2/segmentation)
+and [PoolFormer](https://github.com/sail-sg/poolformer) for comparison.
+
+
+## Installation
+
+Install [mmcv-full](https://github.com/open-mmlab/mmcv) and [MMDetection v2.19.0](https://github.com/open-mmlab/mmdetection/tree/v2.19.0);
+later versions should work as well.
+The easiest way is to install via [MIM](https://github.com/open-mmlab/mim):
+```
+pip install -U openmim
+mim install mmcv-full
+mim install mmdet
+```
+
+## Data preparation
+
+Prepare the COCO 2017 dataset according to the [instructions in MMDetection](https://github.com/open-mmlab/mmdetection/blob/master/docs/en/1_exist_data_model.md#test-existing-models-on-standard-datasets).
+The dataset should be organized as
+```
+detection
+├── data
+│   ├── coco
+│   │   ├── annotations
+│   │   ├── train2017
+│   │   ├── val2017
+│   │   ├── test2017
+```
+
+## ImageNet Pretraining
+Put the ImageNet-1K pretrained backbone weights as
+```
+EfficientFormer
+├── weights
+│   ├── efficientformer_l1_300d.pth
+│   ├── ...
+```
+
+## Testing
+
+Weights trained on COCO 2017 can be downloaded [here](https://drive.google.com/drive/folders/1eajQgA39bkPpyonzl8UnpiEwngVGaMdm?usp=sharing).
+We provide a multi-GPU testing script; specify the config file, checkpoint, and number of GPUs to use:
+```
+sh ./dist_test.sh config_file path/to/checkpoint #GPUs --eval bbox segm
+```
+
+For example, to test EfficientFormer-L1 on COCO 2017 on an 8-GPU machine:
+
+```
+sh ./dist_test.sh configs/mask_rcnn_efficientformer_l1_fpn_1x_coco.py path/to/efficientformer_l1_coco.pth 8 --eval bbox segm
+```
+
+## Training
+### Single machine multi-GPU training
+
+We provide a PyTorch distributed data parallel (DDP) training script `dist_train.sh`. For example, to train EfficientFormer-L1 on an 8-GPU machine:
+```
+sh ./dist_train.sh configs/mask_rcnn_efficientformer_l1_fpn_1x_coco.py 8
+```
+Tips: specify the config and #GPUs!
+
+### Multi-node training
+On a Slurm-managed cluster, multi-node training can similarly be launched by `slurm_train.sh`. To train EfficientFormer:
+```
+sh ./slurm_train.sh your-partition exp-name config-file work-dir
+```
+Tips: specify GPUs/CPUs/memory per node in the script `slurm_train.sh` based on your resources!
+
+
+
````

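Beyond the distributed `dist_test.sh` evaluation above, a downloaded COCO checkpoint can be sanity-checked on a single image through MMDetection's high-level API. The sketch below reuses the config and checkpoint names from this README; the input image path is a placeholder.

```python
# Minimal single-image inference sketch with MMDetection 2.x (not part of this commit).
from mmdet.apis import init_detector, inference_detector

config = "configs/mask_rcnn_efficientformer_l1_fpn_1x_coco.py"
checkpoint = "path/to/efficientformer_l1_coco.pth"  # downloaded COCO weights

# Build the Mask R-CNN model with the EfficientFormer-L1 backbone and load weights.
model = init_detector(config, checkpoint, device="cuda:0")

# Run detection + instance segmentation on one image (placeholder path).
result = inference_detector(model, "demo.jpg")

# Draw boxes and masks above a 0.3 score threshold and write the visualization to disk.
model.show_result("demo.jpg", result, score_thr=0.3, out_file="demo_result.jpg")
```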