Vision Transformers (ViT) have shown rapid progress in computer vision tasks, achieving promising results on various benchmarks. However, due to the massive number of parameters and model design, e.g., the attention mechanism, ViT-based models are generally multiple times slower than lightweight convolutional networks. Therefore, the deployment of ViT for real-time applications is particularly challenging, especially on resource-constrained hardware such as mobile devices. Recent efforts try to reduce the computation complexity of ViT through network architecture search or hybrid designs with MobileNet blocks, yet the inference speed is still unsatisfactory. This leads to an important question: can transformers run as fast as MobileNet while obtaining high performance? To answer this, we first revisit the network architecture and operators used in ViT-based models and identify inefficient designs. Then we introduce a dimension-consistent pure transformer (without MobileNet blocks) as a design paradigm. Finally, we perform latency-driven slimming to get a series of final models dubbed EfficientFormer. Extensive experiments show the superiority of EfficientFormer in performance and speed on mobile devices. Our fastest model, EfficientFormer-L1, achieves 79.2% top-1 accuracy on ImageNet-1K with only 1.6 ms inference latency on iPhone 12 (compiled with CoreML), which runs as fast as MobileNetV2×1.4 (1.6 ms, 74.7% top-1), and our largest model, EfficientFormer-L7, obtains 83.3% accuracy with only 7.0 ms latency. Our work proves that properly designed transformers can reach extremely low latency on mobile devices while maintaining high performance.
## Classification on ImageNet-1K
### Models
| Model | Top-1 Acc. | Latency on iPhone 12 (ms) | PyTorch Checkpoint | CoreML | ONNX |
The latency reported is based on the open-source [CoreMLTools](https://github.com/apple/coremltools).
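If you want to reproduce the conversion step yourself, a typical PyTorch-to-CoreML export looks like the sketch below. This is only an illustration, not the repository's official export script: the `efficientformer_l1` import path, the `pretrained` argument, and the 224x224 input resolution are assumptions about the classification setup.

```
import torch
import coremltools as ct

# Assumption: the repo's model registry exposes an efficientformer_l1() constructor.
from models import efficientformer_l1

model = efficientformer_l1(pretrained=False).eval()

# Trace the model with a dummy input at the assumed classification resolution (224x224).
example = torch.rand(1, 3, 224, 224)
traced = torch.jit.trace(model, example)

# Convert the traced graph to a CoreML neural network and save it;
# latency is then profiled on-device (e.g., iPhone 12) via Xcode.
mlmodel = ct.convert(
    traced,
    inputs=[ct.TensorType(name="input", shape=example.shape)],
    convert_to="neuralnetwork",
)
mlmodel.save("efficientformer_l1.mlmodel")
```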
*Tips*: macOS + Xcode and a mobile device (iPhone 12) are needed to reproduce the reported speed.
## ImageNet
### Data preparation
Download and extract ImageNet train and val images from http://image-net.org/. The training and validation data are expected to be in the `train` folder and `val` folder respectively:
```
|-- /path/to/imagenet/
    |-- train
    |-- val
```
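To sanity-check the layout before launching training, the folders can be loaded with `torchvision.datasets.ImageFolder` (used here purely for illustration; the actual data pipeline comes from the training code). `/path/to/imagenet` is a placeholder path.

```
from torchvision import datasets, transforms

data_root = "/path/to/imagenet"  # placeholder; point this to your ImageNet root

transform = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])

# ImageFolder expects one sub-directory per class inside train/ and val/.
train_set = datasets.ImageFolder(f"{data_root}/train", transform=transform)
val_set = datasets.ImageFolder(f"{data_root}/val", transform=transform)

print(f"train: {len(train_set)} images, {len(train_set.classes)} classes")
print(f"val:   {len(val_set)} images, {len(val_set.classes)} classes")
```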
### Single machine multi-GPU training
We provide an example training script `dist_train.sh` using PyTorch distributed data parallel (DDP).
To train EfficientFormer-L1 on an 8-GPU machine:
```
sh dist_train.sh efficientformer_l1 8
```
Tips: specify your data path and experiment name in the script!
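For reference, `dist_train.sh` launches one process per GPU, and the training entry point then follows the standard DDP pattern sketched below. This is a schematic only (the model and loop are placeholders), not the repository's actual training code.

```
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # The launcher (torchrun / torch.distributed.launch --use_env) sets LOCAL_RANK per worker.
    local_rank = int(os.environ["LOCAL_RANK"])
    dist.init_process_group(backend="nccl")
    torch.cuda.set_device(local_rank)

    # Placeholder model; the real script builds EfficientFormer from its model definitions.
    model = torch.nn.Linear(3 * 224 * 224, 1000).cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])

    # ... build DistributedSampler-based dataloaders and run the usual training loop here ...

if __name__ == "__main__":
    main()
```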
### Multi-node training
On a Slurm-managed cluster, multi-node training can be launched through [submitit](https://github.com/facebookincubator/submitit), for example:
```
sh slurm_train.sh efficientformer_l1
```
Tips: specify GPUs/CPUs/memory per node in the script based on your resources!
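For orientation, a submitit launch is usually configured along the lines below. This is a generic sketch, not the repository's submitit wrapper: the partition name, resource numbers, log folder, and the `train_job` callable are placeholders.

```
import submitit

def train_job():
    # Placeholder: in practice this calls the training entry point with the parsed arguments.
    print("training efficientformer_l1 ...")

executor = submitit.AutoExecutor(folder="slurm_logs")  # placeholder log folder
executor.update_parameters(
    nodes=2,              # illustrative resource settings only
    tasks_per_node=8,     # one task per GPU
    gpus_per_node=8,
    cpus_per_task=10,
    slurm_partition="your-partition",
    timeout_min=4320,
)

job = executor.submit(train_job)
print("submitted Slurm job:", job.job_id)
```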
### Testing
We provide an example test script `dist_test.sh` using PyTorch distributed data parallel (DDP).
For example, to test EfficientFormer-L1 on an 8-GPU machine:
```
sh dist_test.sh efficientformer_l1 8 weights/efficientformer_l1_300d.pth
```
## Using EfficientFormer as backbone
[Object Detection and Instance Segmentation](detection/README.md)<br>
[Semantic Segmentation](segmentation/README.md)
## Acknowledgement
The classification (ImageNet) code base is partly built upon [LeViT](https://github.com/facebookresearch/LeViT) and [PoolFormer](https://github.com/sail-sg/poolformer).
The detection and segmentation pipeline is from [MMCV](https://github.com/open-mmlab/mmcv) ([MMDetection](https://github.com/open-mmlab/mmdetection) and [MMSegmentation](https://github.com/open-mmlab/mmsegmentation)).
Thanks for the great implementations!
## Citation
If our code or models help your work, please cite our [paper](https://arxiv.org/abs/2206.01191):
Detection and instance segmentation on MS COCO 2017 is implemented based on [MMDetection](https://github.com/open-mmlab/mmdetection). We follow the settings and hyper-parameters of [PVT](https://github.com/whai362/PVT/tree/v2/segmentation) and [PoolFormer](https://github.com/sail-sg/poolformer) for comparison.
## Installation
Install [mmcv-full](https://github.com/open-mmlab/mmcv) and [MMDetection v2.19.0](https://github.com/open-mmlab/mmdetection/tree/v2.19.0); later versions should work as well.
The easiest way is to install them via [MIM](https://github.com/open-mmlab/mim):
```
pip install -U openmim
mim install mmcv-full
mim install mmdet
```
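After installation, a quick sanity check (the standard MMDetection one, not specific to this repo) confirms that mmcv-full was built with CUDA support and that MMDetection imports correctly:

```
import mmcv
import mmdet
from mmcv.ops import get_compiling_cuda_version, get_compiler_version

# Versions of the installed packages and the CUDA/compiler used to build mmcv-full.
print("mmcv:", mmcv.__version__)
print("mmdet:", mmdet.__version__)
print("compiled CUDA version:", get_compiling_cuda_version())
print("compiler:", get_compiler_version())
```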
## Data preparation
Prepare the COCO 2017 dataset according to the [instructions in MMDetection](https://github.com/open-mmlab/mmdetection/blob/master/docs/en/1_exist_data_model.md#test-existing-models-on-standard-datasets).
The dataset should be organized as follows:
```
detection
├── data
│   ├── coco
│   │   ├── annotations
│   │   ├── train2017
│   │   ├── val2017
│   │   ├── test2017
```
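To confirm the annotations are in place, they can be loaded with `pycocotools` (installed as an MMDetection dependency); the annotation filename below assumes the standard COCO 2017 naming.

```
from pycocotools.coco import COCO

# Standard COCO 2017 instance annotations, assuming the layout shown above.
ann_file = "detection/data/coco/annotations/instances_val2017.json"

coco = COCO(ann_file)
print("images:", len(coco.getImgIds()))
print("categories:", len(coco.getCatIds()))
```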
## ImageNet Pretraining
Put the ImageNet-1K pretrained weights of the backbone as follows:
```
EfficientFormer
├── weights
│   ├── efficientformer_l1_300d.pth
│   ├── ...
```
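If you want to verify a downloaded backbone checkpoint before training, it can be inspected with `torch.load`; whether the weights sit under a `'model'` key or form a raw state dict is an assumption handled by the sketch below.

```
import torch

ckpt_path = "weights/efficientformer_l1_300d.pth"
ckpt = torch.load(ckpt_path, map_location="cpu")

# Some checkpoints store weights under a 'model' key, others are a raw state dict.
state_dict = ckpt.get("model", ckpt) if isinstance(ckpt, dict) else ckpt
print("tensors:", len(state_dict))
print("first keys:", list(state_dict)[:5])
```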
## Testing
Weights trained on COCO 2017 can be downloaded [here](https://drive.google.com/drive/folders/1eajQgA39bkPpyonzl8UnpiEwngVGaMdm?usp=sharing).
We provide a multi-GPU testing script; specify the config file, checkpoint, and number of GPUs to use:
```
sh ./dist_test.sh config_file path/to/checkpoint #GPUs --eval bbox segm
```
For example, to test EfficientFormer-L1 on COCO 2017 on an 8-GPU machine:
```
sh ./dist_test.sh configs/mask_rcnn_efficientformer_l1_fpn_1x_coco.py path/to/efficientformer_l1_coco.pth 8 --eval bbox segm
```
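For single-image inference outside the distributed test script, MMDetection's high-level Python API can also be used. The config and checkpoint paths are taken from the example above; `demo.jpg` is a placeholder image path.

```
from mmdet.apis import init_detector, inference_detector

config_file = "configs/mask_rcnn_efficientformer_l1_fpn_1x_coco.py"
checkpoint_file = "path/to/efficientformer_l1_coco.pth"

# Build Mask R-CNN with the EfficientFormer-L1 backbone and load the COCO-trained weights.
model = init_detector(config_file, checkpoint_file, device="cuda:0")

# Run inference on a single image and save a visualization.
result = inference_detector(model, "demo.jpg")
model.show_result("demo.jpg", result, out_file="demo_result.jpg")
```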
## Training
### Single machine multi-GPU training
We provide a PyTorch distributed data parallel (DDP) training script `dist_train.sh`. For example, to train EfficientFormer-L1 on an 8-GPU machine:
```
sh ./dist_train.sh configs/mask_rcnn_efficientformer_l1_fpn_1x_coco.py 8
```
Tips: specify configs and #GPUs!
### Multi-node training
On a Slurm-managed cluster, multi-node training can be launched with `slurm_train.sh`; similarly, to train EfficientFormer:
```
sh ./slurm_train.sh your-partition exp-name config-file work-dir
```
Tips: specify GPUs/CPUs/memory per node in the script `slurm_train.sh` based on your resources!