dabnn is an accelerated binary neural network inference framework for mobile platforms.


dabnn


Enjoy binary neural networks on mobile!

Gitter: dabnn/dabnn, QQ group (Chinese):1021964010, answer: nndab

[English] [Chinese/中文]

Our ACM MM paper: https://arxiv.org/abs/1908.05858

Introduction

Binary neural networks (BNNs) have great potential on edge devices, since they replace floating-point operations with efficient bit-wise operations. However, to leverage the efficiency of bit-wise operations, the convolution layers (and other layers) have to be reimplemented.

To the best of our knowledge, dabnn is the first highly optimized binary neural network inference framework for mobile platforms. We implemented binary convolutions in ARM assembly. On a Google Pixel 1, dabnn is 800%–2400% faster than BMXNet (to the best of our knowledge, the only other open-source BNN inference framework) on a single binary convolution, and about 700% faster on a binarized ResNet-18.
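The bit-wise trick behind binary convolution can be sketched in a few lines: pack the signs of a float vector into machine words, then compute the ±1 dot product with XOR and popcount instead of multiply-accumulate. This is a minimal illustration of the idea, not dabnn's actual kernels (those are hand-written ARM assembly); the sign convention and helper names here are assumptions:

```cpp
#include <bitset>
#include <cassert>
#include <cstdint>

// Pack the signs of up to 64 floats into one 64-bit word:
// bit i is 1 when x[i] is negative (i.e., the value binarizes to -1).
uint64_t pack_signs(const float* x, int n) {
    uint64_t bits = 0;
    for (int i = 0; i < n; ++i)
        if (x[i] < 0.0f) bits |= (uint64_t{1} << i);
    return bits;
}

// Dot product of two {+1, -1} vectors stored as sign bits:
// matching bits contribute +1, differing bits -1, so
// dot = n - 2 * popcount(a XOR b).
int binary_dot(uint64_t a, uint64_t b, int n) {
    return n - 2 * static_cast<int>(std::bitset<64>(a ^ b).count());
}
```

A real kernel applies this per word inside the convolution loops; on AArch64 the population count maps to a NEON instruction, which is where the speedup over float multiply-accumulate comes from.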

Comparison

Build

We provide pre-built onnx2bnn binaries and a dabnn Android package (AAR). However, you need to build dabnn yourself if you want to deploy BNNs on non-Android ARM devices.

We use the CMake build system, like most C++ projects. Check out docs/build.md for detailed instructions.
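For non-Android ARM targets the build is a standard out-of-source CMake flow. The commands below are a sketch under assumptions (in particular the `$ANDROID_NDK` toolchain path); docs/build.md is authoritative:

```shell
# Clone recursively in case third-party dependencies are git submodules.
git clone --recursive https://github.com/JDAI-CV/dabnn
cd dabnn && mkdir build && cd build

# Example: cross-compile for 64-bit ARM Android. The toolchain-file
# path is an assumption -- adjust it to your NDK installation.
cmake .. \
  -DCMAKE_TOOLCHAIN_FILE="$ANDROID_NDK/build/cmake/android.toolchain.cmake" \
  -DANDROID_ABI=arm64-v8a
cmake --build .
```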

Convert ONNX Model

We provide a conversion tool, onnx2bnn, which converts an ONNX model into a dabnn model. Pre-built onnx2bnn binaries for all platforms are available in GitHub Releases. For Linux users, the pre-built binary is in AppImage format; see https://appimage.org for details.
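On Linux, an AppImage must be given the executable bit before it can run. The conversion command below is only a sketch: the argument order and the file names (`model.onnx`, `model.dab`) are assumptions, so check the tool's own help output for the real interface:

```shell
chmod +x onnx2bnn-*.AppImage                  # AppImages need the executable bit
./onnx2bnn-*.AppImage model.onnx model.dab    # hypothetical input/output names
```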

Note: Binary convolution is a custom operator, so whether an ONNX model is dabnn-compatible depends heavily on how binary convolution is implemented in the training code. Please read the documentation on model conversion carefully.

After conversion, the generated dabnn model can be deployed on ARM devices (e.g., mobile phones and embedded devices). For Android developers, we provide an Android AAR package published on jcenter; for usage, check out the example project.

Pretrained Models

We publish two pretrained binary neural network models based on Bi-Real Net, trained on ImageNet. More pretrained models will be published in the future.

  • Bi-Real Net 18, 56.4% top-1 on ImageNet, 61.3ms/image on Google Pixel 1 (single thread). [dabnn] [ONNX]

  • Bi-Real Net 18 with Stem Module, 56.4% top-1 on ImageNet, 43.2ms/image on Google Pixel 1 (single thread). The detailed network structure is described in our paper. [dabnn] [ONNX]

Implementation Details

For more details please read our ACM MM paper.

Example project

Android app demo: https://github.com/JDAI-CV/dabnn-example

Related works using dabnn

The following two papers use dabnn to measure the latency of their binary networks on real devices:

License and Citation

BSD 3-Clause

Please cite daBNN in your publications if it helps your research:

@misc{zhang2019dabnn,
  Author = {Jianhao Zhang and Yingwei Pan and Ting Yao and He Zhao and Tao Mei},
  Title = {daBNN: A Super Fast Inference Framework for Binary Neural Networks on ARM devices},
  Year = {2019},
  Eprint = {arXiv:1908.05858},
}
