Note: This code is no longer maintained. Please refer to Peppa_Pig_Face_Landmark (the TRAIN subdir); it contains a better model.
A simple face alignment method, based on TensorFlow 2.0.
This is the TensorFlow 2.0 branch; if you need to work with TF1, switch to the tf1 branch, which still works.
It is simple and flexible: the model is trained with wing loss and multi-task learning, together with data augmentation based on head pose and face attributes (eye state and mouth state).
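For reference, wing loss penalizes small localization errors more strongly than L2. Below is a minimal TF2 sketch of the standard formulation; the w/epsilon values and the reduction actually used in this repo's train.py may differ.

```python
import math
import tensorflow as tf

def wing_loss(predictions, labels, w=10.0, epsilon=2.0):
    """Standard wing loss for landmark regression.

    predictions, labels: [batch, num_points * 2] coordinate tensors.
    w, epsilon: curve parameters from the wing-loss paper; the values
    used by this repo may differ.
    """
    diff = tf.abs(predictions - labels)
    # Constant that makes the two pieces meet at |x| = w.
    c = w * (1.0 - math.log(1.0 + w / epsilon))
    losses = tf.where(diff < w,
                      w * tf.math.log(1.0 + diff / epsilon),
                      diff - c)
    return tf.reduce_mean(tf.reduce_sum(losses, axis=-1))
```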
I also suggest trying another project, [pappa_pig_face_engine], which includes both face detection and keypoints and contains some additional optimizations.
Contact me if you run into problems: [email protected] :)
demo pictures:
this gif is from github.com/610265158/Peppa_Pig_Face_Engine, but it is the same model : )
pretrained model:
- baidu disk (code rt7p)
- google drive
shufflenetv2_0.75, including a tflite model (time cost: mac [email protected], tf2.0 about 5 ms, tflite about 3.7 ms)
- baidu disk (code fcdc)
- google drive
- tensorflow2.0
- tensorpack (for data provider)
- opencv
- python 3.6
- download the full 300W dataset, including 300VW (parse the videos into images and make the labels the same format as 300W; a .pts parsing sketch follows the directory tree below), and arrange it as:
├── 300VW
│ ├── 001_annot
│ ├── 002_annot
│ ....
├── 300W
│ ├── 01_Indoor
│ └── 02_Outdoor
├── AFW
│ └── afw
├── HELEN
│ ├── testset
│ └── trainset
├── IBUG
│ └── ibug
├── LFPW
│ ├── testset
│ └── trainset
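If you parse the 300VW videos yourself, each frame needs a 300W-style .pts annotation next to it. Here is a small parser sketch for that standard format (not taken from this repo's code; the example path is illustrative):

```python
import numpy as np

def read_pts(path):
    """Parse a 300W-style .pts annotation into an (n_points, 2) float array.

    The .pts format is a two-line header (version, n_points) followed by
    one "x y" pair per line between '{' and '}'.
    """
    with open(path) as f:
        lines = [line.strip() for line in f if line.strip()]
    n_points = int(lines[1].split(':')[1])
    coords = lines[3:3 + n_points]  # skip the header lines and the opening '{'
    points = np.array([[float(v) for v in line.split()] for line in coords],
                      dtype=np.float32)
    assert points.shape == (n_points, 2)
    return points

# Example: landmarks = read_pts('path/to/indoor_001.pts')
```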
- run `python make_json.py` to produce train.json and val.json (if you want to train on your own data, please read the produced json; the format is quite simple, and an inspection snippet follows this list)
- then run `python train.py`
- by default it trains with shufflenetv2_1.0
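To see the label format before reproducing it for your own data, a small inspection sketch (assumes train.json is in the working directory; key names are whatever make_json.py writes, so inspect rather than assume):

```python
import json

# Peek at what make_json.py produced so you can build the same structure
# for your own data.
with open('train.json') as f:
    samples = json.load(f)

print(type(samples).__name__, len(samples))
first = samples[0] if isinstance(samples, list) else next(iter(samples.values()))
print(json.dumps(first, indent=2)[:500])  # first record, truncated for display
```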
- download the pretrained keypoints model and put it into ./model; the model dir structure is:
./model/
└── keypoints
├── saved_model.pb
└── variables
├── variables.data-00000-of-00002
├── variables.data-00001-of-00002
└── variables.index
- set `config.MODEL.pretrained_model='./model/keypoints/variables/variables'` in train_config.py (a checkpoint sanity-check sketch follows this list)
- adjust the lr policy
- run `python train.py`
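Before launching the finetune run, it can help to confirm that the checkpoint prefix from the config line above actually resolves to variables on disk; a small sanity-check sketch:

```python
import tensorflow as tf

# Confirm the checkpoint prefix set in train_config.py points at real variables.
ckpt_prefix = './model/keypoints/variables/variables'
reader = tf.train.load_checkpoint(ckpt_prefix)
for name, shape in sorted(reader.get_variable_to_shape_map().items())[:5]:
    print(name, shape)
```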
- modify the model path in toos/convert_to_tflite.py
- run `python toos/convert_to_tflite.py`; it will produce converted_model.tflite
CAUTION: the pretrained shufflenetv2_1.0 model does not work with tflite because of the shuffle op. The issue has since been fixed in the code, so if you need the 1.0 model please retrain it, or wait for an updated pretrained model.
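For reference, the conversion is essentially a SavedModel-to-flatbuffer export; a minimal sketch (paths are examples, and the repo's script may set additional converter options):

```python
import tensorflow as tf

# Convert the exported SavedModel to a tflite flatbuffer.
converter = tf.lite.TFLiteConverter.from_saved_model('./model/keypoints')
tflite_model = converter.convert()

with open('converted_model.tflite', 'wb') as f:
    f.write(tflite_model)
```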
run `python vis.py --model ./model/keypoints`
or `python vis.py --model ./model/keypoints.tflite` (you need to convert to tflite first)
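If you want to run the tflite model outside of vis.py, here is a minimal inference sketch; the test image name, the resize, and the bare float32 cast are assumptions, so match the preprocessing to the training pipeline:

```python
import cv2
import numpy as np
import tensorflow as tf

# Run the tflite model on one pre-cropped face image.
interpreter = tf.lite.Interpreter(model_path='./model/keypoints.tflite')
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

h, w = int(inp['shape'][1]), int(inp['shape'][2])
face = cv2.imread('face_crop.jpg')
face = cv2.resize(face, (w, h)).astype(np.float32)

interpreter.set_tensor(inp['index'], face[None, ...])
interpreter.invoke()
landmarks = interpreter.get_tensor(out['index'])
print(landmarks.shape)
```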
- A face detector is needed.
- tflite model