
Dressing in Order (DiOr)

👕 ICCV'21 Paper | 👖 Project Page | 👚 arXiv | 🎽 Video Talk | 👗 Running This Code


The official implementation of "Dressing in Order: Recurrent Person Image Generation for Pose Transfer, Virtual Try-on and Outfit Editing" by Aiyu Cui, Daniel McKee and Svetlana Lazebnik (ICCV 2021).

🔔 Updates

Supported Try-on Applications

Supported Editing Applications

More results

Play with demo.ipynb!


Get Started

Please follow the installation instructions in GFLA to set up the environment.

Then run

pip install -r requirements.txt

If you only want to run inference, you can use a later version of PyTorch and do not need to build GFLA's CUDA functions; just specify --frozen_flownet.

Dataset

We run experiments on the DeepFashion dataset. To set it up:

  1. Download and unzip img_highres.zip from the DeepFashion In-shop Clothes Retrieval benchmark into $DATA_ROOT.
  2. Download the train/val split and the pre-processed keypoint annotations from the GFLA source or the PATN source, and put the .csv and .lst files in $DATA_ROOT.
    • If you want to extract keypoints from scratch or for your own dataset, run OpenPose as the pose estimator, or follow the instructions from PATN to generate keypoints in the expected format. (See Issue #21 for details.)
  3. Run python tools/generate_fashion_dataset.py --dataroot $DATA_ROOT to split the data.
  4. Get the human parsing maps. You can obtain them by either:
    • running the off-the-shelf human parser SCHP (with LIP labels) on $DATA_ROOT/train and $DATA_ROOT/test, naming the output folders $DATA_ROOT/trainM_lip and $DATA_ROOT/testM_lip respectively (a folder-renaming sketch follows this list), or
    • downloading the preprocessed parsing from here and putting it under $DATA_ROOT.
  5. Download standard_test_anns.txt for fast visualization.
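If you take the SCHP route in step 4, remember that DiOr expects the parses under $DATA_ROOT/trainM_lip and $DATA_ROOT/testM_lip. A minimal renaming sketch in Python, assuming SCHP already wrote its outputs to folders named train_parsed and test_parsed (those two names are illustrative, not something SCHP produces by default):

import os
import shutil

DATA_ROOT = os.environ["DATA_ROOT"]

# Move the SCHP output folders (names assumed) to the names DiOr expects.
for src, dst in [("train_parsed", "trainM_lip"), ("test_parsed", "testM_lip")]:
    src_dir = os.path.join(DATA_ROOT, src)
    dst_dir = os.path.join(DATA_ROOT, dst)
    if os.path.isdir(src_dir) and not os.path.exists(dst_dir):
        shutil.move(src_dir, dst_dir)
        print(f"moved {src_dir} -> {dst_dir}")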

After processing, your dataset folder should look like:

+ $DATA_ROOT
|   + train (all training images)
|   |   - xxx.jpg
|   |     ...
|   + trainM_lip (human parse of all training images)
|   |   - xxx.png
|   |     ...
|   + test (all test images)
|   |   - xxx.jpg
|   |     ...
|   + testM_lip (human parse of all test images)
|   |   - xxx.png
|   |     ...
|   - fashion-pairs-train.csv (paired poses for training)
|   - fashion-pairs-test.csv (paired poses for test)
|   - fashion-annotation-train.csv (keypoints for training images)
|   - fashion-annotation-test.csv  (keypoints for test images)
|   - train.lst
|   - test.lst
|   - standard_test_anns.txt
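Before moving on, it can save debugging time to verify this layout programmatically. A minimal sanity check against the tree above (this sketch reads the location from a $DATA_ROOT environment variable; adapt as needed):

import os

DATA_ROOT = os.environ["DATA_ROOT"]

for d in ["train", "trainM_lip", "test", "testM_lip"]:
    assert os.path.isdir(os.path.join(DATA_ROOT, d)), f"missing directory: {d}"
for f in ["fashion-pairs-train.csv", "fashion-pairs-test.csv",
          "fashion-annotation-train.csv", "fashion-annotation-test.csv",
          "train.lst", "test.lst", "standard_test_anns.txt"]:
    assert os.path.isfile(os.path.join(DATA_ROOT, f)), f"missing file: {f}"

# Every image (.jpg) should have a matching human parse (.png).
for split, parse in [("train", "trainM_lip"), ("test", "testM_lip")]:
    imgs = {os.path.splitext(n)[0] for n in os.listdir(os.path.join(DATA_ROOT, split))}
    masks = {os.path.splitext(n)[0] for n in os.listdir(os.path.join(DATA_ROOT, parse))}
    missing = sorted(imgs - masks)
    assert not missing, f"{len(missing)} images in {split} lack parses, e.g. {missing[:3]}"
print("dataset layout looks good")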

Run Demo

Please download the pretrained weights from here and unzip them under checkpoints/.

After downloading the pretrained model and setting up the data, you can try out our applications in the notebook demo.ipynb.

(The checkpoints above are reproduced, so the quantitative results may differ slightly from those reported. To get the original results, please check our released generated images here.)

(DIORv1_64 was trained with a minor difference in the code, but it may give better visual results in some applications. If you want to try it, specify --netG diorv1.)


Training

Warmup the Global Flow Field Estimator

Note: if you don't want to warm up the Global Flow Field Estimator yourself, you can extract its weights from the pretrained GFLA model, downloadable here. (See Issue #23 for how to extract the weights; a sketch follows.)
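For reference, a rough sketch of what that extraction could look like, assuming the GFLA checkpoint stores the flow estimator under a flow_net. key prefix (the file names and the prefix are assumptions; Issue #23 documents the actual mapping):

import torch

# Load a pretrained GFLA generator checkpoint (file name is illustrative).
ckpt = torch.load("gfla/latest_net_G.pth", map_location="cpu")

# Keep only the flow-estimator weights and strip the submodule prefix
# ("flow_net." is an assumption; check the actual state_dict keys).
flow_state = {k[len("flow_net."):]: v
              for k, v in ckpt.items() if k.startswith("flow_net.")}
torch.save(flow_state, "checkpoints/flownet_warmup.pth")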

Otherwise, run

sh scripts/run_pose.sh

Training

After warming up the flownet, train the full pipeline with

sh scripts/run_train.sh

Run tensorboard --logdir checkpoints/$EXP_NAME/train to monitor training in TensorBoard.

Note: resetting the discriminators may help when training gets stuck in a local minimum (see the sketch below).
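A minimal sketch of such a reset, assuming a pix2pix-style model object that exposes its discriminator as netD (the attribute name and the N(0, 0.02) init scheme are assumptions borrowed from the pix2pix code family, not necessarily this repo's exact API):

import torch.nn as nn

def reset_discriminator(netD: nn.Module) -> None:
    # Re-initialize the discriminator in place, as at the start of training.
    def init_fn(m):
        if isinstance(m, (nn.Conv2d, nn.ConvTranspose2d, nn.Linear)):
            nn.init.normal_(m.weight, 0.0, 0.02)
            if m.bias is not None:
                nn.init.zeros_(m.bias)
        elif isinstance(m, (nn.BatchNorm2d, nn.InstanceNorm2d)) and m.weight is not None:
            nn.init.normal_(m.weight, 1.0, 0.02)
            nn.init.zeros_(m.bias)
    netD.apply(init_fn)

# Usage (attribute name assumed): reset_discriminator(model.netD),
# then resume training with the current generator.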

Evaluations

Download Generated Images

Here are our generated images, which were used for the evaluation reported in the paper (DeepFashion dataset).

SSIM, FID and LPIPS

To run the evaluation (SSIM, FID and LPIPS) on the pose transfer task:

sh scripts/run_eval.sh
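If you instead want to compute the metrics directly on two folders of aligned generated/ground-truth images, here is a rough sketch using the widely used scikit-image, lpips and pytorch-fid packages (these stand in for whatever the repo's script calls internally; folder names are illustrative):

import os
import numpy as np
import torch
import lpips
from PIL import Image
from skimage.metrics import structural_similarity as ssim
from pytorch_fid import fid_score

REAL_DIR, FAKE_DIR = "real", "fake"  # same-named, pixel-aligned image pairs

def to_tensor(a):  # HWC uint8 -> NCHW float in [-1, 1], as LPIPS expects
    return torch.from_numpy(a).permute(2, 0, 1)[None].float() / 127.5 - 1.0

loss_fn = lpips.LPIPS(net="alex")
ssim_scores, lpips_scores = [], []
for name in sorted(os.listdir(REAL_DIR)):
    real = np.array(Image.open(os.path.join(REAL_DIR, name)).convert("RGB"))
    fake = np.array(Image.open(os.path.join(FAKE_DIR, name)).convert("RGB"))
    ssim_scores.append(ssim(real, fake, channel_axis=2, data_range=255))
    with torch.no_grad():
        lpips_scores.append(loss_fn(to_tensor(real), to_tensor(fake)).item())

fid = fid_score.calculate_fid_given_paths(
    [REAL_DIR, FAKE_DIR], batch_size=50, device="cpu", dims=2048)
print(f"SSIM {np.mean(ssim_scores):.4f}  LPIPS {np.mean(lpips_scores):.4f}  FID {fid:.2f}")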

Cite us!

If you find this work helpful, please consider starring 🌟 this repo and citing us as:

@InProceedings{Cui_2021_ICCV,
    author    = {Cui, Aiyu and McKee, Daniel and Lazebnik, Svetlana},
    title     = {Dressing in Order: Recurrent Person Image Generation for Pose Transfer, Virtual Try-On and Outfit Editing},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2021},
    pages     = {14638-14647}
}

Acknowledgements

This repository is built on GFLA, pytorch-CycleGAN-and-pix2pix, PATN and MUNIT. Please be aware of their licenses when using the code.

Many thanks to these pioneering researchers for their great work!
