This project replicates the image-to-image translation approach from the original Pix2Pix project. It focuses on implementing the Edges2Shoes translation task using PyTorch.
The original study and code can be found here
- Linux or macOS
- Python 3
- PyTorch
- Jupyter Notebook
- GPU (alternatively, you can use a GPU provided by GCP or AWS)
Download the Edges2Shoes dataset. It contains about 50K images of various shoes, which can be found on the UT Zappos50K website.
Alternatively, you can download the dataset via wget from the terminal or a Jupyter notebook:

```
wget "http://efrosgans.eecs.berkeley.edu/pix2pix/datasets/edges2shoes.tar.gz" -P "DESTINATION_PATH"
```

Note: the wget option is preferable, as the downloaded archive contains both the preprocessed HED edge maps and the original images.
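Each file in the downloaded archive stores the edge map and the photo side by side in one image, so it must be split into an input/target pair before training. Below is a minimal sketch of that split using NumPy; the 256×512 paired layout and the `split_pair` helper name are assumptions based on the standard pix2pix aligned-dataset format, not code from this repo:

```python
import numpy as np

def split_pair(paired):
    """Split a paired image of shape (H, 2W, C) into (edge_map, photo), each (H, W, C)."""
    h, w2, c = paired.shape
    w = w2 // 2
    return paired[:, :w, :], paired[:, w:, :]

# Dummy 256x512 RGB array standing in for one dataset image.
paired = np.zeros((256, 512, 3), dtype=np.uint8)
edges, photo = split_pair(paired)
print(edges.shape, photo.shape)  # (256, 256, 3) (256, 256, 3)
```

In the actual data pipeline the same slicing would be applied after loading each image (e.g. with PIL) and before converting to tensors.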
You can run the Jupyter notebook on a local machine with a GPU installed.
This project also provides an example notebook that was run in Google Colab. However, that notebook was trained with only 2000 training images due to some constraints in performing training with GCP. It took about eight hours to train for 500 epochs on GCP.
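For reference, the generator in Pix2Pix is trained on a combination of an adversarial loss and an L1 reconstruction loss, with the L1 term weighted by λ = 100 in the original paper. The sketch below illustrates that objective with plain NumPy arithmetic (the function names and toy values are illustrative, not taken from this repo's notebook):

```python
import numpy as np

LAMBDA = 100  # L1 weight used in the original Pix2Pix paper

def bce(pred, target, eps=1e-7):
    """Binary cross-entropy over discriminator probabilities in (0, 1)."""
    pred = np.clip(pred, eps, 1 - eps)
    return -np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred))

def generator_loss(d_fake, fake, real):
    """Adversarial term (fool D into predicting 1) plus weighted L1 reconstruction."""
    adv = bce(d_fake, np.ones_like(d_fake))
    l1 = np.mean(np.abs(fake - real))
    return adv + LAMBDA * l1

# Toy example: D outputs 0.5 on fakes; fake differs from real by 0.1 everywhere.
d_fake = np.full((4, 1), 0.5)
fake = np.full((4, 3, 8, 8), 0.6)
real = np.full((4, 3, 8, 8), 0.5)
loss = generator_loss(d_fake, fake, real)
```

In the PyTorch training loop the same objective would typically be built from `nn.BCEWithLogitsLoss` and `nn.L1Loss`; the large λ pushes the generator toward outputs that stay close to the ground-truth photo.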
- pytorch-CycleGAN-and-pix2pix: https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix
- pix2pix-pytorch: https://github.com/mrzhu-cool/pix2pix-pytorch
- Machine Learning Mastery: https://machinelearningmastery.com/how-to-develop-a-pix2pix-gan-for-image-to-image-translation/
- Udacity Deep Learning: https://github.com/udacity/deep-learning-v2-pytorch