PyTorch code for our TIP 2022 paper "Video Super-Resolution via a Spatio-Temporal Alignment Network" [Paper | ResearchGate].
Requirements: CUDA 10.2, gcc 5.4, PyTorch 1.4. Install dependencies with:
bash install.sh
python train.py -datasets_tasks W3_D1_C1_I1
The file lists/train_tasks_W3_D1_C1_I1.txt specifies the dataset-task pairs used for training and testing.
python test.py -method model_name -epoch N -dataset REDS4 -task SR_color/super-resolution
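For evaluating several checkpoints in a row, the test command above can be wrapped in a loop. A minimal sketch, assuming the method name is STAN and the epoch numbers 100/200/300 are placeholders for whichever checkpoints you saved; the loop only prints each command so you can review it first:

```shell
# Hypothetical sweep over saved checkpoints; model name and epochs are assumptions.
for epoch in 100 200 300; do
  cmd="python test.py -method STAN -epoch ${epoch} -dataset REDS4 -task SR_color"
  echo "$cmd"   # replace echo with eval "$cmd" to actually run the evaluation
done
```

Swap `echo` for `eval` once the printed commands match your checkpoint layout.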
The pretrained STAN model can be downloaded here.
If you find the code and paper useful in your research, please cite:
@article{wen2022video,
  title={Video Super-Resolution via a Spatio-Temporal Alignment Network},
  author={Wen, Weilei and Ren, Wenqi and Shi, Yinghuan and Nie, Yunfeng and Zhang, Jingang and Cao, Xiaochun},
  journal={IEEE Transactions on Image Processing},
  volume={31},
  pages={1761--1773},
  year={2022},
  publisher={IEEE}
}
This project is based on [Learning Blind Video Temporal Consistency], and our filter-adaptive alignment module is based on [STFAN].
@inproceedings{zhou2019stfan,
  title={Spatio-Temporal Filter Adaptive Network for Video Deblurring},
  author={Zhou, Shangchen and Zhang, Jiawei and Pan, Jinshan and Xie, Haozhe and Zuo, Wangmeng and Ren, Jimmy},
  booktitle={Proceedings of the IEEE International Conference on Computer Vision},
  year={2019}
}
@inproceedings{Lai-ECCV-2018,
  title={Learning Blind Video Temporal Consistency},
  author={Lai, Wei-Sheng and Huang, Jia-Bin and Wang, Oliver and Shechtman, Eli and Yumer, Ersin and Yang, Ming-Hsuan},
  booktitle={European Conference on Computer Vision},
  year={2018}
}