# Towards Efficient and Scale-Robust Ultra-High-Definition Image Demoireing

**Towards Efficient and Scale-Robust Ultra-High-Definition Image Demoireing** (ECCV 2022)
Xin Yu, Peng Dai, Wenbo Li, Lan Ma, Jiajun Shen, Jia Li, [Xiaojuan Qi](https://scholar.google.com/citations?user=bGn0uacAAAAJ&hl=en).
<br>[Paper (coming soon)](---), [Project page](https://xinyu-andy.github.io/uhdm-page/), [Dataset](https://drive.google.com/drive/folders/1DyA84UqM7zf3CeoEBNmTi_dJ649x2e7e?usp=sharing)

## Introduction
When photographing content displayed on a digital screen, frequency aliasing between the camera's
color filter array (CFA) and the screen's LCD subpixels is almost inevitable. The captured images are thus mixed with colorful
stripes, called moire patterns, which severely degrade their perceptual quality. Although a plethora of dedicated
demoireing methods have been proposed in the research community recently, they are still far from achieving promising results
in real-world scenes. The key limitation of these methods is that they are developed and evaluated only on low-resolution or
synthetic images. However, with the rapid development of mobile devices, modern widely used mobile phones typically allow
users to capture 4K-resolution (i.e., ultra-high-definition) images, so the effectiveness of these methods in this
practical scenario is not guaranteed. In this work, we explore moire pattern removal for ultra-high-definition images.
First, we propose the first ultra-high-definition demoireing dataset (UHDM), which contains 5,000 real-world 4K-resolution
image pairs, and conduct a benchmark study on the current state of the art. We then analyze the limitations
of these methods and identify their key issue: they are not scale-robust. To address this deficiency,
we deliver a plug-and-play semantic-aligned scale-aware module, which helps us build a frustratingly simple baseline
model for tackling 4K moire images. Our framework is easy to implement and fast at inference, achieving state-of-the-art
results on four demoireing datasets while being much more lightweight.
We hope our investigation will inspire more future research into this more practical setting of image demoireing.

## Environments

First, make sure that you have installed all dependencies. To do so, you can create an anaconda environment called `esdnet` using

```
conda env create -f environment.yaml
conda activate esdnet
```

Our implementation has been tested on a single NVIDIA RTX 3090 GPU with CUDA 11.2.

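Before training, you may want to confirm that the environment actually sees the GPU. The snippet below is just a generic sanity check, assuming the `esdnet` environment ships with a CUDA-enabled PyTorch build; it is not part of this repository:

```
# Generic GPU sanity check; assumes the environment provides a CUDA-enabled PyTorch.
import torch

print(torch.__version__)          # installed PyTorch version
print(torch.cuda.is_available())  # should print True on a working CUDA setup
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # e.g., "NVIDIA GeForce RTX 3090"
```
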
## Dataset

We provide the 4K dataset UHDM for you to evaluate a pretrained model or train a new model.
You can download it [here](https://drive.google.com/drive/folders/1DyA84UqM7zf3CeoEBNmTi_dJ649x2e7e?usp=sharing),
or simply run the following command for automatic data downloading:
```
bash scripts/download_data.sh
```
The dataset will then be available in the folder `uhdm_data/`.

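To sanity-check the download, you can peek at a few files and their resolutions. The snippet below is a minimal sketch: the sub-folder layout and the `.jpg` extension under `uhdm_data/` are assumptions, so adjust the pattern to whatever you find after extraction:

```
# Minimal sketch for inspecting the downloaded data.
# NOTE: the layout and file extension under `uhdm_data/` are assumptions.
from pathlib import Path
from PIL import Image

for path in sorted(Path("uhdm_data").rglob("*.jpg"))[:10]:  # first few files only
    with Image.open(path) as img:
        print(path, img.size)  # 4K images should be around 3840x2160 or larger
```
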
## Train
To train a model from scratch, simply run:

```
python train.py --config CONFIG.yaml
```
where you replace `CONFIG.yaml` with the name of the configuration file you want to use.
We have included configuration files for each dataset under the folder `config/`.

For example, if you want to train our lightweight model ESDNet on the UHDM dataset, run:
```
python train.py --config ./config/uhdm_config.yaml
```

## Test
To test a model, you can also simply run:

```
python test.py --config CONFIG.yaml
```

where you need to specify the value of `TEST_EPOCH` in `CONFIG.yaml` to evaluate a model trained for a specific number of epochs,
or you can specify the value of `LOAD_PATH` to directly load a pre-trained checkpoint.

We provide pre-trained models [here](https://drive.google.com/drive/folders/12buOOBKDBdQ65gM8U1rRNpSHppQ_u9Lr?usp=sharing).
To download the checkpoints, you can also simply run:

```
bash scripts/download_model.sh
```

The checkpoints will then be placed in the folder `pretrain_model/`.

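If you want to inspect a downloaded checkpoint directly, a generic PyTorch load works for most `.pth` files. The file name and the internal dictionary layout below are assumptions, since these vary between projects:

```
# Generic checkpoint inspection; the file name and dict layout are assumptions.
import torch

ckpt = torch.load("pretrain_model/uhdm_checkpoint.pth", map_location="cpu")

# Checkpoints are typically either a raw state_dict or a dict wrapping one.
state = ckpt.get("state_dict", ckpt) if isinstance(ckpt, dict) else ckpt
for name, value in list(state.items())[:5]:  # show the first few entries
    print(name, tuple(value.shape) if hasattr(value, "shape") else type(value))
```
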

## Contact
If you have any questions, you can email me ([email protected]).
