This is the official PyTorch implementation of the paper Decoupling Degradations with Recurrent Network for Video Restoration in Under-Display Camera.
- We propose a novel network with long- and short-term video representation learning by decoupling video degradations for the UDC video restoration task (D$^2$RNet), which is the first work to address UDC video degradation. The core decoupling attention module (DAM) enables a tailored solution to the degradations caused by different incident light intensities in the video (a conceptual sketch of this decoupling idea follows this list).
- We propose a large-scale UDC video restoration dataset (VidUDC33K), which includes numerous challenging scenarios. To the best of our knowledge, this is the first dataset for UDC video restoration.
- Extensive quantitative and qualitative evaluations demonstrate the superiority of D$^2$RNet. On the proposed VidUDC33K dataset, D$^2$RNet achieves a 1.02 dB PSNR improvement over other restoration methods.
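The snippet below is a minimal conceptual sketch of the decoupling idea, not the official DAM implementation: features are split by a soft intensity mask into a flare-prone branch (strong incident light) and a diffraction/blur-prone branch (weak incident light), then fused. All module names, layer choices, and channel sizes here are hypothetical.

```python
import torch
import torch.nn as nn

class IntensityDecoupling(nn.Module):
    """Conceptual sketch (not the official DAM): split features with a soft
    luminance mask so high- and low-intensity regions get separate branches."""
    def __init__(self, channels: int = 64):
        super().__init__()
        # predicts a soft incident-light-intensity mask in [0, 1]
        self.mask_head = nn.Sequential(nn.Conv2d(channels, 1, 3, padding=1), nn.Sigmoid())
        self.flare_branch = nn.Conv2d(channels, channels, 3, padding=1)  # strong-light (flare) regions
        self.blur_branch = nn.Conv2d(channels, channels, 3, padding=1)   # weak-light (diffraction blur) regions
        self.fuse = nn.Conv2d(2 * channels, channels, 1)

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        m = self.mask_head(feat)                       # B x 1 x H x W soft mask
        high = self.flare_branch(feat * m)             # tailored to strong incident light
        low = self.blur_branch(feat * (1.0 - m))       # tailored to weak incident light
        return self.fuse(torch.cat([high, low], dim=1))

# usage: out = IntensityDecoupling()(torch.randn(1, 64, 64, 64))
```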
- Download the original HDR videos and real videos from Google Drive or Baidu Drive (code: 4k84) into `./dataset`.
- Unzip the original HDR videos and real videos:
```
cd ./dataset
unzip Video.zip
unzip Real_Video.zip
```
- Generate the sequences for training and testing based on `synthvideo_meta.txt` and `ZTE_new_psf_5.npy`:
```
python generate_synthvideo.py
```
The principle of generating the synthetic dataset is as follows:
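In outline, each HDR frame is convolved with the ZTE display PSF stored in `ZTE_new_psf_5.npy`, sensor noise is added, and a tone-mapping curve produces the degraded input, while the clean tone-mapped frame serves as ground truth. The sketch below is a minimal illustration under these assumptions; the exact noise model and tone-mapping curve used by `generate_synthvideo.py` may differ.

```python
import numpy as np
from scipy.signal import fftconvolve

def synthesize_udc_frame(hdr_frame: np.ndarray, psf: np.ndarray,
                         noise_sigma: float = 0.01) -> np.ndarray:
    """Illustrative UDC degradation: PSF convolution + noise + tone mapping.
    hdr_frame: H x W x 3 linear HDR image; psf: h x w (or h x w x 3) kernel."""
    degraded = np.empty_like(hdr_frame, dtype=np.float64)
    for c in range(hdr_frame.shape[2]):                      # per-channel convolution with the PSF
        k = psf[..., c] if psf.ndim == 3 else psf
        degraded[..., c] = fftconvolve(hdr_frame[..., c], k / k.sum(), mode="same")
    degraded += np.random.normal(0.0, noise_sigma, degraded.shape)  # assumed Gaussian sensor noise
    degraded = np.clip(degraded, 0, None)
    return np.clip(degraded / (degraded + 0.25), 0.0, 1.0)          # simple tone-mapping curve (assumption)

# psf = np.load("dataset/ZTE_new_psf_5.npy")   # PSF shipped with the dataset
```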
- Generate the sequences for real-scenario validation based on `realvideo_meta.txt`:
```
python generate_realdata.py
```
- Make the VidUDC33K dataset structure as follows:
```
├────dataset
    ├────VidUDC33K
        ├────Input
            ├────000
                ├────000.npy
                ├────...
                ├────049.npy
            ├────001
            ├────...
            ├────676
        ├────GT
            ├────000
                ├────000.npy
                ├────...
                ├────049.npy
            ├────001
            ├────...
            ├────676
    ├────VidUDC33K_real
        ├────Input
            ├────000
                ├────000.npy
                ├────...
                ├────049.npy
            ├────001
            ├────...
            ├────009
        ├────GT
            ├────000
                ├────000.npy
                ├────...
                ├────049.npy
            ├────001
            ├────...
            ├────009
    ├────synthvideo_meta.txt
    ├────realvideo_meta.txt
    ├────ZTE_new_psf_5.npy
```
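Each sequence folder stores one `.npy` file per frame. A degraded clip and its ground truth can be loaded as in the sketch below; the assumption that each array is an H x W x C frame should be checked against the generated data.

```python
import os
import numpy as np

def load_clip(root: str, split: str, seq: str = "000"):
    """Load one input/GT sequence pair from the VidUDC33K directory layout."""
    lq_dir = os.path.join(root, split, "Input", seq)
    gt_dir = os.path.join(root, split, "GT", seq)
    frames = sorted(f for f in os.listdir(lq_dir) if f.endswith(".npy"))
    lq = np.stack([np.load(os.path.join(lq_dir, f)) for f in frames])  # T x H x W x C
    gt = np.stack([np.load(os.path.join(gt_dir, f)) for f in frames])
    return lq, gt

# lq, gt = load_clip("./dataset", "VidUDC33K", "000")
```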
The distribution of the dataset is as follows:
- Clone this GitHub repo:
```
git clone https://github.com/ChengxuLiu/DDRNet.git
cd DDRNet
```
- Prepare the testing dataset and modify `folder_lq` and `folder_gt` in `./test.py`.
- Run the test:
```
python test.py --save_result
```
- The results are saved in `./results`.
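As a quick sanity check on the saved outputs, PSNR against the ground truth can be computed as below; the result paths and `.npy` frame format are assumptions about how `test.py --save_result` stores its outputs.

```python
import numpy as np

def psnr(pred: np.ndarray, gt: np.ndarray, data_range: float = 1.0) -> float:
    """Peak signal-to-noise ratio between a restored frame and its ground truth."""
    mse = np.mean((pred.astype(np.float64) - gt.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(data_range ** 2 / mse)

# example (hypothetical paths): compare one restored frame with its GT
# pred = np.load("./results/000/000.npy")
# gt = np.load("./dataset/VidUDC33K/GT/000/000.npy")
# print(f"PSNR: {psnr(pred, gt):.2f} dB")
```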
- Clone this GitHub repo:
```
git clone https://github.com/ChengxuLiu/DDRNet.git
cd DDRNet
```
- Prepare the training dataset and modify `dataroot_gt` and `dataroot_lq` in `./options/DDRNet/train_DDRNet.json`.
- Run the training:
```
python train.py --opt ./options/DDRNet/train_DDRNet.json
```
or, for distributed training (e.g. 4 GPUs):
```
python -m torch.distributed.launch --nproc_per_node=4 --master_port=23333 train.py --opt ./options/DDRNet/train_DDRNet.json --dist True
```
- The trained models are saved in `./experiments`.
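A saved checkpoint under `./experiments` can be inspected with plain PyTorch as below; the checkpoint file name and the keys inside it are assumptions that depend on the training run.

```python
import torch

# hypothetical checkpoint path under ./experiments; adjust to your own run
ckpt_path = "./experiments/train_DDRNet/models/10000_G.pth"
ckpt = torch.load(ckpt_path, map_location="cpu")

# a checkpoint is typically either a raw state_dict or a dict wrapping one
state_dict = ckpt.get("params", ckpt) if isinstance(ckpt, dict) else ckpt
for name, tensor in list(state_dict.items())[:5]:
    print(name, tuple(tensor.shape))
```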
The output results on the VidUDC33K testing set can be downloaded from Google Drive or Baidu Drive (code: 4k84).
If you find the code and pre-trained models useful for your research, please consider citing our paper. 😊
```
@inproceedings{liu2024decoupling,
  title     = {Decoupling Degradations with Recurrent Network for Video Restoration in Under-Display Camera},
  author    = {Liu, Chengxu and Wang, Xuan and Fan, Yuanting and Li, Shuai and Qian, Xueming},
  booktitle = {Proceedings of the 38th AAAI Conference on Artificial Intelligence},
  year      = {2024}
}
```
If you meet any problems, please open an issue or contact:
- Chengxu Liu: [email protected]
The code of DDRNet is built upon RVRT, DISCNet, and MMagic, and we express our gratitude to these awesome projects.