This branch is deprecated and will be replaced in the near future by a new repo with updated settings and a refined network.
This repo is the official code for MW-GAN+ for Perceptual Quality Enhancement on Compressed Video (in submission), the improved version of our conference paper:
- Multi-level Wavelet-based Generative Adversarial Network for Perceptual Quality Enhancement of Compressed Video. Jianyi Wang, Xin Deng, Mai Xu, Congyong Chen, Yuhang Song.
Published at the 16th European Conference on Computer Vision (ECCV 2020). By MC2 Lab @ Beihang University.
| Compressed video (QP=42) | Ours |
| :---: | :---: |
| ![]() | ![]() |
| ![]() | ![]() |
- Python 3 (Anaconda is recommended).
- PyTorch >= 0.4.0 (the original code was tested with PyTorch <= 1.1.0, before pac.py was changed).
- See requirements.txt for other dependencies. You can just run:

```bash
conda install pytorch==1.1.0 torchvision==0.3.0 cudatoolkit=10.1
python -m pip install pyyaml opencv-python tensorboard future scikit-image tqdm
```
Following BasicSR, we use datasets in LMDB format for faster I/O.
- First run data_process.py to extract frames from videos.
- Then run extract_subimgs_single.py to cut frames into small pieces.
- Finally run create_lmdb_one.py to generate lmdb for training.
- Run create_lmdb_test.py to generate the LMDB for testing (you need to run data_process.py first to obtain the test frames from videos).
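The data-preparation steps above can be sketched as a single shell session. The scripts are called without arguments here, as in the steps above; check each script's header for the input/output paths it expects before running.

```bash
# 1. Extract frames from the raw and compressed videos.
python data_process.py

# 2. Cut the extracted frames into sub-images for training.
python extract_subimgs_single.py

# 3. Pack the sub-images into an LMDB for training.
python create_lmdb_one.py

# 4. Pack the test frames into an LMDB (requires step 1 on the test videos).
python create_lmdb_test.py
```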
- Run the following for training:

```bash
python train.py -opt options/train/train_MWGAN_rgb.yml
```

- Run the following for testing:

```bash
python test.py -opt options/test/test_MWGAN_rgb.yml
```
- train_MWGAN_yuv.yml and test_MWGAN_yuv.yml are for YUV training and testing. You can also use these files to reproduce MW-GAN from our conference paper; you only need to change a few settings and make minor modifications in DenseMWNet_arch.
Here we provide a model trained for QP=42. For other QPs, you can simply fine-tune from this model.
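Fine-tuning is configured in the training option file by pointing the generator's pretrained-model path at the provided QP42 weights. The fragment below is a hypothetical sketch following the BasicSR-style YAML convention this repo is built on; the weight path and learning rate are placeholders, and the exact key names may differ in your option file.

```yaml
# Hypothetical fragment of a training option file (e.g. options/train/train_MWGAN_rgb.yml)
path:
  pretrain_model_G: ./experiments/pretrained_models/MWGAN_QP42_G.pth  # placeholder path to provided weights
train:
  lr_G: !!float 1e-5  # a smaller learning rate is typical when fine-tuning
```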
This repo is built mainly on BasicSR. We also borrow code from pacnet, MWCNN_PyTorch, and PerceptualSimilarity. We thank the authors for their contributions to the community.
If you find our paper or code useful for your research, please cite:
@inproceedings{wang2020multi,
title={Multi-level Wavelet-Based Generative Adversarial Network for Perceptual Quality Enhancement of Compressed Video},
author={Wang, Jianyi and Deng, Xin and Xu, Mai and Chen, Congyong and Song, Yuhang},
booktitle={European Conference on Computer Vision},
pages={405--421},
year={2020},
organization={Springer}
}