
two-stage-knowledge-for-multiple-adverse-weather-removal's Introduction

Learning Multiple Adverse Weather Removal via Two-stage Knowledge Learning and Multi-contrastive Regularization: Toward a Unified Model

[CVPR 2022] Official PyTorch implementation.

[paper]


Abstract: In this paper, an ill-posed problem of multiple adverse weather removal is investigated. Our goal is to train a model with a 'unified' architecture and only one set of pretrained weights that can tackle multiple types of adverse weathers such as haze, snow, and rain simultaneously. To this end, a two-stage knowledge learning mechanism including knowledge collation (KC) and knowledge examination (KE) based on a multi-teacher and student architecture is proposed. At the KC, the student network aims to learn the comprehensive bad weather removal problem from multiple well-trained teacher networks where each of them is specialized in a specific bad weather removal problem. To accomplish this process, a novel collaborative knowledge transfer is proposed. At the KE, the student model is trained without the teacher networks and examined by challenging pixel loss derived by the ground truth. Moreover, to improve the performance of our training framework, a novel loss function called multi-contrastive knowledge regularization (MCR) loss is proposed. Experiments on several datasets show that our student model can achieve promising results on different bad weather removal tasks simultaneously.

Architecture

Overall Architecture

Collaborative Knowledge Transfer

Multi-contrastive Regularization

Quantitative Result

Setting1

Qualitative Result

Setting1

Usage

Pre-trained Models

Install

git clone https://github.com/fingerk28/Two-stage-Knowledge-For-Multiple-Adverse-Weather-Removal.git

Training

python train.py --teacher TEACHER_CHECKPOINT_PATH_0 TEACHER_CHECKPOINT_PATH_1 TEACHER_CHECKPOINT_PATH_2 --save-dir RESULTS_WILL_BE_SAVED_HERE

--teacher → accepts any number of teacher checkpoint paths
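
For example, with three teachers specialized in haze, rain, and snow removal respectively (the checkpoint paths below are purely illustrative):

python train.py --teacher ./checkpoints/teacher_haze.pth ./checkpoints/teacher_rain.pth ./checkpoints/teacher_snow.pth --save-dir ./results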


You need to prepare the meta files (.json) under ./meta.

The DatasetForTrain and DatasetForValid classes take all meta files under ./meta as their data sources.

The structure should be:

.
├── inference.py
├── meta
│   ├── train
│   │   ├── CSD_meta_train.json
│   │   └── Rain1400_meta_train.json
│   │   └── ...
│   └── valid
│       ├── CSD_meta_valid.json
│       └── Rain1400_meta_valid.json
│       └── ...
├── models
│   ├── ...
├── train.py
└── utils
    ├── ...


The structure of each .json file should be:

[
    [
        "path_to_GT_image0",
        "path_to_Input_image0"
    ],
    [
        "path_to_GT_image1",
        "path_to_Input_image1"
    ],
    ...
]
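
Such a meta file can be generated with a short script. The sketch below is not part of the repository; it assumes the ground-truth and degraded images live in two folders and share filenames (the folder names, output path, and pairing rule are assumptions, so adapt them to your dataset layout):

import json
import os

# Hypothetical locations; point these at your own GT / degraded image folders.
GT_DIR = "datasets/CSD/train/gt"
INPUT_DIR = "datasets/CSD/train/input"
OUT_META = "meta/train/CSD_meta_train.json"

def build_meta(gt_dir, input_dir, out_path):
    # Assumes GT and degraded images share the same filename.
    pairs = []
    for name in sorted(os.listdir(gt_dir)):
        gt_path = os.path.join(gt_dir, name)
        in_path = os.path.join(input_dir, name)
        if os.path.isfile(in_path):
            pairs.append([gt_path, in_path])  # [GT, Input] order, as in the example above
    os.makedirs(os.path.dirname(out_path), exist_ok=True)
    with open(out_path, "w") as f:
        json.dump(pairs, f, indent=4)

if __name__ == "__main__":
    build_meta(GT_DIR, INPUT_DIR, OUT_META)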

Inference

python inference.py --dir_path DIR_OF_TEST_IMAGES --checkpoint CHECKPOINT_PATH --save_dir RESULTS_WILL_BE_SAVED_HERE 

Other Works for Image Restoration

You can also refer to our previous works:

Citation

Please cite this paper in your publications if it is helpful for your tasks.

@inproceedings{Chen2022MultiWeatherRemoval,
  title={Learning Multiple Adverse Weather Removal via Two-stage Knowledge Learning and Multi-contrastive Regularization: Toward a Unified Model},
  author={Chen, Wei-Ting and Huang, Zhi-Kai and Tsai, Cheng-Che and Yang, Hao-Hsiang and Ding, Jian-Jiun and Kuo, Sy-Yen},
  booktitle={2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2022}
}


two-stage-knowledge-for-multiple-adverse-weather-removal's Issues

Metrics on RGB or Y?

Hi,

Thank you for your nice work.
I want to know how you calculate the metrics for reporting.

Here are some results using your code and your model weights.

  1. Run python inference.py --checkpoint student-setting1.pth xxx and save the restored images.
  2. Compute PSNR and SSIM using
    a) RGB: torchPSNR(pred_image, gt_image) + pytorch_ssim.ssim(pred_image, gt_image);
    b) Y: torchPSNR(rgb2ycbcr(pred_image[0]), rgb2ycbcr(gt_image[0])) + sk_cpt_ssim(rgb2ycbcr(pred_image[0]), rgb2ycbcr(gt_image[0]), data_range=1.0, multichannel=True).

The validation sets (and image counts) are: SOTS outdoor (500), Rain1400 (1400), CSD (2000).

Here are my results compared to the CVPR version:

(PSNR / SSIM)    Haze            Rain            Snow
RGB              31.35 / 0.9441  30.54 / 0.9004  30.10 / 0.9334
Y                33.39 / 0.9693  32.51 / 0.9218  31.68 / 0.9495
Yours (paper)    33.95 / 0.98    33.13 / 0.93    31.35 / 0.95

I think the metrics calculated on the Y channel are close to yours.
Can you clarify this? Thanks!
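
For reference, here is a minimal sketch of the Y-channel computation described in option (b) above, using scikit-image. The rescaling of the Y channel to [0, 1] and the data_range choice are assumptions, not necessarily the authors' exact evaluation code:

from skimage.color import rgb2ycbcr
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def y_channel_metrics(pred_rgb, gt_rgb):
    # pred_rgb / gt_rgb: H x W x 3 float arrays in [0, 1].
    # rgb2ycbcr returns Y in roughly [16, 235]; rescale to [0, 1] before comparing.
    pred_y = rgb2ycbcr(pred_rgb)[..., 0] / 255.0
    gt_y = rgb2ycbcr(gt_rgb)[..., 0] / 255.0
    psnr = peak_signal_noise_ratio(gt_y, pred_y, data_range=1.0)
    ssim = structural_similarity(gt_y, pred_y, data_range=1.0)
    return psnr, ssim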

Error when iterating over the dataset with tqdm!

pBar = tqdm(train_loader, desc='Training')
for target_images, input_images in pBar:
    if target_images is None: continue

Every time it runs, target_images and input_images are None (see the screenshot).

But my data loader is functioning normally (see the screenshot).

What is causing this?

outdoor-rain test set

Do you have the outdoor-rain test set? I downloaded the dataset from the original paper and only found the training set, not the test set. If you don't have it either, how did you deal with this?

Datasets used for training

In your paper you mention: "At the training stage, we sample 5000 images from "OTS", "Rain 1400", and "CSD" as three individual training sets, respectively.". Could you specify which images you used from each dataset for training?

Could you share the meta files for training?

Hi,
Thank you for sharing your nice work.
I also want to know the training data, i.e., which 5000 images are chosen for each training set.
Could you share the meta files for training?
Thanks

Possible error in the figure

In the first figure, are the positions of L^G_Pixel and L^T_Pixel swapped?

About Test set of Setting2 (Rainfog)

Hi @fingerk28 , thanks for your amazing work!

I'm interested in Setting2, i.e. Snow100K, Raindrop, and Rainfog. However, I can't find the corresponding test set for Rainfog. Where can I find the test set (Test1) corresponding to Rainfog?

Looking forward to your reply!

How to lower the batch size

Hi, when I tried to lower the batch size, I got a tensor error from dataset.py. Is there any way to lower the batch size properly without getting this error? I use the Rain1400 dataset.
Error message:
File "train.py", line 321, in
main()
File "train.py", line 305, in main
train_kc_stage(model, teacher_networks, ckt_modules, train_loader, optimizer, scheduler, epoch, criterions)
File "train.py", line 99, in train_kc_stage
for target_images, input_images in pBar:
File "/home/nccu/.local/lib/python3.7/site-packages/tqdm-4.64.0-py3.7.egg/tqdm/std.py", line 1195, in iter
for obj in iterable:
File "/home/nccu/anaconda3/envs/twostage/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 628, in next
data = self._next_data()
File "/home/nccu/anaconda3/envs/twostage/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 671, in _next_data
data = self._dataset_fetcher.fetch(index) # may raise StopIteration
File "/home/nccu/anaconda3/envs/twostage/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 61, in fetch
return self.collate_fn(data)
File "/home/nccu/Jason/Two-stage/utils/dataset.py", line 135, in call
input_images[i] = torch.cat(input_images[i])
RuntimeError: Sizes of tensors must match except in dimension 0. Expected size 224 but got size 175 for tensor number 9 in the list.
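
This error typically means the tensors being concatenated have different spatial sizes, e.g. some Rain1400 images are smaller than the 224-pixel crop. This is not the repository's own fix, but a generic workaround is to upscale small images before cropping so every patch in the batch has the same size. The helper below is a sketch under that assumption (ensure_min_size is a hypothetical name; images are assumed to be C x H x W float tensors):

import torch
import torch.nn.functional as F

def ensure_min_size(img, min_size=224):
    # img: C x H x W float tensor. Upscale (keeping aspect ratio) so that the
    # shorter side is at least min_size, so a min_size random crop always fits.
    _, h, w = img.shape
    scale = min_size / min(h, w)
    if scale <= 1.0:
        return img
    new_h = max(min_size, int(round(h * scale)))
    new_w = max(min_size, int(round(w * scale)))
    return F.interpolate(img.unsqueeze(0), size=(new_h, new_w),
                         mode='bilinear', align_corners=False).squeeze(0)

Applying such a step inside the dataset's __getitem__, before the random crop, keeps all tensors in a batch the same size, so the torch.cat call in the collate function no longer fails.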

Looking forward to the paper

Hi! Thank you for your exciting work! I am looking forward to the paper! Could you please provide the arXiv link to the paper?
