
deraindrop's Introduction

Attentive Generative Adversarial Network for Raindrop Removal from A Single Image (CVPR'2018)

Rui Qian, Robby T. Tan, Wenhan Yang, Jiajun Su and Jiaying Liu

[Paper Link] [Project Page] [Slides] (TBA) [Video] (TBA) (CVPR'18 Spotlight)

Abstract

Raindrops adhered to a glass window or camera lens can severely hamper the visibility of a background scene and degrade an image considerably. In this paper, we address the problem by visually removing raindrops, and thus transforming a raindrop degraded image into a clean one. The problem is intractable, since first the regions occluded by raindrops are not given. Second, the information about the background scene of the occluded regions is completely lost for most part. To resolve the problem, we apply an attentive generative network using adversarial training. Our main idea is to inject visual attention into both the generative and discriminative networks. During the training, our visual attention learns about raindrop regions and their surroundings. Hence, by injecting this information, the generative network will pay more attention to the raindrop regions and the surrounding structures, and the discriminative network will be able to assess the local consistency of the restored regions. Apart from raindrop removal, this injection of visual attention to both generative and discriminative networks is another contribution of this paper. Our experiments show the effectiveness of our approach, which outperforms the state of the art methods quantitatively and qualitatively.
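As a rough illustration of the "injection" idea, one simple way to expose an attention map to later layers is to append it as an extra input channel. This is a hypothetical NumPy sketch only, not the paper's actual architecture, which injects attention into intermediate convolutional features of both networks:

```python
import numpy as np

def inject_attention(image, attn_map):
    """Append a single-channel attention map to an H x W x C image so that
    subsequent layers can weight raindrop regions more heavily.

    Hypothetical illustration; the real model operates on conv features.
    """
    assert image.shape[:2] == attn_map.shape
    return np.concatenate([image, attn_map[..., None]], axis=-1)
```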

If you find the resource useful, please cite the following :-)

@InProceedings{Qian_2018_CVPR,
author = {Qian, Rui and Tan, Robby T. and Yang, Wenhan and Su, Jiajun and Liu, Jiaying},
title = {Attentive Generative Adversarial Network for Raindrop Removal From a Single Image},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2018}
}

Prerequisites:

  1. Linux
  2. Python 2.7
  3. NVIDIA GPU + CUDA CuDNN (CUDA 8.0)

Installation:

  1. Clone this repo
  2. Install PyTorch and dependencies from http://pytorch.org

Note: the code is written for PyTorch 0.3.1.

Demo

Sample pictures are provided under ./demo/input/, and ./demo/output/ contains a sample of the model's output. To generate your own results, run:

CUDA_VISIBLE_DEVICES=gpu_id python predict.py --mode demo --input_dir ./demo/input/ --output_dir ./demo/output/

Dataset

The whole dataset can be found here:

https://drive.google.com/open?id=1e7R76s6vwUJxILOcAsthgDLPSnOrQ49K

Training Set:

861 image pairs for training.

Testing Set A:

For quantitative evaluation; the image pairs in this set are well aligned. A subset of Testing Set B.

Testing Set B:

239 image pairs for testing.

Testing

For quantitative evaluation:

Put the test_a dataset under the DeRaindrop directory, and run:

CUDA_VISIBLE_DEVICES=gpu_id python predict.py --mode test --input_dir ./test_a/data/ --gt_dir ./test_a/gt/

Note: because some information (e.g. vehicle license plate numbers) is masked out, the PSNR drops slightly (by 0.06 dB) compared to the result reported in the paper.
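For reference, PSNR between a restored image and its ground truth can be computed as below. This is a minimal NumPy sketch; the repository's own evaluation code may differ in color space, cropping, or how scores are averaged:

```python
import numpy as np

def psnr(pred, gt, max_val=255.0):
    """Peak signal-to-noise ratio, in dB, between a restored image `pred`
    and its ground truth `gt` (both arrays with values in [0, max_val])."""
    mse = np.mean((pred.astype(np.float64) - gt.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```

Higher is better; on this scale a 0.06 dB difference is small.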

For qualitative evaluation:

Similar to the demo: just change the input and output directories as you like.

Contact

If you have questions, you can contact [email protected].


deraindrop's Issues

How to Get the Ground Truth?

Hello! Thank you for releasing the code. I don't know how you got the ground truth of a picture with raindrops. Could you tell me how you did it? Thank you!

Confusion of the loss function

Hi!
I wonder whether the first term of Eq. (9) becomes larger when the generator's output O gets close to the target after many epochs, since it then becomes hard for the discriminator to infer the attention map from O. So does the loss function in Eq. (9) still work after several epochs?
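On the question above: a discriminator loss with an attention-map consistency term, of the kind Eq. (9) adds, can be sketched like this. This is a hypothetical NumPy illustration with made-up names; see the paper for the exact formulation:

```python
import numpy as np

def attention_map_loss(pred_map_fake, attn_map, pred_map_real):
    """Hypothetical sketch of an attention-map term: the discriminator's
    predicted map for a generated image should match the generator's
    attention map, while its predicted map for a real (clean) image
    should be all zeros."""
    fake_term = np.mean((pred_map_fake - attn_map) ** 2)
    real_term = np.mean(pred_map_real ** 2)
    return fake_term + real_term
```

One reading: this term trains the discriminator's map prediction rather than comparing O to the target directly, so it can remain meaningful even once O is visually close to the ground truth.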

What GPU did you use?

Thanks for your work! When I run the demo on a 1080 GPU it runs out of memory. What GPU did you use for training and testing? Thanks!

can you release some training details?

Hey Rui,

Your work is interesting and amazing!!!
Could you please release some training details like batch size, input size?
I guess you input the whole image into the net without down-sampling or cropping.

Best,
Zewei

How to Get the mask M?

Hello! Thank you for releasing the code. I don't know how you obtained the binary mask M in the following formula. Could you tell me how you did it? Thank you!

Differences

Hi, I want to tell you two things to update.
1. You need to update your README.md file: the dataset links are invalid now, so you need to change them or add new instructions, e.g.:

  • Change the value of the argument "baseroot" in "./test.sh" to the path of the testing data. Be careful when loading your own images: the image loader looks under baseroot/small/rain and baseroot/small/unrain, and corresponding images in these folders need to have the same names.

This applies to the third point under Testing.

2. In validation.py, because height_origin and width_origin are tensors rather than ints, you need to add these lines to the for loop at line 87:

        height_origin = int(height_origin.item())
        width_origin = int(width_origin.item())

mistake in discriminator.py?

The code in discriminator.py (lines 68-70) reads:

x = self.conv7(x * mask)
x = self.conv7(x)
x = self.conv8(x)

Is the second "x = self.conv7(x)" extraneous? The code can't run with it included.
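Assuming the duplicated line is indeed a typo, the intended control flow would apply the mask once and each conv layer once. A minimal stand-in with plain functions in place of the conv layers (names follow the issue above; this is not the repository's code):

```python
def conv7(x):
    """Placeholder for self.conv7 (a conv layer in the real code)."""
    return x + 1

def conv8(x):
    """Placeholder for self.conv8 (a conv layer in the real code)."""
    return x * 2

def forward_tail(x, mask):
    # Mask the features once, then apply conv7 and conv8 once each,
    # with the duplicated conv7 call removed.
    x = conv7(x * mask)
    x = conv8(x)
    return x
```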
