
twostagehdr_ntire21's Introduction

Two-stage LDR to HDR Image Reconstruction

This is the official implementation of the paper titled "A Two-stage Deep Network for High Dynamic Range Image Reconstruction". The paper has been accepted for publication in the proceedings of CVPRW21. To download the full paper [Click Here].

Please consider citing this paper as follows:

@inproceedings{a2021two,
  title={A two-stage deep network for high dynamic range image reconstruction},
  author={Sharif, SMA and Naqvi, Rizwan Ali and Biswas, Mithun and Kim, Sungjun},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={550--559},
  year={2021}
}

Overview


Figure: Overview of the proposed method. The proposed method comprises a two-stage deep network. Stage-I aims to perform image enhancement tasks such as denoising and exposure correction. Stage-II intends to perform tone mapping and bit-expansion.
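As a rough illustration of this structure, below is a minimal PyTorch sketch of a two-stage LDR-to-HDR pipeline. The module names (StageOne, StageTwo, TwoStageHDR) and the layer choices are illustrative assumptions only, not the actual networks proposed in the paper.

import torch
import torch.nn as nn

class StageOne(nn.Module):
    """Stage-I placeholder (assumed): enhancement such as denoising and exposure correction."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, 3, padding=1),
        )
    def forward(self, x):
        return x + self.body(x)  # residual enhancement of the LDR input

class StageTwo(nn.Module):
    """Stage-II placeholder (assumed): tone mapping and bit-expansion to the HDR domain."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, 3, padding=1),
        )
    def forward(self, x):
        return self.body(x)

class TwoStageHDR(nn.Module):
    def __init__(self):
        super().__init__()
        self.stage1 = StageOne()
        self.stage2 = StageTwo()
    def forward(self, ldr):
        enhanced = self.stage1(ldr)   # Stage-I output: cleaned, exposure-corrected LDR
        return self.stage2(enhanced)  # Stage-II output: reconstructed HDR estimate

hdr = TwoStageHDR()(torch.rand(1, 3, 256, 256))  # e.g., a single 256x256 LDR patch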

Comparison with state-of-the-art single-shot LDR to HDR deep methods


Figure: Quantitative comparison between the proposed method and existing learning-based single-shot LDR to HDR methods.

Prerequisites

Python 3.8
CUDA 10.1 + CuDNN
pip
Virtual environment (optional)

Installation

Please consider using a virtual environment to continue the installation process.

git clone https://github.com/sharif-apu/twostageHDR_NTIRE21.git
cd twostageHDR_NTIRE21
pip install -r requirement.txt

Training

To download the training images, please visit the following link: [Click Here] and extract the zip files into a common directory.
The original paper used image patches from the HdM HDR dataset. To extract image patches, please execute the Extras/processHDMDHR.py script from the root directory as follows:

python processHDMDHR.py -r path/to/HdM/root/ -t path/to/save/patch -p 256
Here, the -r flag defines the root directory of the HdM HDR training samples, the -t flag defines the directory where patches should be saved, and the -p flag defines the patch size.
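For reference, the sketch below shows the general idea of cutting fixed-size patches from every image in a directory; the extract_patches function, the *.png filter, and the OpenCV dependency are illustrative assumptions, not the actual contents of Extras/processHDMDHR.py.

import os
from glob import glob
import cv2  # assumed dependency (opencv-python)

def extract_patches(root, target, patch=256):
    """Cut non-overlapping patch x patch crops from every PNG under root and save them to target."""
    os.makedirs(target, exist_ok=True)
    for path in glob(os.path.join(root, "*.png")):
        img = cv2.imread(path, cv2.IMREAD_UNCHANGED)
        if img is None:
            continue  # skip unreadable files
        h, w = img.shape[:2]
        name = os.path.splitext(os.path.basename(path))[0]
        for y in range(0, h - patch + 1, patch):
            for x in range(0, w - patch + 1, patch):
                crop = img[y:y + patch, x:x + patch]
                cv2.imwrite(os.path.join(target, "{}_{}_{}.png".format(name, y, x)), crop)

# extract_patches("path/to/HdM/root/", "path/to/save/patch", patch=256)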


After extracting the patches, please execute the following commands to start training:

python main.py -ts -e X -b Y
To specify your training image path, go to mainModule/config.json and update the "trainingImagePath" entry.
You can specify the number of epochs with the -e flag (i.e., -e 5) and the number of images per batch with the -b flag (i.e., -b 24).
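If you prefer to set the path programmatically, here is a minimal sketch using Python's json module; only the "trainingImagePath" key is taken from this README, and the assumption is that it sits at the top level of mainModule/config.json.

import json

cfg_path = "mainModule/config.json"
with open(cfg_path) as f:
    config = json.load(f)

# Point the training path at the directory produced by the patch-extraction step.
config["trainingImagePath"] = "path/to/save/patch/"

with open(cfg_path, "w") as f:
    json.dump(config, f, indent=4)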

Please Note: The provided code aims to train only with medium exposure frames. To train with short/long exposure frames, you need to modify the dataTools/customDataloader (line 68) and mainModule/twostageHDR (line 87).

For transfer learning, execute:
python main.py -tr -e -b

Testing

The provided weights are trained as per the rules of the NTIRE21 HDR challenge (single frame). To download the testing images please visit the following link: [Click Here]

To run inference with the provided pretrained weights, please execute the following command:
python main.py -i -s path/to/inputImages -d path/to/outputImages
Here, -s specifies the root directory of the source images (i.e., testingImages/), and -d specifies the destination root (i.e., modelOutput/).
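The inference entry point can also be called from Python. The sketch below mirrors the twostageHDR(config).modelInference(...) call made by main.py (visible in the tracebacks quoted in the Issues below); loading config.json as a plain dictionary is an assumption about how main.py actually builds its config object.

import json
from mainModule.twostageHDR import twostageHDR  # module path as shown in the repository's tracebacks

# Assumed: main.py passes the parsed config.json contents straight to twostageHDR.
with open("mainModule/config.json") as f:
    config = json.load(f)

# Reconstruct HDR outputs for every LDR image under testingImages/ into modelOutput/.
twostageHDR(config).modelInference("testingImages/", "modelOutput/")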

LDR52 Dataset

We have collected an LDR dataset captured with different camera hardware. Feel free to use our LDR dataset in your respective work. The dataset can be downloaded from the following link: [Click Here]

Contact

For any further queries, feel free to contact us through the following emails: [email protected], [email protected], or [email protected]

twostagehdr_ntire21's People

Contributors

sharif-apu


twostagehdr_ntire21's Issues

error while running

Traceback (most recent call last):
File "C:\Users\MyPC\Downloads\twostageHDR_NTIRE21-master\twostageHDR_NTIRE21-master\main.py", line 37, in
twostageHDR(config).modelInference(options.sourceDir, options.resultDir)
File "C:\Users\MyPC\Downloads\twostageHDR_NTIRE21-master\twostageHDR_NTIRE21-master\mainModule\twostageHDR.py", line 290, in modelInference
testImageList = modelInference.testingSetProcessor()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\MyPC\Downloads\twostageHDR_NTIRE21-master\twostageHDR_NTIRE21-master\utilities\inferenceUtils.py", line 128, in testingSetProcessor
testSetName = t.split("/")[-2]
~~~~~~~~~~~~^^^^
IndexError: list index out of range

pretrained load error

RuntimeError: Error(s) in loading state_dict for ResMKHDR:
Unexpected key(s) in state_dict: "attention1.fc.0.weight", "attention1.fc.2.weight", "noiseGate1.weight", "noiseGate1.bias", "noiseGate2.weight", "noiseGate2.bias", "attention2.fc.0.weight", "attention2.fc.2.weight".

about align_ratio

What does the value of align_ratio in the training dataset mean? How did you get that?

Ask a running code error problem

Hello, when I run this project's code the way you describe, there is a problem. I don't know how to solve it. Can you help me:

python main.py -i -s /source/code/SuperResolution/twostageHDR_NTIRE21/images -d /source/d3/super_resolution/model/twostageHDR_NTIRE21/outputimage
Traceback (most recent call last):
File "main.py", line 37, in
twostageHDR(config).modelInference(options.sourceDir, options.resultDir)
File "/source/code/SuperResolution/twostageHDR_NTIRE21/mainModule/twostageHDR.py", line 70, in init
self.attentionNet = ResMKHDR().to(self.device)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 443, in to
return self._apply(convert)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 203, in _apply
module._apply(fn)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 225, in _apply
param_applied = fn(param)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 441, in convert
return t.to(device, dtype if t.is_floating_point() else None, non_blocking)
File "/usr/local/lib/python3.8/dist-packages/torch/cuda/init.py", line 149, in _lazy_init
_check_driver()
File "/usr/local/lib/python3.8/dist-packages/torch/cuda/init.py", line 47, in _check_driver
raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled

about train data formats

Hi, please: the RAW images are in DNG format. Can they be used directly for training? How does the code read DNG images?

Dataset link

The dataset links are not working properly; please provide the correct links.

The provided weights are trained as per the rules of the NTIRE21 HDR challenge (single frame). To download the testing images please visit the following link: [Click Here]
https://drive.google.com/drive/u/0/folders/1vX4rM_953pAk83vNeWheiOiLzlnysZe9

LDR52 Dataset
We have collected an LDR dataset captured with different camera hardware. Feel free to use our LDR dataset in your respective work. The dataset can be downloaded from the following link: [Click Here]

Please check those and provide the correct ones as soon as possible.

Testing

Hello, you did an excellent job here, so congrats!

I have a couple of questions about testing and working with your model.

Firstly, I want to ask about the testing phase. Since the competition organizers keep the test set private, do you know any other way to test this model? I tried to test it, but the results were very different from yours.

Moreover, do you think I can train this model with my own dataset? I understand the align ratio usage here but am not sure how to get the align ratios of my own dataset. Hence, I want to ask if it is possible to work with different datasets without align ratios too.

I am looking forward to your answers and really want to work on and learn this model of yours!
Thanks in advance!

the dataset can not be downloaded

Hi, I really appreciate your fine work!

Now I am working on this task, but the dataset at the link you provided did not work for me:

I registered on the competition page and got the message below:

"Your request to participate in this challenge has been received and a decision is pending."

Do you have any suggestions about the next step?
