
pisr's Introduction

PyTorch implementation of PISR

This is an official implementation of the paper "Learning with Privileged Information for Efficient Image Super-Resolution", accepted to ECCV2020.

This work effectively boosts the performance of FSRCNN by exploiting a distillation framework, treating HR images as privileged information.

For more information, check out the project site [website] and the paper [PDF].

Overview of our framework


Getting started

Dependencies

Docker

We provide a Dockerfile to reproduce our work easily.

$ docker build -t pisr:latest . # or docker pull wonkyunglee/pytorch_pisr:latest
$ docker run -it -v <working_dir>:/data -w /data pisr:latest /bin/bash

Datasets

  • For training and validation
    • DIV2K
  • For evaluation
    • Set5
    • Set14
    • B100
    • Urban100

Please download DIV2K dataset from here and other benchmark datasets from here.

After downloading all datasets, the data folder should be organized as follows:

    data
    ├── benchmark
    │   ├── B100
    │   ├── Set14
    │   ├── Set5
    │   └── Urban100
    │       ├── HR
    │       └── LR_bicubic
    │           ├── X2
    │           ├── X3
    │           └── X4
    │      
    └── DIV2K
        ├── DIV2K_train_HR
        └── DIV2K_train_LR_bicubic
            ├── X2
            ├── X3
            └── X4
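Before launching training, it can save time to verify that the folders above are in place (an empty or misplaced dataset folder is a common source of training errors). The following is a minimal sketch, not part of the repository; the paths simply mirror the tree shown above:

```python
from pathlib import Path

# Expected dataset layout, matching the tree above.
EXPECTED = [
    "DIV2K/DIV2K_train_HR",
    "DIV2K/DIV2K_train_LR_bicubic/X2",
    "DIV2K/DIV2K_train_LR_bicubic/X3",
    "DIV2K/DIV2K_train_LR_bicubic/X4",
    "benchmark/Set5",
    "benchmark/Set14",
    "benchmark/B100",
    "benchmark/Urban100/HR",
    "benchmark/Urban100/LR_bicubic/X2",
]

def check_layout(root):
    """Return the list of expected sub-folders missing under `root`."""
    root = Path(root)
    return [p for p in EXPECTED if not (root / p).is_dir()]

missing = check_layout("data")
if missing:
    print("Missing folders:", missing)
```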

Training

First, clone our GitHub repository.

$ git clone https://github.com/yonsei-cvlab/PISR.git

To train our teacher model, run the following script.

$ python step1_train_teacher.py --config configs/fsrcnn/step1.yml

To train our student model, run the following script.

$ python step2_train_student.py --config configs/fsrcnn/step2.yml

Using the pretrained models

  • Download pre-trained weights for teacher model into results/fsrcnn/fsrcnn_teacher/checkpoint/ folder.
    Link: [weights]
  • Download pre-trained weights for student model into results/fsrcnn/fsrcnn_student/checkpoint/ folder.
    Link: [weights]

Evaluation

To evaluate our student model, run the following script. Benchmark datasets can be chosen by editing the config file configs/fsrcnn/base.ram.yml.

$ python evaluate.py --config configs/fsrcnn/step2.yml

Citation

@inproceedings{lee2020pisr,
    author={Lee, Wonkyung and Lee, Junghyup and Kim, Dohyung and Ham, Bumsub},
    title={Learning with Privileged Information for Efficient Image Super-Resolution},
    booktitle={Proceedings of European Conference on Computer Vision},
    year={2020},
}

Credit

Some parts of this code (e.g., data_loader) are based on EDSR-PyTorch repository.

pisr's People

Contributors

wonkyunglee


pisr's Issues

Question about the model

Hello! Why does your model require a single-channel input image? Why not feed 3-channel color images directly? Is using only the Y channel meant to obtain better PSNR/SSIM values? And how would the model perform with a 3-channel input?
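For context, many super-resolution methods train on the luma (Y) channel of YCbCr because benchmark PSNR/SSIM are conventionally computed on it. A minimal sketch of the BT.601 conversion commonly used in SR papers (illustrative, not code from this repo):

```python
import numpy as np

def rgb_to_y(rgb):
    """Luma (Y) channel of an RGB image in [0, 255], using the
    ITU-R BT.601 weights common in super-resolution evaluation.
    The result lies in the 'video' range [16, 235]."""
    rgb = np.asarray(rgb, dtype=np.float64)
    return 16.0 + (65.481 * rgb[..., 0]
                   + 128.553 * rgb[..., 1]
                   + 24.966 * rgb[..., 2]) / 255.0
```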

In step2_train_student.py

In step2_train_student.py, you only set the teacher_model to eval mode. I wonder whether the teacher model is still trained during loss.backward(), since eval() only freezes the BN and Dropout layers while requires_grad remains True.
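The question is valid in general PyTorch terms: eval() only changes the behaviour of layers such as BatchNorm and Dropout, and a parameter is only updated if it is also passed to the optimizer. To rule out gradient flow into a frozen teacher entirely, one can disable requires_grad, as in this illustrative sketch (a toy two-layer model, not the repository's teacher):

```python
import torch
import torch.nn as nn

# Toy stand-in for a teacher network.
teacher = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1),
                        nn.Conv2d(8, 1, 3, padding=1))

teacher.eval()                   # fixes BN/Dropout behaviour only
for p in teacher.parameters():   # additionally stop gradients into the teacher
    p.requires_grad_(False)

x = torch.randn(1, 1, 8, 8)
out = teacher(x)
print(out.requires_grad)         # False: no graph is built through the frozen teacher
```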

encoder

Thank you very much for sharing your work! But I could only find the FSRCNN model. How can I train my own model? Should I use the same encoder?

About Teacher Network

Hi, thank you for this great work.
I am a bit confused about the teacher model: why does the teacher model need LR images?

Training help

Hello, I encountered the following error during training. How should I modify the code? Thank you.

step1_train_teacher.py", line 123

Traceback (most recent call last):
  File "step1_train_teacher.py", line 242, in <module>
    main()
  File "step1_train_teacher.py", line 236, in main
    run(config)
  File "step1_train_teacher.py", line 204, in run
    writer, visualizer, last_epoch+1)
  File "step1_train_teacher.py", line 153, in train
    eval_type='val')
  File "step1_train_teacher.py", line 123, in evaluate_single_epoch
    avg_loss = total_loss / (i+1)
UnboundLocalError: local variable 'i' referenced before assignment
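This traceback usually means the validation dataloader yielded no batches, so the loop variable `i` is never bound when the average is computed. A hedged sketch of the pattern and a guard (`average_loss` is a simplified stand-in for the repository's `evaluate_single_epoch`, with a float per batch standing in for the real loss computation):

```python
def average_loss(loader):
    """Average a per-batch loss; guard against an empty loader, which would
    otherwise leave the loop variable unbound (the error in the traceback above)."""
    total_loss, n_batches = 0.0, 0
    for i, batch in enumerate(loader):
        total_loss += float(batch)   # stand-in for the real loss computation
        n_batches = i + 1
    if n_batches == 0:
        raise RuntimeError("validation loader is empty; check the dataset paths")
    return total_loss / n_batches
```

In practice, an empty loader here most often points back to missing or misplaced validation data rather than a bug in the averaging itself.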

an issue about the results of paper model.

Hello, I have a question about the comparison between the original FSRCNN model and your FSRCNN model trained with PISR.
The original FSRCNN was trained without DIV2K, but your model uses it.
So maybe your method is not as good as the paper shows: if the original FSRCNN were trained on the same training set as yours, it would perform better than before, right?
If I am wrong, could you tell me why?
Thank you very much!

Encounter an unexpected error during training

Thank you for your brilliant work and sharing this code!

After I construct the 'data' folder as required, I encounter an unexpected error during training:

Traceback (most recent call last):
  File "step2_train_student.py", line 274, in <module>
    main()
  File "step2_train_student.py", line 268, in main
    run(config)
  File "step2_train_student.py", line 229, in run
    dataloaders = {'train':get_train_dataloader(config, get_transform(config)),
  File "/home/lyh/PISR/datasets/dataset_factory.py", line 33, in get_train_dataloader
    dataset = get_train_dataset(config, transform)
  File "/home/lyh/PISR/datasets/dataset_factory.py", line 23, in get_train_dataset
    name=name, train=True, transform=transform,
  File "/home/lyh/PISR/datasets/dataset.py", line 20, in __init__
    begin, end)
  File "/home/lyh/PISR/datasets/base_dataset.py", line 107, in __init__
    self._init_repeat()
  File "/home/lyh/PISR/datasets/base_dataset.py", line 143, in _init_repeat
    assert n_images != 0
AssertionError

How can I fix this? Thank you so much for any possible solutions!

Where is the student network initialized with the teacher network's parameters?

Hi,

Thanks for your code.

I'm reading the code alongside your paper, and I saw that you stated that initializing the student network with parameters from the teacher network helps training converge faster.

But I couldn't find where this happens in the code; maybe it's because I'm not so familiar with PyTorch.

It would be very helpful if you could point me to the code that initializes the student network parameters.

Can we scale up 8 times?

Hi, thanks for providing the code and for this wonderful research.

This model upscales by 2, 3, and 4 times. Can it be configured for an 8× scale?

train student network

When I train the student model by running python step2_train_student.py --config configs/fsrcnn/step2.yml, it fails with: RuntimeError: CUDA out of memory. Tried to allocate 1.57 GiB (GPU 0; 10.76 GiB total capacity; 4.81 GiB already allocated; 1.21 GiB free; 8.77 GiB reserved in total by PyTorch). I reduced the batch size to 4 in step2.yml and used four GTX 1080 Ti cards, but the error persists. Could you give me some suggestions?
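One general PyTorch pattern for cutting peak memory without changing the effective batch size is gradient accumulation: run several small micro-batches and call the optimizer step once per group. This is an illustrative sketch with a toy linear model and random data, not code from this repository:

```python
import torch
import torch.nn as nn

model = nn.Linear(16, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
accum_steps = 4                  # effective batch = micro-batch size * accum_steps

opt.zero_grad()
for step in range(8):
    x, y = torch.randn(2, 16), torch.randn(2, 1)          # small micro-batch
    loss = nn.functional.mse_loss(model(x), y) / accum_steps
    loss.backward()              # gradients accumulate across micro-batches
    if (step + 1) % accum_steps == 0:
        opt.step()               # one update per group of micro-batches
        opt.zero_grad()
```

Dividing the loss by `accum_steps` keeps the accumulated gradient equal (in expectation) to that of one large batch.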

losses

Hi

Thanks for your code.

I have another question, about the loss function.

The loss in the paper is written as:

[equation screenshot from the paper]

but I see the loss in the code as:

[screenshot of the code]

I wonder if these two expressions are the same. The part torch.log(2*std) + numerator / (std) is almost the same as the one in the paper, but why does the mu.shape[1] * np.log(2*math.pi)/2 term exist?

Sorry for the many questions, but it'll save me a lot of time if you answer; thanks in advance.
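For intuition on questions like this: the full negative log-likelihood of a Gaussian contains a constant normalizer, (d/2)·log(2π), that many papers drop because it does not depend on the predicted mean or standard deviation and therefore does not affect gradients. A small numeric check of that equivalence (generic Gaussian NLL, not the exact loss in this repository):

```python
import numpy as np

def gaussian_nll(x, mu, std):
    """Full negative log-likelihood of N(mu, std^2), summed over dimensions."""
    d = x.size
    return (d / 2) * np.log(2 * np.pi) + np.sum(np.log(std) + (x - mu) ** 2 / (2 * std ** 2))

def gaussian_nll_no_const(x, mu, std):
    """Same quantity without the constant normalizer; gradients w.r.t. mu and std
    are identical to the full NLL's."""
    return np.sum(np.log(std) + (x - mu) ** 2 / (2 * std ** 2))

x, mu, std = np.array([1.0, 2.0]), np.array([0.5, 1.5]), np.array([1.0, 2.0])
diff = gaussian_nll(x, mu, std) - gaussian_nll_no_const(x, mu, std)
# diff equals (d/2) * log(2*pi), independent of x, mu, and std
```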

How can I get the visual results after inference on the test datasets?

Hello!
First of all, thank you for open-sourcing your project; it is great.
While reproducing your code, I could not find test code that produces the visual results. Where in the project can I set it to display or save the visual results after inference?
Looking forward to your reply!
