
sr-lut's People

Contributors

yhjo09


sr-lut's Issues

Instead of the function rot90()

I would like to ask whether the rotation can be replaced by changing the padding position. Instead of rotating the image three times and rotating each result back after interpolation, the original right-side padding would be changed to padding on the top, bottom, and left sides respectively, with the cropping adjusted to match. I tried skipping the rotation, but in my experiment the results seemed wrong. Is there any special significance to the rotation?
Millions of thanks.
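For reference, the rotational self-ensemble this question refers to can be summarized in a few lines. `apply_lut` below is a hypothetical stand-in for the single-orientation lookup, not a function from this repo:

```python
import numpy as np

def rotation_ensemble(img_lr, apply_lut):
    """Average LUT outputs over the four rotations (0/90/180/270 degrees).

    `apply_lut` is a hypothetical single-orientation lookup that maps an
    HxW uint8 image to its upscaled float result.
    """
    acc = None
    for k in range(4):
        rotated = np.rot90(img_lr, k)      # rotate the input by k*90 degrees
        out = apply_lut(rotated)           # single-orientation LUT lookup
        out = np.rot90(out, -k)            # rotate the result back
        acc = out if acc is None else acc + out
    return acc / 4.0                       # average the four orientations
```

Each rotation lets the same small input patch look toward a different quadrant around the query pixel before the result is rotated back, which is how the effective receptive field grows beyond the patch itself.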

Can I train an eightfold upsampling?

For eightfold upsampling, besides changing the scale parameter during training, does the network architecture also need to be modified accordingly?
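If the output head follows the usual PixelShuffle pattern, then besides the scale parameter in the training script, the final layer also has to emit scale*scale channels so the shuffle can form an 8x grid. A minimal sketch under that assumption (the feature width 64 is a placeholder, not the repo's value):

```python
import torch.nn as nn

scale = 8                                # eightfold upsampling
head = nn.Sequential(
    nn.Conv2d(64, scale * scale, 1),     # emit scale*scale values per pixel
    nn.PixelShuffle(scale),              # (B, s*s, H, W) -> (B, 1, H*s, W*s)
)
```

Note that the transferred LUT stores one scale-by-scale output patch per sampled input patch, so an 8x table would be roughly four times larger than the released 4x one.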

About channels

Is this LUT single-channel? Is the input image split into three channels, with the LUT looked up separately for each channel? I also see that the network's input is single-channel.
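A minimal sketch of the per-channel usage described above, assuming the released single-channel LUT is simply applied to R, G and B independently (`apply_lut_1ch` is a hypothetical single-channel lookup, not a function from this repo):

```python
import numpy as np

def apply_lut_rgb(img_rgb, apply_lut_1ch):
    """Apply a single-channel LUT to each of the R, G, B planes.

    `apply_lut_1ch` is a hypothetical lookup that maps an HxW uint8 plane
    to its upscaled result; the three upscaled planes are restacked.
    """
    return np.stack(
        [apply_lut_1ch(img_rgb[..., c]) for c in range(3)],
        axis=-1,
    )
```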

Could this LUT model run on GPU?

Hi @yhjo09,
I want to know whether this LUT model (.npy) runs on the CPU or the GPU.
If it runs on the CPU, is it possible to run it on the GPU for faster inference?

Thank you very much! ^_^
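A hedged sketch of what a GPU port of the lookup could look like: hold the released table as a CUDA tensor and perform the integer gathers with torch indexing. The stored layout assumed below may not match the actual .npy, and the interpolation step in the test script would need the same treatment:

```python
import numpy as np
import torch

# Load the released table once and move it to the GPU.  Treating it as
# (entries, output_values) is an assumption about the stored layout;
# adjust the reshape to whatever shape the file actually has.
lut_np = np.load("Model_S_x4_4bit_int8.npy").astype(np.float32)
lut = torch.from_numpy(lut_np).reshape(lut_np.shape[0], -1).cuda()

def lookup(flat_idx):
    """Gather LUT rows for precomputed patch indices.

    `flat_idx` is an illustrative CUDA LongTensor holding one flattened
    2x2-patch index per output location; advanced indexing runs on the GPU.
    """
    return lut[flat_idx]          # (N,) -> (N, output_values)
```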

The method to increase the receptive field

Hi,
The method this paper uses to increase the receptive field is, in essence, rotation plus padding. I'm trying to use padding alone to increase the receptive field: specifically, padding the original image four times, once for each combination of (top or bottom) and (left or right), and ensembling the results (in the picture below, the yellow-area results would be ensembled). But the performance is noticeably worse than the version without the ensemble.
That seems strange; have you tried this method?

[Image: illustration of the four padding variants; the yellow areas are the regions to be ensembled]
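One plausible reading of this padding-only ensemble, written as a sketch (`apply_lut` is a hypothetical single-orientation lookup for a single-channel image and `scale` the upscaling factor; neither comes from this repo):

```python
import numpy as np

def padding_ensemble(img_lr, apply_lut, scale=4):
    """Ensemble over the four (top/bottom, left/right) padding variants.

    Each variant pads one vertical and one horizontal side by one pixel
    with edge replication, upscales, then crops the padded border back
    off in HR coordinates before averaging.
    """
    h, w = img_lr.shape[:2]
    acc = 0.0
    for top, left in [(1, 1), (1, 0), (0, 1), (0, 0)]:
        padded = np.pad(img_lr, ((top, 1 - top), (left, 1 - left)), mode="edge")
        out = apply_lut(padded)
        y0, x0 = top * scale, left * scale
        acc = acc + out[y0:y0 + h * scale, x0:x0 + w * scale]
    return acc / 4.0
```

One difference from the rotation ensemble is that the LUT is trained for a fixed spatial relation between the input patch and its output pixels, and the shifted variants feed it patches in a different relation; whether that explains the gap is only a guess.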

How to do parallel execution in VS2017?

Thank you for your great work, first of all.

We are now planning to run SR-LUT on a PC (i7 CPU), and we are considering porting the Python project (the inference part) to VS2017.

I think converting the Python code to C++ will not be a big problem, but we are concerned about the speed.

I studied issue #1 and learned that parallel execution should be used to improve speed. On Android that is the Stream API, but for VS2017 on a PC, could you give me some suggestions on how to do parallel execution? Is multithreading a good solution?

Thank you in advance.
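For a rough picture of the strip-level parallelism a C++ port could use (an OpenMP parallel-for or a std::thread pool over horizontal strips), here is the Python analogue with a process pool; `apply_lut` is again a hypothetical per-image lookup:

```python
from concurrent.futures import ProcessPoolExecutor

import numpy as np

def sr_parallel(img_lr, apply_lut, n_workers=4):
    """Run the LUT lookup on horizontal strips in parallel workers.

    `apply_lut` is a hypothetical per-image lookup.  A real version
    would pad each strip with one row of overlap so patches at strip
    borders still see their neighbours, then crop the overlap after
    upscaling before concatenating.
    """
    strips = np.array_split(img_lr, n_workers, axis=0)   # split along rows
    with ProcessPoolExecutor(max_workers=n_workers) as pool:
        results = list(pool.map(apply_lut, strips))
    return np.concatenate(results, axis=0)               # reassemble the HR image
```

The same decomposition carries over to C++: each strip's lookups are independent, which makes them an easy fit for a plain worker pool or an OpenMP parallel loop over strips.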

Code of mobile app

Hi,

Thank you for sharing your code with the community!
I wanted to ask if it would be possible for you to share the implementation of the mobile application. Also, which model is used in it?

Many thanks in advance!
With best,

Cannot reproduce the PSNR value on Set14

Hi, I tested the PSNR on Set14 using the LUT provided in this repo (Model_S_x4_4bit_int8.npy) and got 26.41, which differs from the 27.01 reported in the paper. I used the Python package PIL to generate the low-resolution images of Set14 (bicubic mode).

So I'm curious why there is such a big difference.

Thanks for your sharing; hoping for your reply.
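Before digging further, it may be worth checking the evaluation protocol itself: PIL's bicubic kernel differs from MATLAB's imresize, and many SR papers report PSNR on the Y channel of YCbCr after cropping a scale-sized border; either point can shift Set14 numbers by several tenths of a dB. A sketch of the Y-channel, border-cropped measurement (the BT.601 conversion below is the common choice; whether it matches this paper's exact protocol is an assumption):

```python
import numpy as np

def psnr_y(sr, hr, scale=4):
    """PSNR on the luma (Y) channel with a `scale`-pixel border crop.

    `sr` and `hr` are HxWx3 uint8 RGB arrays of the same size; Y uses the
    BT.601 coefficients common in SR evaluation scripts.
    """
    def to_y(img):
        img = img.astype(np.float64)
        return (65.481 * img[..., 0] + 128.553 * img[..., 1]
                + 24.966 * img[..., 2]) / 255.0 + 16.0

    y_sr = to_y(sr)[scale:-scale, scale:-scale]
    y_hr = to_y(hr)[scale:-scale, scale:-scale]
    mse = np.mean((y_sr - y_hr) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)
```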

About Training Code

No training code was found for model_F and model_V. Could you please provide the relevant code?

Why is the output of SR-LUT clipped to [-1, 1]?

In Train_Model_S.py, lines 182-184, each output is clipped to [-1, 1]; batch_S is then projected to [-2, 2], but batch_H belongs to [0, 1].

Is it right to compute the L1 loss between values in [-2, 2] and values in [0, 1]?

Train stage 1

[Screenshot: 屏幕截图 2021-10-11 001048(1).png]
This is the 1_train_model_s stage. It has been running for several hours with no output, but no errors either. Is it working normally? Looking forward to your reply!

About Running Time

We can't achieve the reported speeds on the test set, which can take hundreds of milliseconds; when testing 512×256 images, the running time reaches tens of seconds. What settings are required to achieve such fast speeds? My CPU is a 12-vCPU Intel(R) Xeon(R) Platinum 8255C @ 2.50 GHz.
