Hello! I have a kernel pool of 1k kernels of size 33x33 for x4 upscaling. How should the code be modified so that we obtain proper estimates? Currently, to obtain the x4 kernel, you use FKP_x2 to get an estimate and then apply a kernel shift to obtain the x4 one. But this should not work if we have to bring our own kernels of a different size.
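For context, the "kernel shift" step mentioned above recenters an estimated blur kernel so that its mass is aligned with the sampling grid of the SR pipeline. The sketch below is a simplified stand-in for that idea (the repo's actual `kernel_shift` additionally offsets the target center by the scale factor); the function name `recenter_kernel` is hypothetical.

```python
import numpy as np
from scipy.ndimage import center_of_mass, shift as nd_shift

def recenter_kernel(kernel):
    """Shift a blur kernel so its center of mass sits at the geometric
    center of its support. Simplified sketch: the actual kernel_shift in
    the repo also accounts for the scale factor when picking the target."""
    com = np.array(center_of_mass(kernel))
    target = (np.array(kernel.shape, dtype=float) - 1) / 2.0
    # Sub-pixel shift via spline interpolation; clip small negative ringing
    k = nd_shift(kernel, target - com, order=3, mode="constant")
    k = np.clip(k, 0, None)
    return k / k.sum()  # renormalize so the kernel still sums to 1
```

Note that this only fixes the centering, not the kernel *size*, which is why a pool of 33x33 kernels may not fit a pipeline that derives x4 kernels from x2 estimates.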
It seems that the alignment between the HR and LR images generated by your program is off, and it could be related to the following code (which differs from the original version). But somehow, after USRNet, the alignment is back to normal, which is quite confusing.
We use the LR image as the input to KernelGAN-FKP and obtain a super-resolved image without ground-truth kernels.
It seems that there is a pixel offset between the super-resolved image and the original HR image, leading to a very low PSNR for the super-resolved image.
In the following code, lambda_1 and lambda_2 (referred to as the kernel widths) are used to compute the covariance matrix. But shouldn't the squares of the lambdas be used for the covariance?
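To make the question concrete, here is a minimal sketch of the two conventions for building the covariance of a rotated anisotropic Gaussian kernel. The function name and flag are hypothetical, not the repo's code; statistically, if lambda_1 and lambda_2 are standard deviations (widths), the covariance eigenvalues should be their squares.

```python
import numpy as np

def gaussian_cov(lambda_1, lambda_2, theta, use_square=True):
    """Covariance matrix of an anisotropic Gaussian kernel.

    lambda_1, lambda_2: widths (standard deviations) along the principal axes.
    theta: rotation angle in radians.
    use_square=True gives COV = R @ diag(lambda^2) @ R.T, the statistically
    correct covariance; use_square=False reproduces using the raw widths.
    """
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    eig = np.diag([lambda_1, lambda_2]).astype(float)
    if use_square:
        eig = eig ** 2
    return rot @ eig @ rot.T
```

With theta = 0 and use_square=True, the result is simply diag(lambda_1², lambda_2²); using the raw widths instead changes the effective kernel spread.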
I am trying to evaluate KernelGAN-FKP on Set5, Set14, and B100 for scale factor x4: FKP/KernelGANFKP$ python main.py --SR --sf 4 --dataset {dataset}
However, I am hitting an error during the forward pass of the discriminator when the input patch has a height or width < 27. Is KernelGAN-FKP x4 not supported for the Set5, Set14, and B100 datasets?
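The size limit likely comes from the discriminator's receptive field: a stack of unpadded convolutions shrinks the spatial dimensions layer by layer, so inputs below a minimum size produce an empty feature map. The sketch below computes that minimum size for a given stack; the layer specs in the example are hypothetical, not the actual KernelGAN-FKP discriminator.

```python
def min_input_size(kernel_sizes, strides=None):
    """Smallest input height/width for which a stack of unpadded conv
    layers still yields at least a 1x1 output. Works backwards from a
    1x1 output using the conv output-size relation out = (in - k)//s + 1."""
    strides = strides or [1] * len(kernel_sizes)
    size = 1
    for k, s in reversed(list(zip(kernel_sizes, strides))):
        size = (size - 1) * s + k
    return size
```

For example, seven unpadded 3x3 convolutions need at least a 15x15 input; a patch smaller than the stack's minimum triggers exactly this kind of forward-pass error.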
Hi Jingyun,
Thanks for your work. There is a lot of test code in the project, which makes me confused.
I just want to test on my own image; what should I do?
Thanks again.
Thanks for your work. I have a question about how to train FKP: in your code, there are no input datasets, which confuses me. Could you help me?