
mdf's People

Contributors

aamir-mustafa, mantiuk, mikhailiuk


mdf's Issues

High memory consumption, inference and backpropagation time

While testing the MDF loss, I found that the loss function uses much more memory than reported in the paper, even for a single image. What might be the issue?

import torch
from mdfloss import MDFLoss

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
target = torch.randn(1, 3, 256, 256).to(device).requires_grad_(False)
input = torch.randn(1, 3, 256, 256).to(device).requires_grad_(True)
mdf = MDFLoss("./Ds_Denoising.pth", True)

start = torch.cuda.Event(enable_timing=True)
end = torch.cuda.Event(enable_timing=True)

# Reserved memory before and after one forward/backward pass
a = torch.cuda.memory_reserved(0)
t = mdf(input, target)
t.backward()
b = torch.cuda.memory_reserved(0)

# Time 100 forward/backward passes
start.record()
for i in range(100):
    t = mdf(input, target)
    t.backward()
end.record()

# Wait for everything to finish running before reading the timer
torch.cuda.synchronize()

print(str(start.elapsed_time(end) / 100) + "ms")
print(str((b - a) / 1e6) + "MB")

The output of the code above is:

236.59349609375ms
1369.440256MB

Torch version: 2.0.1+cu117
Python version: 3.10.6
OS: Ubuntu 22.04.2 LTS x86_64
Kernel: 5.15.0-76-generic
CPU: AMD Ryzen 3 3100 (8) @ 3.600GHz
GPU: NVIDIA GeForce GTX 1050 Ti
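A measurement caveat worth noting (an observation about the snippet above, not part of the original report): `torch.cuda.memory_reserved` reports the caching allocator's whole pool, which can substantially overstate what tensors actually occupy. Peak allocated memory is usually the fairer number to compare against a paper's figures; a minimal sketch:

```python
import torch

# memory_reserved() counts the caching allocator's entire pool;
# max_memory_allocated() tracks the peak of live tensor memory,
# which is the quantity usually reported in papers.
if torch.cuda.is_available():
    torch.cuda.reset_peak_memory_stats()
    x = torch.randn(1024, 1024, device="cuda", requires_grad=True)
    (x @ x).sum().backward()  # stand-in for a forward/backward pass
    torch.cuda.synchronize()
    print(torch.cuda.max_memory_allocated() / 1e6, "MB")
```

Calling `torch.cuda.reset_peak_memory_stats()` just before the pass isolates its contribution to the peak.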

Load the weights file for multi-gpu training

Hi! I liked your approach and wanted to try it for the SISR task; however, I am facing issues loading the weights file when I want to train on multiple GPUs. I suspect the direct torch.load() is causing the problem here:

mdf/mdfloss.py

Line 10 in 85c9ad3

self.Ds = torch.load(saved_ds_path)

Could you instead expose a model instance, so that the weights can be loaded with model.load_state_dict(torch.load("...")) ? That would help anyone who wants to train on multiple GPUs. Could you also share the opt argparse arguments referenced in the SinGAN/models.py file?

N = int(opt.nfc)

Please let me know if you have a workaround for this (loading the weights file on multiple GPUs); it would also be helpful if you could release the training code.

Thanks!
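Until the loading code changes, one possible workaround (a sketch, not the repo's API; it assumes you can unpickle the checkpoint once in a single-GPU environment, and `TinyDs` below is only a stand-in for the real SinGAN discriminator class): re-save just the state_dict, then load it into a freshly built model before wrapping in nn.DataParallel:

```python
import os
import tempfile
import torch
import torch.nn as nn

# Stand-in for the SinGAN discriminator; the real class lives in the
# repo's SinGAN package. Used here only to show the round-trip.
class TinyDs(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, 3, padding=1)

    def forward(self, x):
        return self.conv(x)

# One-time conversion: unpickle the full model (this step still needs
# the SinGAN code importable), then keep only the tensors:
#   ds = torch.load("./Ds_Denoising.pth", map_location="cpu")
#   torch.save(ds.state_dict(), "./Ds_Denoising_state.pth")
ds = TinyDs()
path = os.path.join(tempfile.mkdtemp(), "Ds_state.pth")
torch.save(ds.state_dict(), path)

# Training time: rebuild the architecture, load the plain tensor map,
# and only then wrap in DataParallel -- no pickled classes involved.
model = TinyDs()
model.load_state_dict(torch.load(path))
if torch.cuda.is_available():
    model = nn.DataParallel(model).cuda()
```

Because a state_dict is just a mapping of parameter names to tensors, it is portable across devices and plays well with DataParallel/DistributedDataParallel, unlike a fully pickled module.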

Loading SinGAN

Hey,
I'm trying to load the SISR weights after cloning the project, and I get the following error:

ModuleNotFoundError: No module named 'SinGAN'

Could the .pth files be corrupted?

Thanks

ModuleNotFoundError: No module named 'SinGAN'

Hello, I was excited to try your loss after reading your paper; however, when I adopt the MDF loss using Denoising.pth, it keeps failing with this ModuleNotFoundError. Could you please tell me how to use your loss in another model?
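A likely cause (an inference from how torch.load works, not confirmed by the maintainers): the .pth files are fully pickled models, so unpickling them tries to import the SinGAN package they were defined in. The files are probably not corrupted; the importer simply cannot find SinGAN. A sketch of the fix, where "/path/to/mdf" is a placeholder for your clone of the repo:

```python
import sys

# torch.load() unpickles the entire model object, so the pickled class
# references (SinGAN.models.*) must be importable when the checkpoint
# is read. Run your script from the cloned repo root, or put the repo
# on sys.path first:
sys.path.insert(0, "/path/to/mdf")  # directory containing SinGAN/ and mdfloss.py

# With the path in place, loading proceeds as in the issues above:
#   from mdfloss import MDFLoss
#   mdf = MDFLoss("./Ds_Denoising.pth", True)
print("/path/to/mdf" in sys.path)  # → True
```

An alternative that avoids the import dependency entirely is re-saving the checkpoint as a state_dict once and loading that into a locally defined model class.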

High memory usage problem

I have already set requires_grad to False and used torch.no_grad() for the target (the instance y), but the point is that the loss must compute gradients with respect to each of the layers and inputs by applying the chain rule. For this task there are 8 different discriminator models, so 8 backpropagations at a time, and 8 deep copies of the input are created. I am interested in how you measured the inference and backpropagation time and the memory overhead.

Best regards,
Delyan

Originally posted by @delyan-boychev in #8 (comment)

Do you have the generator weights?

Hey, thank you for this project, well done!
Would you mind sharing your generator checkpoints (the ones trained with the MDF loss)?
I simply want to run inference on the JPEG artifact removal task and see the results.

How to train the discriminators?

Hi, I'm interested in your work "Training a Task-Specific Image Reconstruction Loss". How do you train the discriminators used to calculate the MDF loss (phase 1 in the paper)? The training code does not seem to be provided in this repo.

how to define the generator

Dear sir, after reading the code I have a few questions: how is the generator mentioned in the paper trained, and how is it defined?
