
adaattn's People

Contributors

ananonymousprogrammer, huage001


adaattn's Issues

About inference_frame.py in video inference

Hi, thanks for your code. Regarding inference_frame.py, line 110 in class AttnAdaINCos reads:

    G_norm = torch.sqrt((G ** 2).sum(1).view(b, 1, -1))
    b, _, h, w = F.size()
    F = F.view(b, -1, w * h)
    F_norm = torch.sqrt((F ** 2).sum(1).view(b, -1, 1))
    F = F.permute(0, 2, 1)
    S = torch.relu(torch.bmm(F, G) / (F_norm + 1e-5) / (G_norm + 1e-5) + 1)  # S: b, n_c, n_s
    S = S / (S.sum(dim=-1, keepdim=True) + 1e-5)

  1. I think F_norm and G_norm should be calculated with sum(2), not sum(1).
  2. Why do you add 1e-5 in three places in this code?
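On question 1: after F.view(b, -1, w * h) the channel axis is dim 1, so sum(1) yields one L2 norm per spatial position, which is what the cosine similarity between positions needs; the 1e-5 terms simply guard against division by zero. A minimal shape check, using NumPy as a stand-in for the torch snippet:

```python
import numpy as np

# Shape check (NumPy stand-in for the torch code in inference_frame.py).
b, c, h, w = 2, 8, 4, 4
F = np.random.randn(b, c, h, w)

F = F.reshape(b, -1, w * h)  # (b, c, h*w): channels are axis 1
# Summing over axis 1 collapses the channel axis, giving one L2 norm
# per spatial position -- the quantity cosine similarity needs.
F_norm = np.sqrt((F ** 2).sum(1).reshape(b, -1, 1))
assert F_norm.shape == (b, h * w, 1)

# sum(2) would instead give one norm per channel, the wrong quantity here.
per_channel_norm = np.sqrt((F ** 2).sum(2))
assert per_channel_norm.shape == (b, c)
```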

how to train video style transfer

Hello, I have tried the pre-trained model you provided, and the results are very good. Was the pre-trained model trained on WikiArt? If I want to train on more styles, can you provide more details on how to train video style transfer? As the README says, is it OK to train directly on the COCO dataset?

user_control

Hello, sorry to bother you. Would it be convenient for the authors to share the code for user control?

officially unofficial

Thank you for your excellent work on image style transfer. How should the phrase "officially unofficial" in the introduction be translated? Officially informal?

video training code

Hi, thanks for the repo and for the video test code posted in the issues!
It would be extremely helpful if the video training script could be posted as well, so we can experiment with training on specific datasets.

Thanks anyway!

About the adaattn_model.py

Hi , thanks for your code. I have two questions about it.

  1. I don't understand what self.device means here; it doesn't seem to be defined.

  2. In addition, I would like to ask whether the local feature loss takes up too much memory, because I ran out of memory when I tried to import this module into other code.

Thanks!
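On the memory question: the attention map used by the local feature loss has shape (b, h*w, h*w) over content and style positions, so memory grows quadratically with feature resolution. A rough, illustrative estimate (sizes here are assumptions, not taken from the repo):

```python
# Rough, illustrative memory estimate (sizes assumed) for an attention
# map of shape (b, h*w, h*w) computed over content and style positions.
def attn_map_bytes(batch, height, width, bytes_per_elem=4):
    positions = height * width
    return batch * positions * positions * bytes_per_elem

# A 64x64 feature map (e.g. a 512x512 image at stride 8) already needs
# about 64 MB per sample in float32 for a single attention map:
mb = attn_map_bytes(1, 64, 64) / 2 ** 20
print(int(mb))  # 64
```

Doubling the feature resolution multiplies this by 16, which is consistent with running out of memory on larger inputs.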

datasets

Did you use all the images in the dataset during training? There are many subfolders in the WikiArt dataset. How do you train on it?

About the loss

Hi, I am interested in the AdaAttN paper. The paper only introduces two losses, Lgs and Llf, so why is there a content loss in your code?

About Multi-style transfer

Dear authors of AdaAttN,

Thank you for releasing this project so others can reproduce your excellent work. I would like to know whether 'multi-style transfer' has been implemented in this version of the code. If not, could you please give more details on how to implement the idea of 'averaging the mean and standard deviation maps of different styles'? How can we merge this function into the current code?

Thank you so much for reading this issue, and I look forward to your kind response :)
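One plausible reading of 'averaging their mean and standard variance maps' is to compute per-style channel statistics, average them, and use the averaged statistics in the usual normalize-then-rescale step. A NumPy sketch with hypothetical helper names (not the repo's API):

```python
import numpy as np

def channel_stats(feat, eps=1e-5):
    # feat: (c, h, w) -> per-channel mean and std over spatial dims
    mean = feat.mean(axis=(1, 2), keepdims=True)
    std = feat.std(axis=(1, 2), keepdims=True) + eps
    return mean, std

def multi_style_transfer(content, styles):
    # Hypothetical sketch: normalize the content features, then rescale
    # with the mean and std averaged over all given style features.
    c_mean, c_std = channel_stats(content)
    means, stds = zip(*(channel_stats(s) for s in styles))
    return (content - c_mean) / c_std * np.mean(stds, axis=0) + np.mean(means, axis=0)

content = np.random.randn(3, 8, 8)
styles = [np.random.randn(3, 8, 8), np.random.randn(3, 8, 8)]
stylized = multi_style_transfer(content, styles)
assert stylized.shape == content.shape
```

With a single style in the list, this reduces to ordinary AdaIN-style statistic matching, so the single-style case is unchanged.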

about the low-level task

Can this be used for a color-enhancement task? My training dataset is paired; the images differ in brightness, saturation, and contrast.

pre-trained model for video style transfer

Thanks for the great work! It looks like this repo currently only hosts code and model for image style transfer. Could you provide a pre-trained model (and perhaps test scripts) for video style transfer as discussed in the paper? Thank you.

Mean-variance-norm and Instance Norm

Hi @Huage001
I read the paper and found that the mean-variance norm ('mean-variance channel-wise normalization') works much like instance norm. Can you explain why you use the mean-variance-norm function instead of instance norm?
Thank you so much.
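For reference, mean-variance normalization can be written as below; it matches instance normalization without a learned affine transform (a NumPy sketch, with the comparison to PyTorch's InstanceNorm2d(affine=False) stated as an assumption about intent, not the repo's code):

```python
import numpy as np

def mean_variance_norm(x, eps=1e-5):
    # x: (b, c, h, w) -> zero mean, unit variance per channel per sample,
    # i.e. what instance normalization without a learned affine produces.
    mean = x.mean(axis=(2, 3), keepdims=True)
    std = x.std(axis=(2, 3), keepdims=True)
    return (x - mean) / (std + eps)

x = np.random.randn(2, 3, 4, 4)
y = mean_variance_norm(x)
# Each (sample, channel) slice now has ~zero mean and ~unit variance.
assert np.allclose(y.mean(axis=(2, 3)), 0, atol=1e-6)
assert np.allclose(y.std(axis=(2, 3)), 1, atol=1e-3)
```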

Q. Training time

The results in your paper are interesting, so I want to train your model from scratch.

Before training, I would like to know the approximate training time when you trained your model.

Thanks.

About test image

Thanks for sharing! May I ask how to obtain the test images? I noticed that many papers mention only the training set.

1. Should I just test with arbitrary images?
2. Some papers use a common set of images to show their performance. May I have the same test images as used in this paper?

Thank you very much
