
Comments (14)

williamyang1991 commented on May 30, 2024

You need to train a StyleGAN on full-body images or use a pre-trained one.
Then you need to train DualStyleGAN, based on this StyleGAN, on full-body cartoon images.

from dualstylegan.

zhanghongyong123456 commented on May 30, 2024

You need to train a StyleGAN on full-body images or use a pre-trained one. Then you need to train DualStyleGAN, based on this StyleGAN, on full-body cartoon images.

For full-body training, I have two questions:

  1. How do I load the pretrained model? StyleGAN-Human provides a .pkl file, but this project uses .pt. When I load it directly, I get an error:
     (screenshot of the error)

  2. For the training image size: StyleGAN-Human needs 512×1024, but this project needs 1024×1024 images?


williamyang1991 commented on May 30, 2024
  1. It's a module error, not a model error.
  2. You need to change the StyleGAN code to support 512×1024.


zhanghongyong123456 commented on May 30, 2024
  1. It's a module error, not a model error.
  2. You need to change the StyleGAN code to support 512×1024.

I found this issue, so I think it is a model error. When I pip install torch-utils, I get this error:

magic_number = pickle_module.load(f, **pickle_load_args)
ModuleNotFoundError: No module named 'torch_utils.persistence'

rosinality/stylegan2-pytorch#250
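A likely reading of the ModuleNotFoundError above: NVIDIA-style .pkl checkpoints pickle whole Python objects, so unpickling them requires the original repo's code (torch_utils, dnnlib) on sys.path, while pip install torch-utils installs an unrelated package. rosinality-style .pt checkpoints are plain dictionaries of state_dicts and load without any training code. A minimal sketch, with hypothetical file names:

```python
import torch

def checkpoint_kind(path: str) -> str:
    """Guess the checkpoint family from the file extension."""
    if path.endswith(".pkl"):
        # NVIDIA (stylegan2-ada / StyleGAN-Human) format: pickled Python
        # objects; unpickling needs that repo's torch_utils/dnnlib importable.
        return "nvidia-pkl"
    if path.endswith(".pt"):
        # rosinality format: a plain dict of state_dicts ("g", "d", "g_ema").
        return "rosinality-pt"
    return "unknown"

def load_rosinality(path: str, device: str = "cpu") -> dict:
    """Load a rosinality-style checkpoint; no training code required."""
    ckpt = torch.load(path, map_location=device)
    if "g_ema" not in ckpt:
        raise ValueError("not a rosinality-format checkpoint")
    return ckpt
```

This is only a diagnostic sketch; it does not make a .pkl loadable, it just distinguishes the two formats before you attempt torch.load.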


williamyang1991 commented on May 30, 2024

I see...
Our backbone is rosinality's stylegan2-pytorch, and it can only load models in that format.
So you need to convert the model from the other framework (TensorFlow or the official PyTorch) to rosinality's stylegan2-pytorch format (I have no idea how to do that, and I'm afraid I cannot help).
Or you can retrain your own rosinality stylegan2-pytorch on your target dataset.


zhanghongyong123456 commented on May 30, 2024

you can retrain your own rosinality's stylegan2-pytorch on your target dataset.

You mean that I need to train a StyleGAN model on my own full-body dataset (anime full-body pictures) under the stylegan2-pytorch project, without using the existing pretrained model? In other words, the first method you suggested: "you need to train a StyleGAN on full body images or use a pre-trained one. Then you need to train DualStyleGAN based on this StyleGAN on full body cartoon images."

If I train a body StyleGAN with rosinality's stylegan2-pytorch, at least how many full-body cartoon images do I need, how long does training take, and how many GPUs?


williamyang1991 commented on May 30, 2024

you need to train a StyleGAN on full body images.
Then you need to train DualStyleGAN based on this StyleGAN on full body cartoon images.

Please refer to stylegan for the amount of data, training time, and GPUs.


zhanghongyong123456 commented on May 30, 2024

you need to train a StyleGAN on full body images.
Then you need to train DualStyleGAN based on this StyleGAN on full body cartoon images.

Please refer to stylegan for the amount, time and GPU.

Can I directly train the anime full-body StyleGAN and use it for DualStyleGAN, so that I don't have to run Step 2 (fine-tune StyleGAN in distributed settings)?


williamyang1991 commented on May 30, 2024

I don't know which one is better.
You can try both to find out.


zhanghongyong123456 commented on May 30, 2024

I don't know which one is better. You can try both to find out.

I just tested on a 3090: 1024×1024 cannot run; after reducing the resolution to 256×256, it runs. I feel training from scratch is too demanding, and it is not certain that converting the StyleGAN-Human pretrained model is easy to implement.
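For reference, rosinality's stylegan2-pytorch exposes the resolution and batch size on the command line, so a single 24 GB 3090 can be accommodated by lowering both. A hedged sketch of the two steps; the dataset folder, LMDB path, and flag values are examples only, not recommendations:

```shell
# Prepare an LMDB dataset at 256x256 (body_images/ is a hypothetical folder).
python prepare_data.py --out body_data.lmdb --size 256 body_images/

# Train at 256x256 with a small batch to fit a single-GPU memory budget.
python train.py --size 256 --batch 8 body_data.lmdb
```

If even this does not fit, reducing the batch size further is usually the first knob to turn.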


zhanghongyong123456 commented on May 30, 2024

I see... Our backbone is rosinality's stylegan2-pytorch, and can only load the model in its format. So you need to transform the model format from others' framework (tensorflow or official pytorch) to rosinality's stylegan2-pytorch framework (I have no idea how to do that and I'm afraid I cannot help) Or you can retrain your own rosinality's stylegan2-pytorch on your target dataset.

I found conversion code (https://github.com/dvschultz/stylegan2-ada-pytorch/blob/main/export_weights.py), but parameters are missing after the conversion. Is it because only the generator parameters (G_ema) are converted, while the other parameters (G and D) are not?
rosinality/stylegan2-pytorch#206 (comment)


williamyang1991 commented on May 30, 2024

You can simply load G with G_ema's parameters.
I have no idea how to convert the D parameters.
You'd better raise a new issue in https://github.com/dvschultz/stylegan2-ada-pytorch/ to seek solutions there instead of in this issue.
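The "load G with G_ema's parameters" suggestion can be sketched as follows, using a tiny stand-in module rather than the real StyleGAN2 generator; the class name and the commented save path are illustrative only:

```python
import torch
from torch import nn

class TinyG(nn.Module):
    """Stand-in for the generator; the same idea applies to the real network."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 4)

# Suppose only G_ema was recovered from the .pkl conversion:
g_ema = TinyG()

# Reuse its weights for the "g" slot so code expecting both keys still works.
g = TinyG()
g.load_state_dict(g_ema.state_dict())

ckpt = {"g": g.state_dict(), "g_ema": g_ema.state_dict()}
# torch.save(ckpt, "fullbody_stylegan.pt")  # hypothetical output path
```

Note this only fills the generator slots; the discriminator ("d") is still missing, which is why converting D remains an open problem in this thread.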


zhanghongyong123456 commented on May 30, 2024

You can simply load G with G_ema's parameters. I have no idea how to convert the D parameters. You'd better raise a new issue in https://github.com/dvschultz/stylegan2-ada-pytorch/ to seek solutions there instead of in this issue.

Thank you very much for your help. I have used the G_ema conversion code to convert G; now only the parameter conversion of D is left. I have also asked the question there and am waiting for a reply. Could I do the conversion myself, referring to the G_ema code? I am not sure how difficult it would be for me.

On another matter, there are two things I don't quite understand; could you please give me some guidance? How do I use pix2pixHD to stylize the whole image? I found that it generates the image based on a mask.
williamyang1991/VToonify#36 (comment)


zhanghongyong123456 commented on May 30, 2024

You can simply load G with G_ema's parameters. I have no idea how to convert the D parameters. You'd better raise a new issue in https://github.com/dvschultz/stylegan2-ada-pytorch/ to seek solutions there instead of in this issue.

Happy New Year! I'd like to know: suppose I have trained a StyleGAN that generates full-body figures, how should I continue training after that? I found that many of the subsequent models are face-related (the pSp encoder model, the face model https://github.com/TreB1eN/InsightFace_Pytorch, and the later face detection and alignment steps). How should these be handled? Looking forward to your answer.

