Comments (14)
You need to train a StyleGAN on full-body images or use a pre-trained one.
Then you need to train DualStyleGAN based on this StyleGAN on full-body cartoon images.
from dualstylegan.
> You need to train a StyleGAN on full-body images or use a pre-trained one. Then you need to train DualStyleGAN based on this StyleGAN on full-body cartoon images.
For full-body training, I have two questions:
- How do I load the pretrained model? I found the StyleGAN-Human checkpoint as a .pkl file, but this repo uses .pt. When I load it directly, I get an error.
- For the training image size: StyleGAN-Human needs 512x1024, but this repo needs 1024x1024 images?
- It's a module error, not a model error.
- You need to change the StyleGAN code to support 512x1024.
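Since rosinality's generator hard-codes a square 4x4 constant input that is repeatedly doubled up to the target size, one common way to reach 512x1024 is to keep the same number of upsampling stages but start from a rectangular constant input. The helper below is only a sketch of the resulting layer schedule under that assumption; the actual code changes (the ConstantInput shape and the size/log_size logic) would live in model.py.

```python
import math

# Hypothetical sketch: compute the per-layer resolutions for a
# rectangular 512x1024 generator that keeps rosinality's upsampling
# depth but uses a 4x8 constant input instead of 4x4.
def layer_resolutions(height=512, width=1024, base=4):
    n_up = int(math.log2(height // base))      # number of upsampling stages
    base_w = width // (2 ** n_up)              # rectangular constant input width
    return [(base * 2**i, base_w * 2**i) for i in range(n_up + 1)]

# First entry is the constant input, last entry is the output resolution.
print(layer_resolutions())
```

This only illustrates the shape bookkeeping; every convolution, noise buffer, and skip connection in model.py that assumes a square feature map would also need the matching rectangular shape.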
> It's a module error, not a model error. You need to change the StyleGAN code to support 512x1024.
I found this issue, so I think it is a model error. Even after I run `pip install torch-utils`, I get this error:
magic_number = pickle_module.load(f, **pickle_load_args)
ModuleNotFoundError: No module named 'torch_utils.persistence'
rosinality/stylegan2-pytorch#250
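The error above comes from how the .pkl was created, not from a missing PyPI package: pickle records only the import path of each class, so unpickling needs NVlabs' actual `torch_utils` package (from the official stylegan2-ada(-pytorch) source tree) on `sys.path`; the unrelated `torch-utils` package from PyPI does not provide it. A self-contained sketch can reproduce the failure mode (the module name below is a stand-in, not the real NVlabs package):

```python
import pickle
import sys
import types

# Create a throwaway module and a class "living" in it, mimicking a
# persistence-wrapped network class inside NVlabs' torch_utils.
mod = types.ModuleType("torch_utils_demo")

class Persisted:
    pass

Persisted.__module__ = "torch_utils_demo"
mod.Persisted = Persisted
sys.modules["torch_utils_demo"] = mod

blob = pickle.dumps(Persisted())     # works: the module is importable now

del sys.modules["torch_utils_demo"]  # like not having the NVlabs repo on sys.path
try:
    pickle.loads(blob)
    reproduced = False
except ModuleNotFoundError as err:
    reproduced = True
    print("reproduced:", err)
```

So the usual fix is to load the .pkl with the official repo's code available (or convert the checkpoint), rather than installing packages named similarly from PyPI.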
I see...
Our backbone is rosinality's stylegan2-pytorch, and it can only load models in that format.
So you need to convert the model from the other framework (TensorFlow or official PyTorch) to rosinality's stylegan2-pytorch format (I have no idea how to do that and I'm afraid I cannot help).
Or you can retrain your own rosinality stylegan2-pytorch on your target dataset.
> you can retrain your own rosinality's stylegan2-pytorch on your target dataset.
You mean that I need to train a StyleGAN model on my own full-body dataset (anime full-body pictures) under the stylegan2-pytorch project, without using the existing pretrained model? In other words, the first method you suggested: "you need to train a StyleGAN on full body images or use a pre-trained one. Then you need to train DualStyleGAN based on this StyleGAN on full body cartoon images."
If I train a body StyleGAN with rosinality's stylegan2-pytorch, at least how many images (full-body cartoon images) do I need, and how much training time and how many GPUs?
You need to train a StyleGAN on full-body images.
Then you need to train DualStyleGAN based on this StyleGAN on full-body cartoon images.
Please refer to StyleGAN for the amount of data, training time, and GPUs.
> You need to train a StyleGAN on full-body images. Then you need to train DualStyleGAN based on this StyleGAN on full-body cartoon images. Please refer to StyleGAN for the amount of data, training time, and GPUs.
Could I directly train the anime full-body StyleGAN and use it for DualStyleGAN, so that I don't have to run Step 2 (fine-tune StyleGAN in distributed settings)?
I don't know which one is better.
You can try both to find out.
> I don't know which one is better. You can try both to find out.
I just tested on a single 3090: 1024x1024 cannot run, so I reduced the resolution to 256x256 to make it run. I feel the training is too demanding, and it is not certain that converting the StyleGAN-Human pretrained model would be easy to implement.
> I see... Our backbone is rosinality's stylegan2-pytorch, and it can only load models in that format. So you need to convert the model from the other framework (TensorFlow or official PyTorch) to rosinality's stylegan2-pytorch format (I have no idea how to do that and I'm afraid I cannot help). Or you can retrain your own rosinality stylegan2-pytorch on your target dataset.
I found a conversion script (https://github.com/dvschultz/stylegan2-ada-pytorch/blob/main/export_weights.py), but parameters are missing after the conversion. Is that because only the generator EMA parameters (G_ema) are converted, while the other parameters (G and D) are not?
rosinality/stylegan2-pytorch#206 (comment)
You can simply load G with G_ema's parameters.
I have no idea how to convert the D parameters.
You'd better raise a new issue in https://github.com/dvschultz/stylegan2-ada-pytorch/ to seek solutions there instead of in this issue.
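As an illustration of "load G with G_ema's parameters", here is a minimal sketch with plain dicts standing in for torch state_dicts. The parameter names are invented, and the exact checkpoint keys depend on the conversion script; this only shows the idea of reusing the EMA weights for the trainable generator.

```python
# Illustrative only: after a conversion that exported just the EMA
# generator, the checkpoint typically contains only "g_ema".
converted = {"g_ema": {"style.1.weight": 0.1, "convs.0.conv.weight": 0.2}}

ckpt = dict(converted)
# Reuse the EMA weights to initialize the trainable generator "g"
# (in torch this would be g.load_state_dict(ckpt["g_ema"])).
ckpt.setdefault("g", dict(ckpt["g_ema"]))
# There is no converted counterpart for "d": the discriminator would
# have to start from random initialization (or be retrained).

print(sorted(ckpt))
```

In other words, the missing G entry is easy to fill from G_ema, while the discriminator weights genuinely have no converted source.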
> You can simply load G with G_ema's parameters. I have no idea how to convert the D parameters. You'd better raise a new issue in https://github.com/dvschultz/stylegan2-ada-pytorch/ to seek solutions there instead of in this issue.
Thank you very much for your help. I have used the G_ema conversion code to convert G; now only the parameter conversion of D is left. I have asked the question there and am waiting for a reply. Could I do the conversion myself (referring to the G_ema code), or is it likely to be too difficult?
On another note, there are two things I don't quite understand; could you please give me some guidance? How do I use pix2pixHD to stylize the whole image? I found that it generates the image based on a mask.
williamyang1991/VToonify#36 (comment)
> You can simply load G with G_ema's parameters. I have no idea how to convert the D parameters. You'd better raise a new issue in https://github.com/dvschultz/stylegan2-ada-pytorch/ to seek solutions there instead of in this issue.
Happy New Year! I'd like to know: if I have trained a StyleGAN that generates full-body figures, how should I continue the training? I noticed that the later pipeline involves many face-specific models (the pSp encoder, the face model https://github.com/TreB1eN/InsightFace_Pytorch, and the subsequent face detection and alignment steps). How should these be handled for full-body images? Looking forward to your guidance.