
DualStyleGAN - Official PyTorch Implementation

This repository provides the official PyTorch implementation for the following paper:

Pastiche Master: Exemplar-Based High-Resolution Portrait Style Transfer
Shuai Yang, Liming Jiang, Ziwei Liu and Chen Change Loy
In CVPR 2022.
Project Page | Paper | Supplementary Video

Abstract: Recent studies on StyleGAN show high performance on artistic portrait generation by transfer learning with limited data. In this paper, we explore more challenging exemplar-based high-resolution portrait style transfer by introducing a novel DualStyleGAN with flexible control of dual styles of the original face domain and the extended artistic portrait domain. Different from StyleGAN, DualStyleGAN provides a natural way of style transfer by characterizing the content and style of a portrait with an intrinsic style path and a new extrinsic style path, respectively. The delicately designed extrinsic style path enables our model to modulate both the color and complex structural styles hierarchically to precisely pastiche the style example. Furthermore, a novel progressive fine-tuning scheme is introduced to smoothly transform the generative space of the model to the target domain, even with the above modifications on the network architecture. Experiments demonstrate the superiority of DualStyleGAN over state-of-the-art methods in high-quality portrait style transfer and flexible style control.

Features:
High-Resolution (1024) | Training Data-Efficient (~200 Images) | Exemplar-Based Color and Structure Transfer

Updates

  • [02/2023] Added --wplus to style_transfer.py to use the original W+ pSp encoder rather than Z+.
  • [09/2022] Pre-trained models in three new styles (feat. StableDiffusion) are released.
  • [07/2022] Source code license is updated.
  • [03/2022] Paper and supplementary video are released.
  • [03/2022] Web demo is created.
  • [03/2022] Code is released.
  • [03/2022] This website is created.

Web Demo

Integrated into Hugging Face Spaces 🤗 using Gradio. Try out the Web Demo: Hugging Face Spaces

Installation

Clone this repo:

git clone https://github.com/williamyang1991/DualStyleGAN.git
cd DualStyleGAN

Dependencies:

All dependencies for defining the environment are provided in environment/dualstylegan_env.yaml. We recommend running this repository using Anaconda:

conda env create -f ./environment/dualstylegan_env.yaml

We use CUDA 10.1, so the environment file installs PyTorch 1.7.1 (see Line 22, Line 25 and Line 26 of dualstylegan_env.yaml). Please install the PyTorch version that matches your own CUDA version, following https://pytorch.org/.
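A quick sanity check of which PyTorch and CUDA versions your environment actually provides:

import torch
print(torch.__version__)         # e.g. 1.7.1
print(torch.version.cuda)        # e.g. 10.1; None for a CPU-only build
print(torch.cuda.is_available()) # True if the GPU is usable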

☞ Install on Windows: here and here

(1) Dataset Preparation

Cartoon, Caricature and Anime datasets can be downloaded from their official pages. We also provide the script to build new datasets.

Dataset Description
Cartoon 317 cartoon face images from Toonify.
Caricature 199 images from WebCaricature. Please refer to dataset preparation for more details.
Anime 174 images from Danbooru Portraits. Please refer to dataset preparation for more details.
Fantasy 137 fantasy face images generated by StableDiffusion.
Illustration 156 illustration face images generated by StableDiffusion.
Impasto 120 impasto face images generated by StableDiffusion.
Other styles Please refer to dataset preparation for the way of building new datasets.

(2) Inference for Style Transfer and Artistic Portrait Generation

Inference Notebook


To help users get started, we provide a Jupyter notebook at ./notebooks/inference_playground.ipynb that visualizes the performance of DualStyleGAN. The notebook downloads the necessary pretrained models and runs inference on the images in ./data/.

If no GPU is available, you may refer to Inference on CPU, and set device = 'cpu' in the notebook.

Pretrained Models

Pretrained models can be downloaded from Google Drive or Baidu Cloud (access code: cvpr):

Model Description
encoder Pixel2style2pixel encoder that embeds FFHQ images into StyleGAN2 Z+ latent code
encoder_wplus Original Pixel2style2pixel encoder that embeds FFHQ images into StyleGAN2 W+ latent code
cartoon DualStyleGAN and sampling models trained on Cartoon dataset, 317 (refined) extrinsic style codes
caricature DualStyleGAN and sampling models trained on Caricature dataset, 199 (refined) extrinsic style codes
anime DualStyleGAN and sampling models trained on Anime dataset, 174 (refined) extrinsic style codes
arcane DualStyleGAN and sampling models trained on Arcane dataset, 100 extrinsic style codes
comic DualStyleGAN and sampling models trained on Comic dataset, 101 extrinsic style codes
pixar DualStyleGAN and sampling models trained on Pixar dataset, 122 extrinsic style codes
slamdunk DualStyleGAN and sampling models trained on Slamdunk dataset, 120 extrinsic style codes
fantasy DualStyleGAN models trained on Fantasy dataset, 137 extrinsic style codes
illustration DualStyleGAN models trained on Illustration dataset, 156 extrinsic style codes
impasto DualStyleGAN models trained on Impasto dataset, 120 extrinsic style codes

The saved checkpoints are under the following folder structure:

checkpoint
|--encoder.pt                     % Pixel2style2pixel model
|--encoder_wplus.pt               % Pixel2style2pixel model (optional)
|--cartoon
    |--generator.pt               % DualStyleGAN model
    |--sampler.pt                 % The extrinsic style code sampling model
    |--exstyle_code.npy           % extrinsic style codes of Cartoon dataset
    |--refined_exstyle_code.npy   % refined extrinsic style codes of Cartoon dataset
|--caricature
    % the same files as in Cartoon
...
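The style code files are pickled Python dictionaries mapping style image filenames to their latent codes, so they can be inspected directly. A small sketch, assuming the Cartoon checkpoints above (the expected code shape is an 18-layer Z+ code):

import numpy as np

exstyles = np.load('./checkpoint/cartoon/refined_exstyle_code.npy',
                   allow_pickle='TRUE').item()
print(len(exstyles))                         # number of styles, 317 for Cartoon
stylename = list(exstyles.keys())[53]        # filename of style image #53
print(stylename, exstyles[stylename].shape)  # expected (1, 18, 512)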

Exemplar-Based Style Transfer

Transfer the style of a default Cartoon image onto a default face:

python style_transfer.py 

The result cartoon_transfer_53_081680.jpg is saved in the folder ./output/, where 53 is the id of the style image in the Cartoon dataset and 081680 is the name of the content face image. A corresponding overview image cartoon_transfer_53_081680_overview.jpg is additionally saved to illustrate the input content image, the encoded content image, the style image (shown only if it is in your data folder), and the result:

Specify the style image with --style and --style_id (find the mapping between id and filename here, find the visual mapping between id and the style image here). Specify the filename of the saved images with --name. Specify the weight to adjust the degree of style with --weight. The following script generates the style transfer results in the teaser of the paper.

python style_transfer.py
python style_transfer.py --style cartoon --style_id 10
python style_transfer.py --style caricature --name caricature_transfer --style_id 0 --weight 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
python style_transfer.py --style caricature --name caricature_transfer --style_id 187 --weight 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
python style_transfer.py --style anime --name anime_transfer --style_id 17 --weight 0 0 0 0 0.75 0.75 0.75 1 1 1 1 1 1 1 1 1 1 1
python style_transfer.py --style anime --name anime_transfer --style_id 48 --weight 0 0 0 0 0.75 0.75 0.75 1 1 1 1 1 1 1 1 1 1 1
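For programmatic use, the following condensed sketch mirrors what style_transfer.py does under the hood, assuming the default Cartoon checkpoints (face alignment and the overview visualization are omitted; see the full script for those):

import numpy as np
import torch
from argparse import Namespace
from torch.nn import functional as F

from model.dualstylegan import DualStyleGAN
from model.encoder.psp import pSp
from util import load_image, save_image

device = 'cuda'

# DualStyleGAN generator fine-tuned on the Cartoon dataset
generator = DualStyleGAN(1024, 512, 8, 2, res_index=6)
generator.load_state_dict(torch.load('./checkpoint/cartoon/generator.pt',
                                     map_location=device)['g_ema'])
generator = generator.to(device).eval()

# pSp encoder that embeds a face photo into a Z+ intrinsic style code
ckpt = torch.load('./checkpoint/encoder.pt', map_location=device)
opts = Namespace(**ckpt['opts'])
opts.checkpoint_path = './checkpoint/encoder.pt'
opts.device = device
encoder = pSp(opts).to(device).eval()

exstyles = np.load('./checkpoint/cartoon/refined_exstyle_code.npy',
                   allow_pickle='TRUE').item()

with torch.no_grad():
    I = load_image('./data/content/081680.jpg').to(device)  # aligned content image
    # intrinsic style code of the content image
    _, instyle = encoder(F.adaptive_avg_pool2d(I, 256), randomize_noise=False,
                         return_latents=True, z_plus_latent=True,
                         return_z_plus_latent=True, resize=False)
    # extrinsic style code of style image #53
    latent = torch.tensor(exstyles[list(exstyles.keys())[53]]).to(device)
    exstyle = generator.generator.style(
        latent.reshape(latent.shape[0] * latent.shape[1], latent.shape[2])
    ).reshape(latent.shape)
    img_gen, _ = generator([instyle], exstyle, input_is_latent=False,
                           z_plus_latent=True, truncation=0.75,
                           truncation_latent=0, use_res=True,
                           interp_weights=[0.75] * 7 + [1] * 11)

save_image(torch.clamp(img_gen[0].detach(), -1, 1).cpu(),
           './output/cartoon_transfer_53_081680.jpg')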

Specify the content image with --content. If the content image is not well aligned with FFHQ, use --align_face. To preserve the color style of the content image, use --preserve_color or set the last 11 elements of --weight to zero.

python style_transfer.py --content ./data/content/unsplash-rDEOVtE7vOs.jpg --align_face --preserve_color \
       --style arcane --name arcane_transfer --style_id 13 \
       --weight 0.6 0.6 0.6 0.6 0.6 0.6 0.6 0.6 0.6 0.6 0.6 1 1 1 1 1 1 1 
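Under the hood, --preserve_color simply copies the color-related fine layers (7-17) of the content's intrinsic style code into the extrinsic style code before the transfer; from style_transfer.py:

# in style_transfer.py, before computing the final extrinsic style code
if args.preserve_color:
    latent[:, 7:18] = instyle[:, 7:18]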

Specify --wplus to use the original pSp encoder to extract the W+ intrinsic style code, which may better preserve the face features of the content image.

Remarks: Our trained pSp encoder on the Z+/W+ space cannot perfectly encode the content image. If a style transfer result more consistent with the content image is desired, one may use latent optimization to better fit the content image or use other StyleGAN encoders (as discussed in #11 and #29).
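As a rough illustration of the latent-optimization idea, one could refine the encoder's estimate by gradient descent on a reconstruction loss. This is a generic sketch, not code from this repository; it assumes the third-party lpips package and reuses generator, instyle, exstyle, I and device from the sketch above:

import torch
import lpips  # pip install lpips; any perceptual loss works here
from torch.nn import functional as F

percept = lpips.LPIPS(net='vgg').to(device)
latent = instyle.detach().clone().requires_grad_(True)
optimizer = torch.optim.Adam([latent], lr=0.01)
for _ in range(100):
    # use_res=False disables the extrinsic style path, i.e. pure reconstruction
    img, _ = generator([latent], exstyle, input_is_latent=False,
                       z_plus_latent=True, truncation=0.75,
                       truncation_latent=0, use_res=False)
    loss = percept(F.adaptive_avg_pool2d(img, 256),
                   F.adaptive_avg_pool2d(I, 256)).mean() + F.mse_loss(img, I)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
# the refined `latent` can then replace instyle in the transfer call above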

More options can be found via python style_transfer.py -h.

Artistic Portrait Generation

Generate random Cartoon face images (Results are saved in the ./output/ folder):

python generate.py 

Specify the style type with --style and the filename of the saved images with --name:

python generate.py --style arcane --name arcane_generate

Specify the weight to adjust the degree of style with --weight.

Keep the intrinsic style code, extrinsic color code or extrinsic structure code fixed using --fix_content, --fix_color and --fix_structure, respectively.

python generate.py --style caricature --name caricature_generate --weight 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 --fix_content

More options can be found via python generate.py -h.


(3) Training DualStyleGAN

Download the supporting models to the ./checkpoint/ folder:

Model Description
stylegan2-ffhq-config-f.pt StyleGAN model trained on FFHQ taken from rosinality.
model_ir_se50.pth Pretrained IR-SE50 model taken from TreB1eN for ID loss.

Facial Destylization

Step 1: Prepare data. Put the dataset in ./data/DATASET_NAME/images/train/, then create lmdb datasets:

python ./model/stylegan/prepare_data.py --out LMDB_PATH --n_worker N_WORKER --size SIZE1,SIZE2,SIZE3,... DATASET_PATH

For example, download 317 Cartoon images into ./data/cartoon/images/train/ and run

python ./model/stylegan/prepare_data.py --out ./data/cartoon/lmdb/ --n_worker 4 --size 1024 ./data/cartoon/images/

Step 2: Fine-tune StyleGAN. Fine-tune StyleGAN in distributed settings:

python -m torch.distributed.launch --nproc_per_node=N_GPU --master_port=PORT finetune_stylegan.py --batch BATCH_SIZE \
       --ckpt FFHQ_MODEL_PATH --iter ITERATIONS --style DATASET_NAME --augment LMDB_PATH

Taking the cartoon dataset as an example, run (a total batch size of 8*4=32 is recommended):

python -m torch.distributed.launch --nproc_per_node=8 --master_port=8765 finetune_stylegan.py --iter 600 --batch 4 --ckpt ./checkpoint/stylegan2-ffhq-config-f.pt --style cartoon --augment ./data/cartoon/lmdb/

The fine-tuned model can be found in ./checkpoint/cartoon/finetune-000600.pt. Intermediate results are saved in ./log/cartoon/.

Step 3: Destylize artistic portraits.

python destylize.py --model_name FINETUNED_MODEL_NAME --batch BATCH_SIZE --iter ITERATIONS DATASET_NAME

Taking the cartoon dataset as an example, run:

python destylize.py --model_name finetune-000600.pt --batch 1 --iter 300 cartoon

The intrinsic and extrinsic style codes are saved in ./checkpoint/cartoon/instyle_code.npy and ./checkpoint/cartoon/exstyle_code.npy, respectively. Intermediate results are saved in ./log/cartoon/destylization/. To speed up destylization, set --batch to a large value such as 16. For styles severely different from real faces, set --truncation to a small value such as 0.5 to make the results more photo-realistic (this enables DualStyleGAN to learn larger structure deformations).

Progressive Fine-Tuning

Stage 1 & 2: Pretrain DualStyleGAN on FFHQ. We provide our pretrained model generator-pretrain.pt at Google Drive or Baidu Cloud (access code: cvpr). This model is obtained by:

python -m torch.distributed.launch --nproc_per_node=1 --master_port=8765 pretrain_dualstylegan.py --iter 3000 --batch 4 ./data/ffhq/lmdb/

where ./data/ffhq/lmdb/ contains the lmdb data created from the FFHQ dataset via ./model/stylegan/prepare_data.py.

Stage 3: Fine-Tune DualStyleGAN on Target Domain. Fine-tune DualStyleGAN in distributed settings:

python -m torch.distributed.launch --nproc_per_node=N_GPU --master_port=PORT finetune_dualstylegan.py --iter ITERATIONS \ 
                          --batch BATCH_SIZE --ckpt PRETRAINED_MODEL_PATH --augment DATASET_NAME

The loss term weights can be specified by --style_loss (λ_FM), --CX_loss (λ_CX), --perc_loss (λ_perc), --id_loss (λ_ID) and --L2_reg_loss (λ_reg). We suggest tuning λ_ID and λ_reg for each style dataset to achieve ideal performance. More options can be found via python finetune_dualstylegan.py -h.

Take the Cartoon dataset as an example, run (multi-GPU enables a large batch size of 8*4=32 for better performance):

python -m torch.distributed.launch --nproc_per_node=8 --master_port=8765 finetune_dualstylegan.py --iter 1500 --batch 4 --ckpt ./checkpoint/generator-pretrain.pt --style_loss 0.25 --CX_loss 0.25 --perc_loss 1 --id_loss 1 --L2_reg_loss 0.015 --augment cartoon

The fine-tuned models can be found in ./checkpoint/cartoon/generator-ITER.pt where ITER = 001000, 001100, ..., 001500. Intermediate results are saved in ./log/cartoon/. A larger ITER gives stronger cartoon styles but at the cost of artifacts; users may select the most balanced checkpoint from 1000-1500. We use 1400 for our paper experiments.

(optional) Latent Optimization and Sampling

Refine extrinsic style code. Refine the color and structure styles to better fit the example style images.

python refine_exstyle.py --lr_color COLOR_LEARNING_RATE --lr_structure STRUCTURE_LEARNING_RATE DATASET_NAME

By default, the code will load instyle_code.npy, exstyle_code.npy, and generator.pt in ./checkpoint/DATASET_NAME/. Use --instyle_path, --exstyle_path, --ckpt to specify other saved style codes or models. Take the Cartoon dataset as an example, run:

python refine_exstyle.py --lr_color 0.1 --lr_structure 0.005 --ckpt ./checkpoint/cartoon/generator-001400.pt cartoon

The refined extrinsic style codes are saved in ./checkpoint/DATASET_NAME/refined_exstyle_code.npy. We suggest tuning lr_color and lr_structure to better fit the example styles.

Training sampling network. Train a sampling network to map unit Gaussian noises to the distribution of extrinsic style codes:

python train_sampler.py DATASET_NAME

By default, the code will load refined_exstyle_code.npy or exstyle_code.npy in ./checkpoint/DATASET_NAME/. Use --exstyle_path to specify other saved extrinsic style codes. The saved model can be found in ./checkpoint/DATASET_NAME/sampler.pt.
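Conceptually, the sampler is a small network whose outputs, given unit Gaussian noise, follow the distribution of the saved extrinsic style codes. The toy MLP below only illustrates this idea and is not the repository's actual sampler architecture (shapes assume flattened 18×512 codes):

import torch
import torch.nn as nn

# toy stand-in for sampler.pt, for illustration only
sampler = nn.Sequential(
    nn.Linear(512, 1024), nn.LeakyReLU(0.2),
    nn.Linear(1024, 18 * 512),
)
z = torch.randn(4, 512)                        # unit Gaussian noise
fake_exstyle = sampler(z).reshape(4, 18, 512)  # candidate extrinsic codes after training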


(4) Results

Exemplar-based cartoon style transfer

cartoon.mp4

Exemplar-based caricature style transfer

caricature.mp4

Exemplar-based anime style transfer

anime.mp4

Other styles

Combining DualStyleGAN with a State-of-the-Art Diffusion Model

We use StableDiffusion to generate face images in the specified styles of famous artists. Trained on these images, DualStyleGAN is able to pastiche these famous artists and generate appealing results.

Citation

If you find this work useful for your research, please consider citing our paper:

@inproceedings{yang2022Pastiche,
  title={Pastiche Master: Exemplar-Based High-Resolution Portrait Style Transfer},
  author={Yang, Shuai and Jiang, Liming and Liu, Ziwei and Loy, Chen Change},
  booktitle={CVPR},
  year={2022}
}

Acknowledgments

The code is mainly developed based on stylegan2-pytorch and pixel2style2pixel.


dualstylegan's Issues

conda - cuda + pytorch out of date

finetune_dualstylegan.py pretrain_dualstylegan.py
(dualstylegan_env) ➜ DualStyleGAN git:(main) ✗ python style_transfer.py
/home/jp/miniconda3/envs/dualstylegan_env/lib/python3.8/site-packages/torch/cuda/__init__.py:104: UserWarning:
NVIDIA GeForce RTX 3090 with CUDA capability sm_86 is not compatible with the current PyTorch installation.
The current PyTorch install supports CUDA capabilities sm_37 sm_50 sm_60 sm_61 sm_70 sm_75 compute_37.
If you want to use the NVIDIA GeForce RTX 3090 GPU with PyTorch, please check the instructions at https://pytorch.org/get-started/locally/

warnings.warn(incompatible_device_warn.format(device_name, capability, " ".join(arch_list), device_name))

When will destylization finish

I am experimenting with DualStyleGAN on my custom dataset. I have 283 images and I trained StyleGAN for 600 iterations. Then I started to destylize it, with 300 iterations and a batch size of 4. I expected it to end when the counter reached 283/283, but it continues:
[448/283] Lperc: 1.254; Lnoise: 0.000; LID: 0.100; Lreg: 0.315; lr: 0.000: 100%
Also, I'm not sure if the destylization outputs are as they should be.

How do I train DualStyleGAN to carry out inference on the entire body?

Any clues on how to train/modify the input so that I can carry out inference on the entire body? I'm planning to use this with something like DCT-Net.

I have tried to resize the image to 1024×1024, but using a resized image gives different results. Do I need to train it on different images, or do I need to make changes to the code itself?

Huge confusion in Reconstructed

Hello, I am very interested in your work.
However, when I tried to use the model you provided, I found that img_rec is very different from the original image (see cartoon_transfer_23_dddd_overview). I don't know whether it is caused by the psp.encoder or the stylegan.generator.
Notably, after using the web demo, I found that with the same input, the reconstruction on the web demo is very good.
So I am quite confused: do I need to retrain my psp.encoder to get a better instyle_code, or do I need to do other work to solve the difficulties I am currently encountering?

The results aren't coming out well

cartoon_transfer_53_081680_overview

As shown in the picture above, the result of style_transfer.py does not come out properly. Why is that?

The output of the terminal is as follows

(dualstylegan_env) [lucass@nipa2019-0211 DualStyleGAN]$ python style_transfer.py
/home/lucass/anaconda3/envs/dualstylegan_env/lib/python3.8/site-packages/torch/utils/cpp_extension.py:266: UserWarning:

!! WARNING !!

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
Your compiler (c++) is not compatible with the compiler Pytorch was
built with for this platform, which is g++ on linux. Please
use g++ to to compile your extension. Alternatively, you may
compile PyTorch from source using c++, and then you can also use
c++ to compile your extension.

See https://github.com/pytorch/pytorch/blob/master/CONTRIBUTING.md for help
with compiling PyTorch from source.
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

!! WARNING !!

warnings.warn(WRONG_COMPILER_WARNING.format(
Load options
align_face: False
content: ./data/content/081680.jpg
data_path: ./data/
exstyle_name: exstyle_code.npy
model_name: generator.pt
model_path: ./checkpoint/
name: cartoon_transfer
output_path: ./output/
preserve_color: False
style: cartoon
style_id: 2
truncation: 0.75
weight: [0.75, 0.75, 0.75, 0.75, 0.75, 0.75, 0.75, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]


Loading pSp from checkpoint: ./checkpoint/encoder.pt
Load models successfully!
Generate images successfully!
Save images successfully!
(dualstylegan_env) [lucass@nipa2019-0211 DualStyleGAN]$

creating environment failed

(base) PS C:\deepdream-test\DualStyleGAN> conda env create -f ./environment/dualstylegan_env.yaml
Collecting package metadata (repodata.json): done
Solving environment: failed

ResolvePackageNotFound:

  • ca-certificates==2022.2.1=h06a4308_0
  • libgcc-ng==9.3.0=h2828fa1_19
  • faiss==1.7.1=py38h7b17aaf_0_cpu
  • libedit==3.1.20191231=he28a2e2_2
  • python-lmdb==1.2.1=py38h2531618_1
  • matplotlib-base==3.3.4=py38h62a2d02_0
  • libffi==3.2.1=he1b5a44_1007
  • certifi==2021.10.8=py38h06a4308_2
  • _libgcc_mutex==0.1=conda_forge
  • scikit-image==0.18.1=py38ha9443f7_0
  • libfaiss==1.7.1=hb573701_0_cpu
  • python==3.8.3=cpython_he5300dc_0
  • setuptools==49.6.0=py38h578d9bd_3
  • pytorch==1.7.1=py3.8_cuda10.1.243_cudnn7.6.3_0
  • libstdcxx-ng==9.3.0=h6de172a_19
  • pillow==8.3.1=py38h2c7a002_0

How is the identity kept as much as possible?

Hello Researcher Yang.
Thank you for your excellent work!!! I followed your training well and the style is kept great, but the identity and structure change a lot. Do you have any better solution for the identity difference caused by the pSp inverse mapping? Or which part should be fine-tuned according to the dataset? Please give me some hints, thanks!

Training fails

Hi, I am training my own dataset on Colab following the steps of the Readme, but the training fails in the second step of facial destylization: "Step 2: Fine-tune StyleGAN". The error information is as follows:

load model: ./checkpoint/stylegan2-ffhq-config-f.pt
0%| | 0/600 [00:00<?, ?it/s]
Traceback (most recent call last):
File "finetune_stylegan.py", line 391, in <module>
train(args, loader, generator, discriminator, g_optim, d_optim, g_ema, device)
File "finetune_stylegan.py", line 115, in train
real_img = next(loader)
File "/content/DualStyleGAN/util.py", line 58, in sample_data
for batch in loader:
File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py", line 681, in __next__
data = self._next_data()
File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py", line 721, in _next_data
data = self._dataset_fetcher.fetch(index) # may raise StopIteration
File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/_utils/fetch.py", line 49, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/_utils/fetch.py", line 49, in <listcomp>
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/content/DualStyleGAN/model/stylegan/dataset.py", line 37, in __getitem__
img = Image.open(buffer)
File "/usr/local/lib/python3.7/dist-packages/PIL/Image.py", line 2657, in open
% (filename if filename else fp))
OSError: cannot identify image file <_io.BytesIO object at 0x7f070291c410>
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 2003) of binary: /usr/bin/python3
Traceback (most recent call last):
File "/usr/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/usr/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/usr/local/lib/python3.7/dist-packages/torch/distributed/launch.py", line 193, in <module>
main()
File "/usr/local/lib/python3.7/dist-packages/torch/distributed/launch.py", line 189, in main
launch(args)
File "/usr/local/lib/python3.7/dist-packages/torch/distributed/launch.py", line 174, in launch
run(args)
File "/usr/local/lib/python3.7/dist-packages/torch/distributed/run.py", line 755, in run
)(*cmd_args)
File "/usr/local/lib/python3.7/dist-packages/torch/distributed/launcher/api.py", line 131, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/usr/local/lib/python3.7/dist-packages/torch/distributed/launcher/api.py", line 247, in launch_agent
failures=result.failures,
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:

finetune_stylegan.py FAILED

Failures:
<NO_OTHER_FAILURES>

Root Cause (first observed failure):
[0]:
time : 2022-11-23_07:27:01
host : a3b13d7b3fb3
rank : 0 (local_rank: 0)
exitcode : 1 (pid: 2003)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html

Data preparation

Hello!

I am following the data preparation part to curate my own custom dataset, and I realized that face alignment and cropping could be important. I saw additional files in the caricature and anime datasets, such as txt files that pinpoint facial landmarks. It looks like these additional files were provided by the original datasets, so I wonder if there are other tools or models I could use to obtain such information.

In this regard, I also wonder what preprocessing steps you used when testing the images created by Stable Diffusion.

Thank you!

Dataset_Preparation

Hi, something puzzles me: I got only 4 face images from a video longer than 1 h (109 MB). Could you please give some suggestions?

About reproducing the Anime style

Hi, thanks for sharing the code.
I downloaded the Anime dataset according to the README.md, but I get bad results.

Can you provide more details about training on the Anime dataset?
Here are some of my training results; are they correct? I used your provided pretrained model generator-pretrain.pt, so I skipped stages I & II, and fine-tuned DualStyleGAN (stage III) using the parameters from the paper.
The first picture was finetune-004800.png; I use one GPU, so I trained 8*600 = 4800 iterations with batch=4.
The second was the destylization log picture, and the third was dualstylegan-002000.jpg.
Hoping to hear from you soon.

Something wrong in execution

Hi, thanks for sharing the code.
I am a newcomer to the field of style transfer. I followed the README.md but got bad results, and I want to know where the problem is.
Hoping to hear from you soon.

Why is the ResBlock added to the Discriminator rather than the Generator?

Thank you for this amazing work!
I have a question about the fine-tuning of StyleGAN, which is step 2 of facial destylization.

While reading your paper, I intuitively thought that the residual path (ResBlock) is added to the Generator. But you actually added it to the Discriminator. Why is that?

Training with custom data

Hello!

I would like to train DualStyleGAN with my own data which is a particular style of human face datasets such as Pixar or Arcane.
I collected around 300 images and did all the preprocessing such as alignment and super-resolution.

Considering my domain (faces) is similar to what you experimented with in your paper, I am not sure whether I can directly utilize your pretrained model. For example, could I skip facial destylization and start with stage 3 of progressive fine-tuning? Or should I start with step 3 of facial destylization?

It would be great if you could explain custom dataset training sequentially.
Thank you!

inference on cpu

I followed your document for inference on CPU, but I found a small error.

In DualStyleGAN/model/stylegan/op_cpu/conv2d_gradfix.py,

it keeps raising a NameError:

you forgot to import contextlib!

Please check and fix it.

And thank you for your great work, I really enjoy using it.

How can we change the encoder?

Hello! Thank you for your work!

I'm wondering how I can replace the encoder in the current implementation with the https://github.com/eladrich/pixel2style2pixel encoder (first image: this implementation; second image: the pixel2style2pixel implementation).

If I submit a latent vector from pixel2style2pixel, I get a different output.

But it is written in the README that it is possible to replace the encoder.

Thanks for your help!

convert_weight from .pkl to .pt

I noticed that your stylegan2-ffhq-config-f.pt is taken from rosinality.
Have you ever used the convert_weight.py in the rosinality?

I just used convert_weight.py to convert generator_yellow-stylegan2-config-f.pkl to a .pt file, but a KeyError occurs when loading the .pt. Can you tell me which key in the .pt corresponds to which key in the .pkl? Especially 'g_ema'. Or was something else wrong?

Figure 1 is the BUG.
Figure 2 is your finetune_stylegan.py.
Figure 3 is the official stylegan2's run_training.py based on TensorFlow.

Custom Dataset Experiment

Hi, while training all components of DualStyleGAN on my custom dataset, I noticed that destylize.py, unlike the other scripts, does not have options to pass in custom paths for the encoder and checkpoint, or the output size of the GAN. I managed to change this for my case but still have a doubt: my pSp encoder was trained with a MoCo loss rather than an ID loss, since it is a non-human domain. Should I change that to MoCo here as well, or will I obtain reasonable style codes with the ID loss too?

Questions about stage 1 & 2 of Progressive Fine-Tuning

I have some questions about stage 1 & 2 of Pretrain DualStyleGAN on FFHQ.
Could you help me?

  1. Could I use the pretrained model you provided on several different datasets that I collected myself?

  2. I can only find the stage 2 code in pretrain_dualstylegan.py. Could you show me the stage 1 code in pretrain_dualstylegan.py?

  3. I don't understand stage 1 very well from the paper. In stage 1, did you train the color transform blocks with style-mixed FFHQ images?
    The purple hair in the figure was shown on purpose; in most cases the hair color is regular (black, yellow, ...) when we train the model in stage 1, right?

  4. In stage 2, you mentioned you sampled z2~ from {z2, E(g(z2))}. Could I know why you do it this way instead of using only z2 or E(g(z2))?

Looking forward to your reply

Style transfer on my own artistic portraits generates distorted images

Hi, I followed the training steps in the Readme and applied style_transfer.py to a content face image, but I get distorted images. Could you help me check the reason?

The script I run

python ./model/stylegan/prepare_data.py --out ./data/portrait/lmdb/ --n_worker 4 --size 1024 ./data/portrait/images/

python -m torch.distributed.launch --nproc_per_node=2 --master_port=8765 finetune_stylegan.py --iter 600 --batch 4 --ckpt ./checkpoint/stylegan2-ffhq-config-f.pt --style portrait --augment ./data/portrait/lmdb/

python destylize.py --model_name finetune-000600.pt --truncation 0.5 --batch 1 --iter 300 portrait

python -m torch.distributed.launch --nproc_per_node=2 --master_port=8765 finetune_dualstylegan.py --iter 1500 --batch 4 --ckpt ./checkpoint/generator-pretrain.pt --style_loss 0.25 --CX_loss 0.25 --perc_loss 0.75 --id_loss 0.75 --L2_reg_loss 0.02 --augment portrait

python refine_exstyle.py --lr_color 0.05 --lr_structure 0.1 --ckpt ./checkpoint/portrait/generator-001400.pt portrait

python style_transfer.py --style portrait --style_id 1 --model_name generator-001400.pt

These are the result images (the third is the target style portrait). How can I fix the distortion, or how should I debug it?

And a second question: when I input my own face image, I get big changes in the expression and the eye and lip shapes. How can I skip this reconstruction stage and style-transfer my own face directly?

reconstruction image without align_face

reconstruction image with align_face

Progressive Fine-Tuning Stage 1 & 2

In Progressive Fine-Tuning stages 1 & 2, you didn't use any style dataset or a StyleGAN fine-tuned on a style dataset, so why do the results already show a kind of extrinsic style?

pixar dataset

Hi! Amazing work!
Would you mind sharing your Pixar dataset (not the YouTube link)?

Error during learning process

I found this error while running finetune_dualstylegan.py in the last step of training.
I changed the images to jpg because I thought the picture format might be the problem, but the error persists.
I would appreciate it if you could tell me how to solve it.

load model: /content/drive/MyDrive/generator-pretrain.pt
Loading pSp from checkpoint: /content/drive/MyDrive/encoder.pt
Loading ResNet ArcFace
Encoder model successfully loaded!
Traceback (most recent call last):
File "finetune_dualstylegan.py", line 529, in <module>
Simg = transform2(Simg).unsqueeze(dim=3)
File "/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py", line 67, in __call__
img = t(img)
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py", line 226, in forward
return F.normalize(tensor, self.mean, self.std, self.inplace)
File "/usr/local/lib/python3.7/dist-packages/torchvision/transforms/functional.py", line 284, in normalize
tensor.sub_(mean).div_(std)
RuntimeError: The size of tensor a (4) must match the size of tensor b (3) at non-singleton dimension 0

Questions about the pSp encoder with Z+

  1. Could you share how you modified the pSp encoder for Z+?
    I could not find the details in the paper or the code.
    It seems you didn't use the VAE encoder of AgileGAN and also didn't infer the Z mean and standard deviation.
    Is the modification of the pSp encoder only passing the output (Z+) of the pSp encoder through the mapping layers individually? Am I right?
  2. How did you train the pSp encoder with Z+? Or did you use the pretrained pSp encoder model directly?
    Looking forward to your reply.

Unable to train with custom dataset

Hi,
Thanks for putting out this work.
I am unable to train with my custom image dataset. The training fails in the second step of facial destylization: "Step 2: Fine-tune StyleGAN".

This is the same as an issue reported earlier by @dongyun-kim-arch.

I have given the error message below.

load model: ./checkpoint/stylegan2-ffhq-config-f.pt
0%| | 0/1 [00:00<?, ?it/s]/home/azureuser/DualStyleGAN_dataPrep/DualStyleGAN/model/stylegan/op/conv2d_gradfix.py:88: UserWarning: conv2d_gradfix not supported on PyTorch 1.12.1+cu102. Falling back to torch.nn.functional.conv2d().
warnings.warn(
0%| | 0/1 [00:00<?, ?it/s]
Traceback (most recent call last):
File "finetune_stylegan.py", line 391, in <module>
train(args, loader, generator, discriminator, g_optim, d_optim, g_ema, device)
File "finetune_stylegan.py", line 159, in train
r1_loss = d_r1_loss(real_pred, real_img)
File "/home/azureuser/DualStyleGAN_dataPrep/DualStyleGAN/util.py", line 71, in d_r1_loss
grad_real, = autograd.grad(
File "/home/azureuser/.local/lib/python3.8/site-packages/torch/autograd/__init__.py", line 276, in grad
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
File "/home/azureuser/.local/lib/python3.8/site-packages/torch/autograd/function.py", line 253, in apply
return user_fn(self, *args)
File "/home/azureuser/DualStyleGAN_dataPrep/DualStyleGAN/model/stylegan/non_leaking.py", line 352, in backward
grad_input, grad_grid = GridSampleBackward.apply(grad_output, input, grid)
File "/home/azureuser/DualStyleGAN_dataPrep/DualStyleGAN/model/stylegan/non_leaking.py", line 361, in forward
grad_input, grad_grid = op(grad_output, input, grid, 0, 0, False)
TypeError: 'tuple' object is not callable
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 99134) of binary: /home/azureuser/anaconda3/envs/dualstylegan_env/bin/python3
Traceback (most recent call last):
File "/home/azureuser/anaconda3/envs/dualstylegan_env/lib/python3.8/runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/home/azureuser/anaconda3/envs/dualstylegan_env/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/home/azureuser/.local/lib/python3.8/site-packages/torch/distributed/launch.py", line 193, in <module>
main()
File "/home/azureuser/.local/lib/python3.8/site-packages/torch/distributed/launch.py", line 189, in main
launch(args)
File "/home/azureuser/.local/lib/python3.8/site-packages/torch/distributed/launch.py", line 174, in launch
run(args)
File "/home/azureuser/.local/lib/python3.8/site-packages/torch/distributed/run.py", line 752, in run
elastic_launch(
File "/home/azureuser/.local/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 131, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/home/azureuser/.local/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 245, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:

finetune_stylegan.py FAILED

Failures:
<NO_OTHER_FAILURES>

Root Cause (first observed failure):
[0]:
time : 2022-11-11_12:08:03
host : asr-training.internal.cloudapp.net
rank : 0 (local_rank: 0)
exitcode : 1 (pid: 99134)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html

input must be contiguous when autograd

When I fine-tune DualStyleGAN, the call to d_r1_loss(real_pred, real_img) goes through upfirdn2d.py and raises a RuntimeError:

File "/home/tongtong/python_project/DualStyleGAN/model/stylegan/op/upfirdn2d.py", line 143, in backward
ctx.out_size,
File "/home/tongtong/python_project/DualStyleGAN/model/stylegan/op/upfirdn2d.py", line 42, in forward
g_pad_y1,
RuntimeError: input must be contiguous

I have tried to add .contiguous() after the input, but it doesn't work. What should I do?
Thanks in advance.

Problem with training

Hello,

I faced this problem when fine-tuning the StyleGAN model. I am not sure what causes this error, DualStyleGAN or StyleGAN, but it would be great if someone could have a look at it.

Traceback (most recent call last):
File "finetune_stylegan.py", line 14, in <module>
from util import data_sampler, requires_grad, accumulate, sample_data, d_logistic_loss, d_r1_loss, g_nonsaturating_loss, g_path_regularize, make_noise, mixing_noise, set_grad_none
File "/home/donghyun/Desktop/training/DualStyleGAN/util.py", line 10, in <module>
from model.stylegan.op import conv2d_gradfix
File "/home/donghyun/Desktop/training/DualStyleGAN/model/stylegan/op/__init__.py", line 1, in <module>
from .fused_act import FusedLeakyReLU, fused_leaky_relu
File "/home/donghyun/Desktop/training/DualStyleGAN/model/stylegan/op/fused_act.py", line 11, in <module>
fused = load(
File "/home/donghyun/anaconda3/envs/dualstylegan_env/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 969, in load
return _jit_compile(
File "/home/donghyun/anaconda3/envs/dualstylegan_env/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 1196, in _jit_compile
return _import_module_from_library(name, build_directory, is_python_module)
File "/home/donghyun/anaconda3/envs/dualstylegan_env/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 1547, in _import_module_from_library
return imp.load_module(module_name, file, path, description)
File "/home/donghyun/anaconda3/envs/dualstylegan_env/lib/python3.8/imp.py", line 242, in load_module
return load_dynamic(name, filename, file)
File "/home/donghyun/anaconda3/envs/dualstylegan_env/lib/python3.8/imp.py", line 342, in load_dynamic
return _load(spec)
ImportError: /home/donghyun/.cache/torch_extensions/fused/fused.so: undefined symbol: _ZN3c104cuda20getCurrentCUDAStreamEa
Traceback (most recent call last):
File "/home/donghyun/anaconda3/envs/dualstylegan_env/lib/python3.8/runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/home/donghyun/anaconda3/envs/dualstylegan_env/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/home/donghyun/anaconda3/envs/dualstylegan_env/lib/python3.8/site-packages/torch/distributed/launch.py", line 260, in <module>
main()
File "/home/donghyun/anaconda3/envs/dualstylegan_env/lib/python3.8/site-packages/torch/distributed/launch.py", line 255, in main
raise subprocess.CalledProcessError(returncode=process.returncode,
subprocess.CalledProcessError: Command '['/home/donghyun/anaconda3/envs/dualstylegan_env/bin/python', '-u', 'finetune_stylegan.py', '--local_rank=0', '--iter', '600', '--batch', '4', '--ckpt', './checkpoint/stylegan2-ffhq-config-f.pt', '--style', 'mystyle', '--augment', './data/mystyle/lmdb/']' returned non-zero exit status 1.


An environmental problem

Hi~ thanks for your amazing work. I am trying to follow your work but am currently experiencing some problems.
Based on experience, I guess it should be an environmental problem.
My environment:
pytorch - 1.7.1
torchvision - 0.8.2
cuda - 10.1
cudnn - 7605
Hope to get your help. Many Thanks!

Traceback (most recent call last):
File "/anaconda3/envs/dualstylegan_env/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 1533, in _run_ninja_build
subprocess.run(
File "/anaconda3/envs/dualstylegan_env/lib/python3.8/subprocess.py", line 512, in run
raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "style_transfer.py", line 4, in <module>
from util import save_image, load_image
File "./DualStyleGAN-main/util.py", line 10, in <module>
from model.stylegan.op import conv2d_gradfix
File "./DualStyleGAN-main/model/stylegan/op/__init__.py", line 1, in <module>
from .fused_act import FusedLeakyReLU, fused_leaky_relu
File "./DualStyleGAN-main/model/stylegan/op/fused_act.py", line 11, in <module>
fused = load(
File "/anaconda3/envs/dualstylegan_env/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 986, in load
return _jit_compile(
File "/anaconda3/envs/dualstylegan_env/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 1193, in _jit_compile
_write_ninja_file_and_build_library(
File "/anaconda3/envs/dualstylegan_env/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 1297, in _write_ninja_file_and_build_library
_run_ninja_build(
File "/anaconda3/envs/dualstylegan_env/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 1555, in _run_ninja_build
raise RuntimeError(message) from e
RuntimeError: Error building extension 'fused': [1/2] /usr/bin/nvcc -DTORCH_EXTENSION_NAME=fused -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE="_gcc" -DPYBIND11_STDLIB="_libstdcpp" -DPYBIND11_BUILD_ABI="_cxxabi1011" -isystem ~/anaconda3/envs/dualstylegan_env/lib/python3.8/site-packages/torch/include -isystem ~/anaconda3/envs/dualstylegan_env/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -isystem ~/anaconda3/envs/dualstylegan_env/lib/python3.8/site-packages/torch/include/TH -isystem ~/anaconda3/envs/dualstylegan_env/lib/python3.8/site-packages/torch/include/THC -isystem ~/anaconda3/envs/dualstylegan_env/include/python3.8 -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_61,code=sm_61 --compiler-options '-fPIC' -std=c++14 -c ~/DualStyleGAN-main/model/stylegan/op/fused_bias_act_kernel.cu -o fused_bias_act_kernel.cuda.o
FAILED: fused_bias_act_kernel.cuda.o
/usr/bin/nvcc -DTORCH_EXTENSION_NAME=fused -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE="_gcc" -DPYBIND11_STDLIB="_libstdcpp" -DPYBIND11_BUILD_ABI="_cxxabi1011" -isystem ~/anaconda3/envs/dualstylegan_env/lib/python3.8/site-packages/torch/include -isystem ~/anaconda3/envs/dualstylegan_env/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -isystem ~/anaconda3/envs/dualstylegan_env/lib/python3.8/site-packages/torch/include/TH -isystem ~/anaconda3/envs/dualstylegan_env/lib/python3.8/site-packages/torch/include/THC -isystem ~/anaconda3/envs/dualstylegan_env/include/python3.8 -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_61,code=sm_61 --compiler-options '-fPIC' -std=c++14 -c ~/DualStyleGAN-main/model/stylegan/op/fused_bias_act_kernel.cu -o fused_bias_act_kernel.cuda.o
nvcc fatal : Value 'c++14' is not defined for option 'std'
ninja: build stopped: subcommand failed.

Browse styles visually

Hi!

I suspect that users trying the demo will have a magic moment when they choose a style they love, combined with an image of themselves, and then generate a pleasing image as a result. With the current slider interface for style selection, this is not really possible.
I think providing a way for users to visually browse the styles and choose one they like would help chances of this tech becoming very popular and adopted across industry.
Fantastic work!

add web demo/model to Huggingface

Hi, would you be interested in adding DualStyleGAN to Hugging Face? The Hub offers free hosting, and it would make your work more accessible and visible to the rest of the ML community. We can create a username similar to github to add the models/spaces/datasets to.

Example from other organizations:
Keras: https://huggingface.co/keras-io
Microsoft: https://huggingface.co/microsoft
Facebook: https://huggingface.co/facebook

Example spaces with repos:
github: https://github.com/salesforce/BLIP
Spaces: https://huggingface.co/spaces/akhaliq/BLIP

github: https://github.com/facebookresearch/omnivore
Spaces: https://huggingface.co/spaces/akhaliq/omnivore

and here are guides for adding spaces/models/datasets to your org

How to add a Space: https://huggingface.co/blog/gradio-spaces
how to add models: https://huggingface.co/docs/hub/adding-a-model
uploading a dataset: https://huggingface.co/docs/datasets/upload_dataset.html

Please let us know if you would be interested and if you have any questions, we can also help with the technical implementation.

Do not change the shape of the face

Hello! Can you tell me if there is some way to configure the inference so that it does not change the shape of the face as in the photo below?

style_transfer.py (Renew...)

import argparse
import os
from argparse import Namespace

import numpy as np
import torch
import torchvision
from torch.nn import functional as F
from torchvision import transforms

from model.dualstylegan import DualStyleGAN
from model.encoder.psp import pSp
from util import save_image, load_image

from PIL import Image

class TestOptions():
    def __init__(self):

        self.parser = argparse.ArgumentParser(description="Exemplar-Based Style Transfer")

        # Image path and file name to be transferred
        pic_path = './data/content/'
        pic_name = 'wowowo.jpeg'

        self.parser.add_argument("--content", type=str, default=pic_path + pic_name,
                                 help="path of the content image")

        # Style setting / selection
        style_types = ['cartoon', 'caricature', 'anime', 'arcane', 'comic', 'pixar', 'slamdunk']
        # style_type: the default is 'cartoon', that's style_types[0]
        style_type = style_types[1]

        # style_id = [cartoon]:(0-316)   [caricature]:(0-198)   [anime]:(0-173)
        #            [arcane]:(0-99)   [comic]:(0-100)   [pixar]:(0-121)   [slamdunk]:(0-119)
        # note: the value of style_id is an integer.
        #       For the portrait style comparison table, please refer to the ./doc_images directory

        style_id = 52

        self.parser.add_argument("--style", type=str, default=style_type, help="target style type")

        self.parser.add_argument("--style_id", type=int, default=style_id, help="the id of the style image")

        self.parser.add_argument("--truncation", type=float, default=0.75,
                                 help="truncation for intrinsic style code (content)")
        self.parser.add_argument("--weight", type=float, nargs=18, default=[0.75] * 7 + [1] * 11,
                                 help="weight of the extrinsic style")

        # File header (including style name)
        self.parser.add_argument("--name", type=str, default=style_type + '_transfer',
                                 help="filename to save the generated images")

        self.parser.add_argument("--preserve_color", action="store_true",
                                 help="preserve the color of the content image")
        self.parser.add_argument("--model_path", type=str, default='./checkpoint/', help="path of the saved models")
        self.parser.add_argument("--model_name", type=str, default='generator.pt',
                                 help="name of the saved dualstylegan")
        self.parser.add_argument("--output_path", type=str, default='./output/', help="path of the output images")
        self.parser.add_argument("--data_path", type=str, default='./data/', help="path of dataset")
        self.parser.add_argument("--align_face", action="store_true", help="apply face alignment to the content image")
        self.parser.add_argument("--exstyle_name", type=str, default=None, help="name of the extrinsic style codes")

        # python style_transfer.py --content ./data/content/unsplash-rDEOVtE7vOs.jpg --align_face --preserve_color \
        #  --style arcane --name arcane_transfer --style_id 13 \
        #  --weight 0.6 0.6 0.6 0.6 0.6 0.6 0.6 0.6 0.6 0.6 0.6 1 1 1 1 1 1 1

    def parse(self):
        self.opt = self.parser.parse_args()
        if self.opt.exstyle_name is None:
            if os.path.exists(os.path.join(self.opt.model_path, self.opt.style, 'refined_exstyle_code.npy')):
                self.opt.exstyle_name = 'refined_exstyle_code.npy'
            else:
                self.opt.exstyle_name = 'exstyle_code.npy'
        args = vars(self.opt)
        print('Load options')
        for name, value in sorted(args.items()):
            print('%s: %s' % (str(name), str(value)))
        return self.opt

def run_alignment(args):
    import dlib
    from model.encoder.align_all_parallel import align_face
    modelname = os.path.join(args.model_path, 'shape_predictor_68_face_landmarks.dat')
    if not os.path.exists(modelname):
        import wget, bz2
        wget.download('http://dlib.net/files/shape_predictor_68_face_landmarks.dat.bz2', modelname + '.bz2')
        zipfile = bz2.BZ2File(modelname + '.bz2')
        data = zipfile.read()
        open(modelname, 'wb').write(data)
    predictor = dlib.shape_predictor(modelname)
    aligned_image = align_face(filepath=args.content, predictor=predictor)
    return aligned_image

if name == "main":

# device = "cuda" # Change to CPU running  9-10-2022 
# At the same time, modify the 11 lines in the root directory util.py
# from model.stylegan.op import conv2d_gradfix
# To:
# from model.stylegan.op_cpu import conv2d_gradfix
#
# At the same time, modify the 11 lines in model.py in the model/stylegan directory
# from model.stylegan.op import FusedLeakyReLU, fused_leaky_relu, upfirdn2d, conv2d_gradfix
# To:
# from model.stylegan.op_cpu import FusedLeakyReLU,fused_leaky_relu, upfirdn2d, conv2d_gradfix

# device = "cuda"
device = "cpu"

parser = TestOptions()
args = parser.parse()
print('*' * 98)

# Determine the size of the input picture
# print(args.content)
# print(os.path.basename(args.content).split('.')[0] + '.' + os.path.basename(args.content).split('.')[1])

img = Image.open(args.content)
imgSize = img.size
w = img.width
h = img.height
f = img.format
if h != 1024:
    # print('Picture size:' + str(imgSize))
    # print('Picture size:Width ' + str(w), 'Height' + str(h), f)
    out_file = args.data_path + 'content/w_' + os.path.basename(args.content).split('.')[0] + '.' + os.path.basename(args.content).split('.')[1]
    # print(out_file)

    width = img.width
    height = img.height
    new_width = int(1024 * width / height)
    # print(new_width)
    out = img.resize((new_width, 1024), Image.Resampling.LANCZOS)
    out.save(out_file)
# else:
#   print('The picture is normal without modification')

transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),
])

    generator = DualStyleGAN(1024, 512, 8, 2, res_index=6)
    generator.eval()

    # ckpt = torch.load(os.path.join(args.model_path, args.style, args.model_name), map_location=lambda storage, loc: storage)
    ckpt = torch.load(os.path.join(args.model_path, args.style, args.model_name), map_location=device)

    generator.load_state_dict(ckpt["g_ema"])
    generator = generator.to(device)

    model_path = os.path.join(args.model_path, 'encoder.pt')
    ckpt = torch.load(model_path, map_location=device)
    opts = ckpt['opts']
    opts['checkpoint_path'] = model_path
    opts = Namespace(**opts)
    opts.device = device
    encoder = pSp(opts)
    encoder.eval()
    encoder.to(device)

    exstyles = np.load(os.path.join(args.model_path, args.style, args.exstyle_name), allow_pickle='TRUE').item()
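    # (Editor's note) exstyles is a dict that maps style-image filenames to
    # extrinsic style codes (z+ latents, roughly shape (1, 18, 512));
    # args.style_id indexes into its keys below.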

    print('Load models successfully!')

    with torch.no_grad():
        viz = []
        # load content image
        if args.align_face:
            I = transform(run_alignment(args)).unsqueeze(dim=0).to(device)
            I = F.adaptive_avg_pool2d(I, 1024)
        else:
            I = load_image(args.content).to(device)
        viz += [I]

        # reconstructed content image and its intrinsic style code
        img_rec, instyle = encoder(F.adaptive_avg_pool2d(I, 256), randomize_noise=False, return_latents=True,
                                   z_plus_latent=True, return_z_plus_latent=True, resize=False)
        img_rec = torch.clamp(img_rec.detach(), -1, 1)
        viz += [img_rec]

        stylename = list(exstyles.keys())[args.style_id]
        latent = torch.tensor(exstyles[stylename]).to(device)
        if args.preserve_color:
            latent[:, 7:18] = instyle[:, 7:18]
        # extrinsic style code
        exstyle = generator.generator.style(latent.reshape(latent.shape[0] * latent.shape[1], latent.shape[2])).reshape(
            latent.shape)

        # load style image if it exists
        S = None
        if os.path.exists(os.path.join(args.data_path, args.style, 'images/train', stylename)):
            S = load_image(os.path.join(args.data_path, args.style, 'images/train', stylename)).to(device)
            viz += [S]

        # style transfer
        # input_is_latent: instyle is not in W space
        # z_plus_latent: instyle is in Z+ space
        # use_res: use extrinsic style path, or the style is not transferred
        # interp_weights: weight vector for style combination of two paths
        img_gen, _ = generator([instyle], exstyle, input_is_latent=False, z_plus_latent=True,
                               truncation=args.truncation, truncation_latent=0, use_res=True,
                               interp_weights=args.weight)
        img_gen = torch.clamp(img_gen.detach(), -1, 1)
        viz += [img_gen]

    print('Generate images successfully!')

    save_name = args.name + '_%d_%s' % (args.style_id, os.path.basename(args.content).split('.')[0])
    # Generate preview effect picture
    # save_image(torchvision.utils.make_grid(F.adaptive_avg_pool2d(torch.cat(viz, dim=0), 256), 4, 2).cpu(),
    #            os.path.join(args.output_path, save_name + '_overview.jpg'))
    # Generate final effect picture
    save_image(img_gen[0].cpu(), os.path.join(args.output_path, save_name + '.jpg'))

    print('Save images successfully!')

    # clean up the temporary resized copy, if one was created
    if out_file is not None and os.path.exists(out_file):
        os.remove(out_file)
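
A note on the --weight vectors above: the script passes an 18-element list to interp_weights, one weight per StyleGAN layer, and --preserve_color copies entries 7-17 of the intrinsic code over the extrinsic one, so the first 7 entries act on structure and the remaining 11 on color. A minimal sketch of a hypothetical helper (not part of the repository) that builds such a vector:

# Hypothetical helper, assuming the layer split used in style_transfer.py
# (entries 0-6 modulate structure, entries 7-17 modulate color).
def make_interp_weights(structure=0.75, color=1.0):
    return [structure] * 7 + [color] * 11

weights = make_interp_weights()  # [0.75]*7 + [1.0]*11, the cartoon_transfer default
assert len(weights) == 18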

Question about the dataset split

Thank you for your great work!

I have a question about the dataset split between training and testing. For example, the cartoon dataset has 317 images; to my understanding, these 317 images are used for both training and testing. Did I understand correctly?

Loading pSp from checkpoint: ./checkpoint/encoder.pt Killed

The system is Ubuntu 20.04 on a Tencent Cloud server with no GPU, so I changed the device to CPU.
The run does not succeed; what could be the reason?

Load options
align_face: False
content: ./data/content/081680.jpg
data_path: ./data/
exstyle_name: refined_exstyle_code.npy
model_name: generator.pt
model_path: ./checkpoint/
name: cartoon_transfer
output_path: ./output/
preserve_color: False
style: cartoon
style_id: 53
truncation: 0.75
weight: [0.75, 0.75, 0.75, 0.75, 0.75, 0.75, 0.75, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]


Loading pSp from checkpoint: ./checkpoint/encoder.pt
Killed
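
(Editor's note) A bare "Killed" with no Python traceback on Linux usually means the kernel's out-of-memory killer terminated the process; loading the 1024-px generator plus the pSp encoder on CPU can need several GB of free RAM. A small diagnostic sketch (psutil is an assumed extra dependency, not part of the repository) to log available memory before the checkpoints are loaded:

# Print available RAM so an out-of-memory kill can be confirmed or ruled out.
import psutil

avail_gib = psutil.virtual_memory().available / 1024 ** 3
print('available RAM before loading checkpoints: %.1f GiB' % avail_gib)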

not enough space on the disk (but 280gb available)

I'm getting this error when building my dataset:
it says I don't have enough space on my disk, but I have 280 GB available.
I'm running on Windows 10, by the way.

c:\deepdream-test\DualStyleGAN>python ./model/stylegan/prepare_data.py --out ./data/psychedelic/lmdb/ --n_worker 1 --size 1024 ./data/psychedelic/images/
C:\python\lib\site-packages\numpy\_distributor_init.py:30: UserWarning: loaded more than 1 DLL from .libs:
C:\python\lib\site-packages\numpy\.libs\libopenblas.EL2C6PLE4ZYW3ECEVIV3OXXGRN2NRFM2.gfortran-win_amd64.dll
C:\python\lib\site-packages\numpy\.libs\libopenblas.GK7GX5KEQ4F6UYO3P26ULGBQYHGQO7J4.gfortran-win_amd64.dll
C:\python\lib\site-packages\numpy\.libs\libopenblas.XWYDX2IKJW2NMTWSFYNGFUWKQU3LYTCZ.gfortran-win_amd64.dll
  warnings.warn("loaded more than 1 DLL from .libs:"
Make dataset of image sizes: 1024
Traceback (most recent call last):
  File "c:\deepdream-test\DualStyleGAN\model\stylegan\prepare_data.py", line 104, in <module>
    with lmdb.open(args.out, map_size=1024 ** 4, readahead=False) as env:
lmdb.Error: ./data/psychedelic/lmdb/: There is not enough space on the disk.

If I run the command again, I get the same error; and when I proceed to the fine-tuning step, that fails as well:



c:\deepdream-test\DualStyleGAN>python -m torch.distributed.launch --nproc_per_node=1 --master_port=8765 finetune_stylegan.py --iter 600 --batch 16 --ckpt ./checkpoint/stylegan2-ffhq-config-f.pt --style psychedelic --augment ./data/psychedelic/lmdb/
C:\python\lib\site-packages\numpy\_distributor_init.py:30: UserWarning: loaded more than 1 DLL from .libs:
C:\python\lib\site-packages\numpy\.libs\libopenblas.EL2C6PLE4ZYW3ECEVIV3OXXGRN2NRFM2.gfortran-win_amd64.dll
C:\python\lib\site-packages\numpy\.libs\libopenblas.GK7GX5KEQ4F6UYO3P26ULGBQYHGQO7J4.gfortran-win_amd64.dll
C:\python\lib\site-packages\numpy\.libs\libopenblas.XWYDX2IKJW2NMTWSFYNGFUWKQU3LYTCZ.gfortran-win_amd64.dll
  warnings.warn("loaded more than 1 DLL from .libs:"
NOTE: Redirects are currently not supported in Windows or MacOs.
C:\python\lib\site-packages\torch\distributed\launch.py:163: DeprecationWarning: The 'warn' method is deprecated, use 'warning' instead
  logger.warn(
The module torch.distributed.launch is deprecated and going to be removed in future.Migrate to torch.distributed.run
WARNING:torch.distributed.run:--use_env is deprecated and will be removed in future releases.
 Please read local_rank from `os.environ('LOCAL_RANK')` instead.
INFO:torch.distributed.launcher.api:Starting elastic_operator with launch configs:
  entrypoint       : finetune_stylegan.py
  min_nodes        : 1
  max_nodes        : 1
  nproc_per_node   : 1
  run_id           : none
  rdzv_backend     : static
  rdzv_endpoint    : 127.0.0.1:8765
  rdzv_configs     : {'rank': 0, 'timeout': 900}
  max_restarts     : 3
  monitor_interval : 5
  log_dir          : None
  metrics_cfg      : {}

INFO:torch.distributed.elastic.agent.server.local_elastic_agent:log directory set to: c:\AXIOM~2.FFM\ffmpeg\bin\_00CE~1\torchelastic_7xqlov8k\none_zvc0_8u4
INFO:torch.distributed.elastic.agent.server.api:[default] starting workers for entrypoint: python.exe
INFO:torch.distributed.elastic.agent.server.api:[default] Rendezvous'ing worker group
C:\python\lib\site-packages\torch\distributed\elastic\utils\store.py:52: FutureWarning: This is an experimental API and will be changed in future.
  warnings.warn(
INFO:torch.distributed.elastic.agent.server.api:[default] Rendezvous complete for workers. Result:
  restart_count=0
  master_addr=127.0.0.1
  master_port=8765
  group_rank=0
  group_world_size=1
  local_ranks=[0]
  role_ranks=[0]
  global_ranks=[0]
  role_world_sizes=[1]
  global_world_sizes=[1]

INFO:torch.distributed.elastic.agent.server.api:[default] Starting worker group
INFO:torch.distributed.elastic.multiprocessing:Setting worker0 reply file to: c:\AXIOM~2.FFM\ffmpeg\bin\_00CE~1\torchelastic_7xqlov8k\none_zvc0_8u4\attempt_0\0\error.json
C:\python\lib\site-packages\numpy\_distributor_init.py:30: UserWarning: loaded more than 1 DLL from .libs:
C:\python\lib\site-packages\numpy\.libs\libopenblas.EL2C6PLE4ZYW3ECEVIV3OXXGRN2NRFM2.gfortran-win_amd64.dll
C:\python\lib\site-packages\numpy\.libs\libopenblas.GK7GX5KEQ4F6UYO3P26ULGBQYHGQO7J4.gfortran-win_amd64.dll
C:\python\lib\site-packages\numpy\.libs\libopenblas.XWYDX2IKJW2NMTWSFYNGFUWKQU3LYTCZ.gfortran-win_amd64.dll
  warnings.warn("loaded more than 1 DLL from .libs:"
Load options
ada_every: 256
ada_length: 500000
ada_target: 0.6
augment: True
augment_p: 0
batch: 16
channel_multiplier: 2
ckpt: ./checkpoint/stylegan2-ffhq-config-f.pt
d_reg_every: 16
g_reg_every: 4
iter: 600
local_rank: 0
lr: 0.002
mixing: 0.9
model_path: ./checkpoint/
n_sample: 9
path: ./data/psychedelic/lmdb/
path_batch_shrink: 2
path_regularize: 2
r1: 10
save_every: 10000
size: 1024
style: psychedelic
wandb: False
**************************************************************************************************
load model: ./checkpoint/stylegan2-ffhq-config-f.pt
Traceback (most recent call last):
  File "c:\deepdream-test\DualStyleGAN\finetune_stylegan.py", line 380, in <module>
    dataset = MultiResolutionDataset(args.path, transform, args.size)
  File "c:\deepdream-test\DualStyleGAN\model\stylegan\dataset.py", line 23, in __init__
    self.length = int(txn.get('length'.encode('utf-8')).decode('utf-8'))
AttributeError: 'NoneType' object has no attribute 'decode'
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 14500) of binary: C:\python\python.exe
ERROR:torch.distributed.elastic.agent.server.local_elastic_agent:[default] Worker group failed
INFO:torch.distributed.elastic.agent.server.api:[default] Worker group FAILED. 3/3 attempts left; will restart worker group
[... the worker group is restarted three times; each attempt re-prints the same options dump and fails with the identical AttributeError traceback ...]
ERROR:torch.distributed.elastic.agent.server.local_elastic_agent:[default] Worker group failed
INFO:torch.distributed.elastic.agent.server.api:Local worker group finished (FAILED). Waiting 300 seconds for other agents to finish
C:\python\lib\site-packages\torch\distributed\elastic\utils\store.py:70: FutureWarning: This is an experimental API and will be changed in future.
  warnings.warn(
INFO:torch.distributed.elastic.agent.server.api:Done waiting for other agents. Elapsed: 0.0005078315734863281 seconds
{"name": "torchelastic.worker.status.FAILED", "source": "WORKER", "timestamp": 0, "metadata": {"run_id": "none", "global_rank": 0, "group_rank": 0, "worker_id": "12700", "role": "default", "hostname": "adminoracle.home", "state": "FAILED", "total_run_time": 150, "rdzv_backend": "static", "raw_error": "{\"message\": \"<NONE>\"}", "metadata": "{\"group_world_size\": 1, \"entry_point\": \"python.exe\", \"local_rank\": [0], \"role_rank\": [0], \"role_world_size\": [1]}", "agent_restarts": 3}}
{"name": "torchelastic.worker.status.SUCCEEDED", "source": "AGENT", "timestamp": 0, "metadata": {"run_id": "none", "global_rank": null, "group_rank": 0, "worker_id": null, "role": "default", "hostname": "adminoracle.home", "state": "SUCCEEDED", "total_run_time": 151, "rdzv_backend": "static", "raw_error": null, "metadata": "{\"group_world_size\": 1, \"entry_point\": \"python.exe\"}", "agent_restarts": 3}}
C:\python\lib\site-packages\torch\distributed\elastic\multiprocessing\errors\__init__.py:354: UserWarning:

**********************************************************************
               CHILD PROCESS FAILED WITH NO ERROR_FILE
**********************************************************************
CHILD PROCESS FAILED WITH NO ERROR_FILE
Child process 12700 (local_rank 0) FAILED (exitcode 1)
Error msg: Process failed with exitcode 1
Without writing an error file to <N/A>.
While this DOES NOT affect the correctness of your application,
no trace information about the error will be available for inspection.
Consider decorating your top level entrypoint function with
torch.distributed.elastic.multiprocessing.errors.record. Example:

  from torch.distributed.elastic.multiprocessing.errors import record

  @record
  def trainer_main(args):
     # do train
**********************************************************************
  warnings.warn(_no_error_file_warning_msg(rank, failure))
Traceback (most recent call last):
  File "C:\python\lib\runpy.py", line 197, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "C:\python\lib\runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "C:\python\lib\site-packages\torch\distributed\launch.py", line 173, in <module>
    main()
  File "C:\python\lib\site-packages\torch\distributed\launch.py", line 169, in main
    run(args)
  File "C:\python\lib\site-packages\torch\distributed\run.py", line 621, in run
    elastic_launch(
  File "C:\python\lib\site-packages\torch\distributed\launcher\api.py", line 116, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "C:\python\lib\site-packages\torch\distributed\elastic\multiprocessing\errors\__init__.py", line 348, in wrapper
    return f(*args, **kwargs)
  File "C:\python\lib\site-packages\torch\distributed\launcher\api.py", line 245, in launch_agent
    raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
***************************************
      finetune_stylegan.py FAILED
=======================================
Root Cause:
[0]:
  time: 2022-04-07_23:20:42
  rank: 0 (local_rank: 0)
  exitcode: 1 (pid: 12700)
  error_file: <N/A>
  msg: "Process failed with exitcode 1"
=======================================
Other Failures:
  <NO_OTHER_FAILURES>
***************************************
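
(Editor's note on the two failures above) prepare_data.py opens its LMDB with map_size=1024 ** 4, i.e. it reserves a 1 TiB memory map, and on Windows LMDB allocates the full map size on disk up front, so the open fails with "not enough space" even with 280 GB free. Because the database is never created, the later finetune_stylegan.py run finds no 'length' key in it, txn.get('length'.encode('utf-8')) returns None, and the run dies with the AttributeError seen in every restart attempt. A sketch of a possible workaround (an assumption, not an official fix): shrink the map to something comfortably larger than the finished database.

# In model/stylegan/prepare_data.py, replace the 1 TiB reservation with a
# smaller memory map; 32 GiB here is an arbitrary value larger than the dataset.
import lmdb

map_size = 32 * 1024 ** 3  # 32 GiB instead of 1024 ** 4 (1 TiB)
with lmdb.open('./data/psychedelic/lmdb/', map_size=map_size, readahead=False) as env:
    pass  # ...populate the database as in the original script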
