
alias-free-gan-pytorch's People

Contributors

pabloppp, rosinality, skylion007


alias-free-gan-pytorch's Issues

pretrained model

Dear Rosinality,

Thank you for your great implementation.

Would you mind providing the pretrained models? Maybe for unaligned FFHQ?

Thank you for your help.

Best Wishes,

Alex

GPU memory size issues on Colab?

Hey @rosinality exciting repo!

I'm working on a fork with PyTorch Lightning for training on TPUs, but I've hit a roadblock where it's having trouble loading images. So I thought I should try running your training script to make sure the problem was something I changed about the dataloader.

Switched to a fresh install of your repo to run the test.

Using a colab pro instance with a Tesla P100-PCIE-16GB.

I installed a couple of pip libraries to get things working (tensorfn, wandb, ninja, and jsonnet), converted my dataset, changed the config file to use a size of 256, and ran the following command:

!python train.py --n_gpu 1 --conf /content/drive/MyDrive/afg-lightning-devel/alias-free-gan-pytorch/config/config-t-256.jsonnet training.batch=16 path="/content/drive/MyDrive/afg-lightning-devel/alias-free-gan-pytorch/datasets/painterly-faces-256"

The memory error I'm getting:

Output appended
...
  0% 0/800000 [00:00<?, ?it/s]/content/drive/MyDrive/afg-lightning-devel/alias-free-gan-pytorch/stylegan2/op/conv2d_gradfix.py:89: UserWarning: conv2d_gradfix not supported on PyTorch 1.9.0+cu102. Falling back to torch.nn.functional.conv2d().
  f"conv2d_gradfix not supported on PyTorch {torch.__version__}. Falling back to torch.nn.functional.conv2d()."
Traceback (most recent call last):
  File "train.py", line 406, in <module>
    main, conf.n_gpu, conf.n_machine, conf.machine_rank, conf.dist_url, args=(conf,)
  File "/usr/local/lib/python3.7/dist-packages/tensorfn/distributed/launch.py", line 49, in launch
    fn(*args)
  File "train.py", line 399, in main
    train(conf, loader, generator, discriminator, g_optim, d_optim, g_ema, device)
  File "train.py", line 250, in train
    fake_img = generator(noise)
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/content/drive/MyDrive/afg-lightning-devel/alias-free-gan-pytorch/model.py", line 424, in forward
    out = conv(out, latent)
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/content/drive/MyDrive/afg-lightning-devel/alias-free-gan-pytorch/model.py", line 303, in forward
    out = self.activation(out)
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/content/drive/MyDrive/afg-lightning-devel/alias-free-gan-pytorch/model.py", line 258, in forward
    out = fused_leaky_relu(out, negative_slope=self.negative_slope)
  File "/content/drive/MyDrive/afg-lightning-devel/alias-free-gan-pytorch/stylegan2/op/fused_act.py", line 119, in fused_leaky_relu
    return FusedLeakyReLUFunction.apply(input, bias, negative_slope, scale)
  File "/content/drive/MyDrive/afg-lightning-devel/alias-free-gan-pytorch/stylegan2/op/fused_act.py", line 66, in forward
    out = fused.fused_bias_act(input, bias, empty, 3, 0, negative_slope, scale)
RuntimeError: CUDA out of memory. Tried to allocate 1.16 GiB (GPU 0; 15.90 GiB total capacity; 11.59 GiB already allocated; 231.75 MiB free; 14.75 GiB reserved in total by PyTorch)
  0% 0/800000 [00:11<?, ?it/s]

How much memory does your config require? Do I need to decrease my batch size (or other settings) to be able to train on Colab?
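For reference, the batch size (like other config fields) can be overridden on the command line with the same key=value syntax as above; a smaller, hypothetical value should reduce peak GPU memory, e.g.:

!python train.py --n_gpu 1 --conf <your config>.jsonnet training.batch=8 path=<your dataset>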

Training on 1024 resolution

Hello, thanks for your great work.

I tried to train Alias-Free GAN with config-t at a resolution of 1024 by simply editing training.size to 1024 in the config file, but the network does not seem to converge and the generated results are not good either.

Do I need to modify some other arguments for training on a larger resolution (e.g. 512, 1024)?

Thanks!

Jinc function

Hi. I wonder why you use just jinc(||x||), whereas in the original article they use jinc(2*fc*||x||).
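For context, a minimal NumPy sketch of one common jinc normalization (this assumes jinc(r) = 2*J1(pi*r)/(pi*r) with jinc(0) = 1; the exact scaling used in the paper may differ):

import numpy as np
from scipy.special import j1  # first-order Bessel function of the first kind

def jinc(r):
    # Normalized jinc: 2 * J1(pi * r) / (pi * r), with the limit jinc(0) = 1.
    r = np.atleast_1d(np.asarray(r, dtype=np.float64))
    out = np.ones_like(r)
    nz = r != 0
    out[nz] = 2.0 * j1(np.pi * r[nz]) / (np.pi * r[nz])
    return out

# With a cutoff fc, the argument 2 * fc * ||x|| mentioned in the question rescales
# the filter's spatial support by a factor of 1 / (2 * fc) compared to jinc(||x||).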

Issues regarding the LMDB dataset creating.

Traceback (most recent call last):
  File "train.py", line 404, in <module>
    conf = load_arg_config(GANConfig)
  File "/home/user/anaconda3/envs/pytorch/lib/python3.8/site-packages/tensorfn/util/config.py", line 269, in load_arg_config
    conf = load_config(config_model, args.conf, args.opts, show)
  File "/home/user/anaconda3/envs/pytorch/lib/python3.8/site-packages/tensorfn/util/config.py", line 257, in load_config
    conf = config_model(**read_config(config, overrides=overrides))
  File "/home/user/anaconda3/envs/pytorch/lib/python3.8/site-packages/tensorfn/util/config.py", line 25, in read_config
    json_str = _jsonnet.evaluate_file(config_file)
AttributeError: 'NoneType' object has no attribute 'evaluate_file'
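This error suggests the _jsonnet Python module failed to import (it is None), so the .jsonnet config can't be evaluated; installing the jsonnet bindings, as in the Colab tips further down, is the likely fix:

pip install jsonnet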

data format.

Data/
  Images/
    1.jpg
    ...
    N.jpg
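For reference, an image folder laid out like this is converted to the LMDB dataset with prepare_data.py; the paths and size below are hypothetical, following the usage shown in the Colab tips issue further down:

python prepare_data.py --out ./dataset --n_worker 8 --size=256 ./Data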

About train 1024 img

Can anybody provide a setting to train 1024px images? The existing settings only seem to support 256px images.

Training Speed

Hi!
The authors of the paper said that they needed to use custom CUDA kernels to get acceptable performance (they say it's 20x faster than with PyTorch primitives). I noticed you didn't use a custom CUDA op. How is your performance compared to StyleGAN2?

Thanks!

filter_parameters not being set?

Hi, I'm trying to find out which cutoff frequencies you actually used for your experiments.

So I tried to find where the function 'filter_parameters' is being used, but I can't seem to find it.

Could you point me to where it is being set?

Thanks!

Question on Generating Video Samples

Hi!
First of all, thank you for sharing the implementation!

I was wondering how you generated the sample videos in the README file. Is there code for this in the repo? On NVIDIA's project page, they state that the videos were generated by randomly walking around a central point in the latent space.

Thank you.
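For context, a minimal sketch of the random-walk idea described above (the generator call signature, latent size, and smoothing are assumptions, not the repo's actual video code):

import torch

@torch.no_grad()
def latent_walk_frames(generator, n_frames=120, latent_dim=512, radius=0.1,
                       smooth=0.9, device="cuda"):
    """Generate frames by taking a smoothed random walk around a central latent."""
    center = torch.randn(1, latent_dim, device=device)
    offset = torch.zeros_like(center)
    frames = []
    for _ in range(n_frames):
        # Exponentially smoothed random step, kept close to the central point.
        offset = smooth * offset + (1 - smooth) * torch.randn_like(center) * radius
        img = generator(center + offset)          # assumed call signature
        frames.append(img.clamp(-1, 1).add(1).div(2).cpu())  # map to [0, 1] for saving
    return frames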

Internal Representations

Thanks for the great contribution.

Slightly off-topic, but do you know how they generated the internal representations in Figure 6 of the original paper? i.e., what convention do they use to visualise the channels in the feature maps as RGB?
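Not the paper's stated method, but one common convention is to project the channel dimension of a feature map onto its first three principal components and map those to RGB; a rough sketch:

import torch

def feature_map_to_rgb(feat):
    """feat: (C, H, W) feature map -> (H, W, 3) image via PCA over channels.
    A generic visualization convention, not necessarily the one used in the paper."""
    c, h, w = feat.shape
    x = feat.reshape(c, -1).T                  # (H*W, C)
    x = x - x.mean(dim=0, keepdim=True)        # center the channel values
    _, _, v = torch.pca_lowrank(x, q=3)        # principal directions of channel space
    rgb = x @ v[:, :3]                         # (H*W, 3)
    # Normalize each component to [0, 1] for display.
    rgb = (rgb - rgb.min(0).values) / (rgb.max(0).values - rgb.min(0).values + 1e-8)
    return rgb.reshape(h, w, 3)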

Tips for running in google colab & resuming from checkpoint

Thanks for this repo, it's great!

To get it working in Colab, I copied the bare minimum out of the Dockerfile:

!pip install jsonnet
!apt install -y -q ninja-build
!pip install tensorfn rich

!pip install setuptools
!pip install numpy scipy nltk lmdb cython pydantic pyhocon

!apt install libsm6 libxext6 libxrender1
!pip install opencv-python-headless

It then works despite throwing two compatibility errors:

ERROR: requests 2.23.0 has requirement urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1, but you'll have urllib3 1.26.6 which is incompatible.
ERROR: datascience 0.10.6 has requirement folium==0.2.1, but you'll have folium 0.8.3 which is incompatible.

I then made some manual edits to config/config-t.jsonnet so it runs on Colab:

Under training: {}, set the image size to 128
Under training: {}, set the batch size to 12 (650 MB each, so <8 GB I guess)

In prepare_data.py, I commented out line 14 so there is no resizing, just cropping. This could be a useful config for some datasets.

In the train.py main function, around line 322, I commented out five "logger" lines. The logger info didn't work out of the box in Colab; it just hangs and then falls over without an error, but I didn't investigate further.

I also couldn't get --ckpt=checkpoint/010000.pt to resume properly. I tried editing the start iteration in the config too, but no luck; it just seemed to start from zero again.

Also, it may be worth editing train.py to use autocast() for half-precision float16 instead of float32, to improve speed and ease the memory limitations? Or even porting to TPU? https://github.com/pytorch/xla
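A rough sketch of what the mixed-precision change could look like (the model/optimizer names here are placeholders, not train.py's actual variables):

import torch
from torch.cuda.amp import GradScaler, autocast

def amp_train_step(model, batch, target, criterion, optimizer, scaler):
    """One mixed-precision training step; a generic sketch, not the repo's actual loop."""
    optimizer.zero_grad()
    with autocast():                      # forward pass runs in float16 where safe
        loss = criterion(model(batch), target)
    scaler.scale(loss).backward()         # scale the loss to avoid fp16 gradient underflow
    scaler.step(optimizer)                # unscale gradients and step the optimizer
    scaler.update()
    return loss.detach()

# scaler = GradScaler() would be created once, outside the training loop.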

So then run

!git clone https://github.com/rosinality/alias-free-gan-pytorch.git

After making these edits

#upload your zip file or use google drive import
!unzip /content/dataraw.zip -d /content/dataraw

%cd /content/alias-free-gan-pytorch
!python prepare_data.py --out /content/dataset --n_worker 8 --size=128 /content/dataraw

%cd /content/alias-free-gan-pytorch
!python train.py --n_gpu 1 --conf config/config-t.jsonnet path=/content/dataset/

Thanks again!

about input fourier features.

Hi, thanks for the work. I have a question regarding the input Fourier features.
I think you might have included the margin in the target canvas (-0.5~0.5)?
That makes the input frequencies relatively lower.

class FourierFeature(nn.Module):
    def __init__(self, size, dim, cutoff, eps=1e-8):
        super().__init__()
        coords = torch.linspace(-0.5, 0.5, size + 1)[:-1]
        freqs = torch.linspace(0, cutoff, dim // 4)

A possible fix would be something like:

class FourierFeature(nn.Module):
    def __init__(self, size=16, margin=10, dim=512, cutoff=2, eps=1e-8):
        """
        size:   sampling rate (or feature map size)
        margin: expanded feature map margin size
        dim:    # channels
        cutoff: cutoff fc
        """
        super().__init__()

        normalized_margin = margin / size
        # -0.5-m ~ 0.5+m: uniformly sample size + 2*margin points (dropping the last one).
        # Note the margin: the target canvas is -0.5~0.5, so the extended canvas should be larger.
        coords = torch.linspace(-0.5 - normalized_margin,
                                 0.5 + normalized_margin,
                                size + 2 * margin + 1)[:-1]
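A quick sanity check of the fix (using the hypothetical defaults from the docstring above): the extended grid keeps the same per-sample spacing as the target canvas, so frequencies stay defined relative to the -0.5~0.5 canvas:

import torch

size, margin = 16, 10                           # hypothetical defaults from the sketch above
target = torch.linspace(-0.5, 0.5, size + 1)[:-1]
normalized_margin = margin / size
extended = torch.linspace(-0.5 - normalized_margin,
                          0.5 + normalized_margin,
                          size + 2 * margin + 1)[:-1]
# Both grids have spacing 1 / size = 0.0625; only the extent differs.
assert torch.allclose(target[1] - target[0], extended[1] - extended[0])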

How much GPU memory do I need to train?

Hi! I wanted to try this GAN, but I don't have enough memory to run it. I have a 1080 Ti (11 GB) and used training.batch=1.
Is there any way to optimize the network so that it fits?

Train script issue

I've been trying to run the repo via Colab. Everything goes fine until running the train script, then the error below is thrown. Has the repo been recently updated, or am I missing some dependency or some other detail? Any clue how to solve this?

IndexError                                Traceback (most recent call last)
/usr/local/lib/python3.7/dist-packages/pyparsing.py in _parseNoCache(self, instring, loc, doActions, callPreParse)
   1682         try:
-> 1683             loc, tokens = self.parseImpl(instring, preloc, doActions)
   1684         except IndexError:

8 frames

IndexError: string index out of range

During handling of the above exception, another exception occurred:

ParseException                            Traceback (most recent call last)
ParseException: Expected {: ... | {{{"=" | ":" | "+="} - [Suppress:(...)]... -} ConcatenatedValueParser:(...)} ...}, found end of text (at char 43), (line:1, col:44)

Scale affine transformation [WIP]

I have been experimenting with adding a couple of extra values for scaling the Fourier features. I still think my implementation is wrong, because what should look like a smooth zoom-in looks more like changing the FoV of the camera, so things in the center of the image scale at a slightly different rate. Some tweaks are needed for this to work seamlessly, but the results are still pretty cool (see the videos below). With some refinement, would you consider adding this as a configurable parameter? (Once I manage to make it work as I want, I could open a PR.)

pe.mp4 (video attachment)
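Not the author's actual patch, but one way to picture the scaling idea is to shrink the extent of the Fourier-feature sampling grid by a scale factor before evaluating the sin/cos features, so that scale > 1 corresponds to zooming in (names and interface here are hypothetical):

import torch

def scaled_fourier_grid(size, scale=1.0):
    # Hypothetical sketch: a sampling grid whose extent is the -0.5~0.5 canvas
    # divided by `scale`, so scale > 1 zooms in and scale < 1 zooms out.
    half = 0.5 / scale
    axis = torch.linspace(-half, half, size + 1)[:-1]
    grid_y, grid_x = torch.meshgrid(axis, axis)     # (size, size) each
    return torch.stack((grid_x, grid_y), dim=-1)    # (size, size, 2)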

Required GPU Memory

@rosinality thank you for always putting out such fantastic work. I have a question about your training details: how much GPU memory is required, at minimum, to run this implementation of Alias-Free GAN? In the paper, the authors mention training on an NVIDIA DGX-1 with 8 Tesla V100 GPUs, but there is no mention of how much GPU memory is required. Also, how long did it take you to train?

Augment set to true causes RuntimeError

When I set augment to true in the config file, I get a RuntimeError on lines 66 and 49 of stylegan2/op/fused_act.py:

RuntimeError: input must be contiguous

unless I make the input contiguous:

gradgrad_out = fused.fused_bias_act(gradgrad_input.contiguous(), gradgrad_bias, out, 3, 1, ctx.negative_slope, ctx.scale)

out = fused.fused_bias_act(input.contiguous(), bias, empty, 3, 0, negative_slope, scale)

CUDA error: illegal memory access was encountered

I'm getting the following error in stylegan2/op/fused_act.py at line 66:

RuntimeError: CUDA error: an illegal memory access was encountered

I did not use the Dockerfile; I'm running torch 1.8.1 on a 3090.

Artifacts

Hi. I wonder what those artifacts on the generated images are?
[attached sample: 064200]
It seems like they appear in your title animation with faces, too.
