
AnimeGAN

AnimeGANv2, the improved version of AnimeGAN.
A TensorFlow implementation of AnimeGAN for fast photo animation!     日本語
The paper can be accessed here or on the website.

Update:

  1. data_mean.py calculates the per-channel (BGR) color difference of the entire style dataset; these values are used during training to offset the effect of the style data's overall tone on the generated images. For example, the overall tone of the Hayao style data is yellowish.
  2. adjust_brightness.py adjusts the brightness of the generated image based on the brightness of the input photo.
  3. Some training hyperparameters have been changed.
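The brightness-matching idea behind adjust_brightness.py can be sketched as follows. This is a minimal numpy version, not the repo's actual script; the Rec.601 luma weights are an assumption, and the real implementation may use a different formula.

```python
import numpy as np

def adjust_brightness(generated_bgr, photo_bgr):
    """Scale the generated image so its mean brightness matches the photo.

    Minimal sketch of the idea behind adjust_brightness.py; the repo's
    script may differ. Inputs are HxWx3 uint8 BGR images.
    """
    def mean_luma(img):
        b, g, r = img[..., 0], img[..., 1], img[..., 2]
        # Rec.601 luma weights -- an assumption, not taken from the repo.
        return float((0.114 * b + 0.587 * g + 0.299 * r).mean())

    gen = generated_bgr.astype(np.float64)
    scale = mean_luma(photo_bgr) / max(mean_luma(gen), 1e-8)
    return np.clip(np.rint(gen * scale), 0, 255).astype(np.uint8)
```

A uniformly dark generated image paired with a bright input photo is simply scaled up until the two mean luma values agree.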

Online access: Thanks to @TonyLianLong for developing an online demo; you can animate photos in a browser without installing anything. Click here to try it.

Good news: tensorflow-1.15.0 is compatible with the code of this repository. With that version you can run this code without any modification, provided that the CUDA and cuDNN versions matching your TensorFlow build are installed correctly. Versions between tf-1.8.0 and tf-1.15.0 may also be compatible with this repository, but I have not tested them extensively.


Some suggestions:

  1. Since the real photos in the training set are all landscape photos, if you want to stylize photos with people as the main subject, you should add at least 3000 photos of people to the training set and retrain to obtain a new model.
  2. To obtain a better face animation effect, when using paired images for training, the faces in the photos and the faces in the anime style data should match in gender as much as possible.
  3. The generated stylized images are affected by the overall brightness and tone of the style data, so try not to select nighttime anime images as style data, and apply exposure compensation to the whole style dataset to keep its brightness consistent.
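The exposure-compensation suggestion above can be sketched as a simple per-image rescaling toward a shared mean intensity. This is an illustration of the idea, not a script from this repo; a real pipeline might operate in a luma channel or use gamma correction instead.

```python
import numpy as np

def equalize_exposure(images, target=None):
    """Scale each style image so its mean intensity matches a shared target.

    Sketch of the exposure-compensation suggestion, not code from this
    repo. images: list of HxWx3 uint8 BGR arrays; target defaults to the
    mean intensity of the whole dataset.
    """
    means = [float(img.mean()) for img in images]
    if target is None:
        target = float(np.mean(means))
    out = []
    for img, m in zip(images, means):
        scale = target / max(m, 1e-8)
        out.append(np.clip(np.rint(img.astype(np.float64) * scale),
                           0, 255).astype(np.uint8))
    return out
```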

News:
      AnimeGANv2 has been released and can be accessed here.

The improvements in AnimeGANv2 mainly include the following 4 points:

   1. Solve the problem of high-frequency artifacts in the generated images.
   2. Make the model easy to train and able to directly achieve the results shown in the paper.
   3. Further reduce the number of parameters of the generator network.
   4. Use new, high-quality style data, taken from Blu-ray (BD) movies as much as possible.


Requirements

  • python 3.6
  • tensorflow-gpu
    • tensorflow-gpu 1.8.0 (ubuntu, GPU 1080Ti or Titan xp, cuda 9.0, cudnn 7.1.3)
    • tensorflow-gpu 1.15.0 (ubuntu, GPU 2080Ti, cuda 10.0.130, cudnn 7.6.0)
  • opencv
  • tqdm
  • numpy
  • glob
  • argparse

Usage

1. Download vgg19 or Pretrained model

vgg19.npy

Pretrained model

2. Download dataset

Link

3. Do edge_smooth

e.g. python edge_smooth.py --dataset Hayao --img_size 256
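A rough numpy-only sketch of what edge smoothing does: blur only the pixels near strong edges so the style images gain a blurred-edge counterpart. The repo's edge_smooth.py uses OpenCV's Canny detector with dilation and a Gaussian blur instead; the gradient threshold and box blur here are simplifications.

```python
import numpy as np

def edge_smooth(gray, thresh=30.0, k=2):
    """Blur only the pixels near strong edges, leaving flat regions intact.

    numpy-only sketch; edge_smooth.py in this repo detects edges with
    Canny and applies a Gaussian blur instead. gray: HxW array.
    """
    g = gray.astype(np.float64)
    gy, gx = np.gradient(g)                  # crude edge detector
    edges = np.hypot(gx, gy) > thresh
    # (2k+1)x(2k+1) box blur with edge padding.
    H, W = g.shape
    padded = np.pad(g, k, mode='edge')
    blurred = np.zeros_like(g)
    for dy in range(2 * k + 1):
        for dx in range(2 * k + 1):
            blurred += padded[dy:dy + H, dx:dx + W]
    blurred /= (2 * k + 1) ** 2
    out = g.copy()
    out[edges] = blurred[edges]
    return out
```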

4. Calculate the three-channel(BGR) color difference

e.g. python data_mean.py --dataset Hayao
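Conceptually, the three-channel color difference is the mean of each BGR channel over the whole style dataset minus the average of those three means. A sketch of that computation (data_mean.py may differ in details):

```python
import numpy as np

def channel_mean_diff(images):
    """Per-channel (BGR) mean of the style dataset minus the overall mean.

    Sketch of what data_mean.py computes; details may differ. The three
    returned values sum to roughly zero, e.g. a yellowish dataset shows
    up as a negative blue component.
    """
    total = np.zeros(3, dtype=np.float64)
    count = 0
    for img in images:                       # HxWx3 uint8, BGR order
        total += img.reshape(-1, 3).astype(np.float64).sum(axis=0)
        count += img.shape[0] * img.shape[1]
    per_channel = total / count
    return per_channel - per_channel.mean()
```

The three resulting values are what the --data_mean flag expects in the training step.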

5. Train

e.g. python main.py --phase train --dataset Hayao --data_mean [13.1360,-8.6698,-4.4661] --epoch 101 --init_epoch 1

6. Extract the weights of the generator

e.g. python get_generator_ckpt.py --checkpoint_dir ../checkpoint/AnimeGAN_Hayao_lsgan_300_300_1_1_10 --style_name Hayao

7. Test

e.g. python main.py --phase test --dataset Hayao
or python test.py --checkpoint_dir checkpoint/generator_Hayao_weight --test_dir dataset/test/real --style_name H

8. Convert video to anime

e.g. python video2anime.py --video video/input/お花見.mp4 --checkpoint_dir ../checkpoint/generator_Hayao_weight
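The frame conversion around the generator can be sketched as follows (numpy only). The [-1, 1] normalization and BGR-to-RGB ordering are assumptions, so check the repo's utils for the exact transform; the generator, cap, and writer names in the commented loop are hypothetical.

```python
import numpy as np

def preprocess(frame_bgr):
    """uint8 BGR frame -> float32 RGB array in [-1, 1].

    A common GAN input convention; an assumption here, check the
    repo's utils for the exact transform it uses.
    """
    rgb = frame_bgr[..., ::-1].astype(np.float32)
    return rgb / 127.5 - 1.0

def postprocess(output):
    """float RGB array in [-1, 1] -> uint8 BGR frame for the video writer."""
    rgb = np.clip(np.rint((output + 1.0) * 127.5), 0, 255).astype(np.uint8)
    return rgb[..., ::-1]

# The loop itself (sketch): video2anime.py reads frames with
# cv2.VideoCapture, runs the generator on each one, and writes the
# result with cv2.VideoWriter:
#
#   while True:
#       ok, frame = cap.read()
#       if not ok:
#           break
#       writer.write(postprocess(generator(preprocess(frame))))
```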


Results

😊 pictures from the paper - AnimeGAN: a novel lightweight GAN for photo animation




😍 Photo to Hayao Style












License

This repo is made freely available to academic and non-academic entities for non-commercial purposes such as academic research, teaching, and scientific publication. Permission is granted to use AnimeGAN provided that you agree to the license terms. For commercial use, please contact us by email to obtain an authorization letter.

Author

Xin Chen, Gang Liu, Jie Chen

Acknowledgment

This code is based on CartoonGAN-Tensorflow and Anime-Sketch-Coloring-with-Swish-Gated-Residual-UNet. Thanks to the contributors of those projects.
