
FET-GAN

This is the webpage of the paper:

Li W, He Y, Qi Y, Li Z, Tang Y. FET-GAN: Font Effect Transfer via K-shot Adaptive Instance Normalization[C]//Proceedings of the AAAI Conference on Artificial Intelligence. 2020, 34.

It is provided for educational/research purposes only. Please consider citing our paper if you find it useful for your work.

Abstract

Text effect transfer aims at learning the mapping between text visual effects while maintaining the text content. Although remarkably successful, existing methods have limited robustness in font transfer and weak generalization to unseen effects. To address these problems, we propose FET-GAN, a novel end-to-end framework that implements visual effect transfer with font variation among multiple text-effect domains. Our model achieves remarkable results both on arbitrary effect transfer between texts and on effect translation from text to graphic objects. With a few-shot fine-tuning strategy, FET-GAN can generalize a pre-trained model to a new effect. Through extensive experimental validation and comparison, our model advances the state of the art in the text effect transfer task. In addition, we have collected a font dataset including 100 fonts of more than 800 Chinese and English characters. Based on this dataset, we demonstrate the generalization ability of our model through an application that automatically complements a font library from few-shot samples, significantly reducing the labor cost for font designers.
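The K-shot adaptive instance normalization named in the title can be sketched roughly as follows. This is a minimal NumPy illustration of the general idea, not the paper's exact formulation: we assume per-channel style statistics are averaged over the K reference feature maps and then applied to the normalized content features, as in standard AdaIN.

```python
import numpy as np

def adain(content, style_mean, style_std, eps=1e-5):
    """Adaptive instance normalization: restyle `content` feature maps
    of shape (C, H, W) with per-channel style statistics of shape (C,)."""
    mu = content.mean(axis=(1, 2), keepdims=True)
    sigma = content.std(axis=(1, 2), keepdims=True)
    normalized = (content - mu) / (sigma + eps)
    return style_std[:, None, None] * normalized + style_mean[:, None, None]

def k_shot_adain(content, references, eps=1e-5):
    """Average style statistics over K reference feature maps (K, C, H, W),
    then apply AdaIN -- a rough sketch of the K-shot idea."""
    means = references.mean(axis=(2, 3))  # (K, C) per-reference channel means
    stds = references.std(axis=(2, 3))    # (K, C) per-reference channel stds
    return adain(content, means.mean(axis=0), stds.mean(axis=0), eps)
```

In the actual model these statistics would be predicted from learned reference encodings rather than computed directly on images; the sketch only shows how K references collapse into a single pair of modulation parameters.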

Presentation Video (YouTube)

Experimental Results

Download

Paper

[coming soon]

Webpage

  • Google Drive Datasets

Code

  • Google Drive Datasets

Pre-trained models

  • Google Drive Datasets
  • Google Drive Datasets

Datasets

We collected a new dataset called Fonts-100, which includes 100 fonts, each with 775 Chinese characters, 52 English letters, and 10 Arabic numerals. There are 83,700 images in total, each 320×320 pixels. The following figure shows an overview of these fonts for the same Chinese character.

This Fonts-100 dataset is used to demonstrate the ability of our model to assist font designers in complementing characters in a font library. We take the first 80 fonts for training and hold out the remaining 20 as unseen fonts for fine-tuning. You can get this dataset here:

  • Google Drive Datasets
  • Google Drive Datasets

We also provide the TextEffects dataset via the drive links above. This dataset was proposed in TET-GAN. It is paired: each text-effect image comes with its corresponding plain font image. We take 6 separated fonts and combine them with the original 64 effects, giving 70 classes of text effects in total, which we divide into train/finetune splits in the same way as Fonts-100.
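The 80/20 train/finetune split described above could be produced with a few lines like the following. This is a hypothetical sketch (the repo's own data preparation may differ); it assumes one subdirectory per font, ordered by name.

```python
from pathlib import Path
import shutil

def split_fonts(src_dir, dst_dir, n_train=80):
    """Copy the first `n_train` font folders into train/ and the rest
    into finetune/, mirroring the directory layout used by the datasets."""
    fonts = sorted(Path(src_dir).iterdir())
    for i, font in enumerate(fonts):
        split = "train" if i < n_train else "finetune"
        dest = Path(dst_dir) / split / font.name
        shutil.copytree(font, dest, dirs_exist_ok=True)
```

Running `split_fonts("Fonts100_raw", "datasets/Fonts100")` would then yield the `train`/`finetune` layout shown in the directory tree below (the source path here is illustrative).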

How to Use

Installation

  • Clone this repo:
git clone https://github.com/liweileev/FET-GAN
cd FET-GAN/codes/
  • Install dependencies
python -m pip install -r requirements.txt

Then you get the directory & file structure like this:

codes
└───configs
└───data
└───datasets
│   └───Fonts100
│   │   └───finetune
│   │   └───train
│   └───TextEffects
│   │   └───finetune
│   │   └───train
└───finetune_imgs
└───models
└───networks
└───outputs
│   └───Fonts100
│   │   └───checkpoints
│   │   │   │   30_net_D.pth
│   │   │   │   30_net_E.pth
│   │   │   │   30_net_G.pth
│   │   │   opt.txt
│   └───TextEffects
│   │   └───checkpoints
│   │   │   │   30_net_D.pth
│   │   │   │   30_net_E.pth
│   │   │   │   30_net_G.pth
│   │   │   opt.txt
│   requirements.txt
│   test.py
└───testimgs
│   train.py
└───utils

Quick Testing

  • Test the model:
python test.py

The default test runs on the TextEffects dataset. You can change this setting in test.py.

  • View the results:

The test results will be saved to an HTML file at ./testresults/TextEffects/TextEffects/test_latest/index.html.

The results for the TextEffects dataset look like this:

The results for the Fonts100 dataset look like this:

  • single image mode / images directory mode

You can switch between single-image mode and directory mode by setting opt['testsource']/opt['testsource_dir'] and opt['testrefs']/opt['testrefs_dir'] in test.py (set the options for the unused mode to None), then re-run the script.

The number of reference images can also be changed via opt['K']; the default is 8.

Train

  • Train the model:
python train.py

The default training runs on the TextEffects dataset. You can change opt['fonteffects_dir'] in train.py. All other settings are in configs/font_effects.yaml.

  • View training results and loss plots

We use Visdom for visualization by default; the default URL is http://localhost:8097.

Set display_id to a value less than 1 if you don't want to use Visdom.

To see more intermediate results, check out ./outputs/[train name]/web/index.html. The page looks like this:

The default number of training epochs is 30, and we save the networks after every epoch to ./outputs/[train name]/checkpoints/.
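Given the {epoch}_net_{E|G|D}.pth naming pattern visible in the directory tree above, the checkpoint files for a saved epoch can be located like this. A small sketch only; actually loading the weights would additionally require PyTorch.

```python
from pathlib import Path

def checkpoint_paths(train_name, epoch, root="outputs"):
    """Build the expected checkpoint file paths for one saved epoch,
    following the {epoch}_net_{net}.pth naming seen in the tree above."""
    ckpt_dir = Path(root) / train_name / "checkpoints"
    return {net: ckpt_dir / f"{epoch}_net_{net}.pth" for net in ("E", "G", "D")}
```

For example, `checkpoint_paths("TextEffects", 30)["G"]` points at outputs/TextEffects/checkpoints/30_net_G.pth, matching the pre-trained models listed above.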
