
beautygan_pytorch's Introduction

Hi there, I'm Wentao Jiang 👋

I am a fourth-year PhD student at the School of Computer Science and Engineering, Beihang University (BUAA), supervised by Prof. Si Liu.

I will graduate in 2024. I am actively looking for a job as a computer vision researcher. Email me if you are interested!

beautygan_pytorch's People

Contributors

wtjiang98


beautygan_pytorch's Issues

makeup loss

Hi, thanks for your implementation work, but I doubt whether the implementation of the makeup loss is correct. The paper says that we should first compute histogram matching between the source image and the reference image, and then compute the L2 norm between the fake image produced by the generator and the matched image. I
find that you just compute the makeup loss between the generated image and the reference image, without normalizing the values to [-1, 1], and use the L1 norm. Could you help me? Thanks!
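
For reference, a minimal sketch of the loss as the paper describes it, assuming scikit-image's match_histograms for the histogram-matching step (the function and variable names below are illustrative, not the repo's code):

import torch
import torch.nn.functional as F
from skimage.exposure import match_histograms  # scikit-image >= 0.19

def makeup_region_loss(fake, src, ref, mask):
    # fake, src, ref: (3, H, W) tensors in [0, 1]; mask: (1, H, W) binary region mask.
    src_np = src.permute(1, 2, 0).cpu().numpy()
    ref_np = ref.permute(1, 2, 0).cpu().numpy()
    # Remap the colour histogram of the source to that of the reference
    # (the paper does this per facial region: lips, eyes, face).
    matched_np = match_histograms(src_np, ref_np, channel_axis=-1)
    matched = torch.from_numpy(matched_np).permute(2, 0, 1).to(fake)
    # L2 distance between the generated image and the matched image, inside the mask.
    return F.mse_loss(fake * mask, matched * mask)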

How to run the code

Hello, I am interested in your work. I have cloned your code and downloaded the BeautyGAN dataset (and moved it into the code directory). But when I run test.py / train.py, it reports the error: No such file or directory: './data/images/train_SYMIX.txt'. I wonder what that .txt file means, and I really hope that you can give some guidance on running the code.

PS: the dataset is organized like: makeup_dataset --> all --> images / segs --> makeup / non-makeup. The pictures in "segs" all appear to be just black.
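
(Note for anyone hitting the same thing: the seg images are most likely per-pixel label maps, where each pixel stores a small class index, so they look black in an image viewer. A quick way to check, as a sketch with an illustrative path:)

import numpy as np
from PIL import Image

seg = np.array(Image.open('segs/makeup/example.png'))  # path is illustrative
print(seg.shape, seg.dtype, np.unique(seg))
# Scale the label values up so the regions become visible.
Image.fromarray((seg * (255 // max(1, seg.max()))).astype(np.uint8)).show()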

MT-dataset

Hello! May I ask whether the segmentation code used to produce the masks in the segs folder of the MT-dataset has been open-sourced?

pretrained vgg weight

Could you upload the pre-trained VGG weights, or let me know how to get them?

A Guide To Run The Code

After many hours, I can finally run the code, haha. Here are some tips for running it:

  1. train_MAKEMIX.txt lists the file names of the with-makeup pictures in the MT dataset (train_SYMIX.txt lists the non-makeup ones). It looks like this:
    (screenshot of the .txt contents omitted)

The names are duplicated in every row because the mask pictures share the same file names. The mask pictures are in the "segs" folder of the MT dataset.

You can write a Python script to automatically collect the picture names; a rough sketch is given right after this step. As for me, I chose 2400 makeup pictures for training and kept the remaining 300 for testing. Remember to duplicate the name in each row!
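
The following is only a sketch under assumptions taken from the description above; the folder layout, output file names, and the space separator are guesses, not taken from the repo, so check makeup.py for the exact row format it expects.

    # Sketch: build the train/test .txt lists; each row repeats the file name,
    # since an image and its mask share the same name.
    import os, random

    root = './data/images'   # assumed layout; adjust to your own paths
    makeup = sorted(os.listdir(os.path.join(root, 'makeup')))
    non_makeup = sorted(os.listdir(os.path.join(root, 'non-makeup')))

    def write_list(names, path):
        with open(path, 'w') as f:
            for name in names:
                f.write(f'{name} {name}\n')  # duplicated: image name and mask name

    random.shuffle(makeup)
    write_list(makeup[:2400], os.path.join(root, 'train_MAKEMIX.txt'))
    write_list(makeup[2400:], os.path.join(root, 'test_MAKEMIX.txt'))
    write_list(non_makeup, os.path.join(root, 'train_SYMIX.txt'))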

  2. You can organize the dataset like this:
    (screenshot of the folder layout omitted)
    Then you should change the paths in makeup.py accordingly. For example:
    (screenshot of the path settings omitted)
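
Since the screenshots did not survive, here is one plausible layout and the kind of edit meant; the tree and the variable names below are purely illustrative and may not match what makeup.py actually uses:

    # Assumed folder layout:
    #   data/images/makeup/        with-makeup photos
    #   data/images/non-makeup/    no-makeup photos
    #   data/images/train_MAKEMIX.txt, train_SYMIX.txt
    #   data/segs/makeup/, data/segs/non-makeup/   label masks (same file names)
    #
    # In makeup.py, point the corresponding path variables (names hypothetical)
    # at your own copies:
    image_dir = './data/images'
    seg_dir = './data/segs'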

  3. You can just download the VGG model from the PyTorch model zoo:

import torchvision.models as models

# replace the original custom VGG and its weight file:
# self.vgg = net.VGG()
# self.vgg.load_state_dict(torch.load('vgg_conv.pth'))
self.vgg = models.vgg16(pretrained=True)

And then, write a forward function of your own to grab the output of the conv4_1 layer:

# Print the vgg16 model and you will find that conv4_1 (the first conv of the
# 4th block) sits at index 17 of model.features.
def vgg_forward(self, model, x):
    for i in range(18):            # run features[0] .. features[17]
        x = model.features[i](x)
    return x
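
If you want to double-check that index on your torchvision version before hard-coding range(18), a quick sanity check (not from the repo):

import torchvision.models as models

vgg = models.vgg16(pretrained=True)
for i, layer in enumerate(vgg.features):
    print(i, layer)
# index 17 should print a Conv2d(256, 512, kernel_size=(3, 3), ...)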

Finally:

vgg_org = self.vgg_forward(self.vgg, org_A)
vgg_org = Variable(vgg_org.data).detach()
vgg_fake_A = self.vgg_forward(self.vgg, fake_A)
g_loss_A_vgg = self.criterionL2(vgg_fake_A, vgg_org) * self.lambda_A * self.lambda_vgg
......

(My home network is really slow at the moment, so it is hard to download the ImageNet dataset, and it is hard to make the parameters match, since the author has made some modifications to the VGG. I think the method above will work for you.)

Lastly, I am really grateful for the work the author has done. It helps me a lot. Many thanks!

About updating the code

Hi, thanks for sharing your amazing work!

Do you have any plan to update the project code in November?

Confused in dataloader

Hello, I'm very interested in this work. But when I ran the code, there were so many options. I also downloaded the dataset, but I am not sure which args refer to the non-makeup pictures. Please help!
