After many hours, I finally got the code running, haha. Here are some tips for running it:
- train_MAKEMIX.txt lists the names of the with-makeup pics in the MT dataset (train_SYMIX lists the non-makeup pics). It looks like this:
The names are duplicated in every row because the mask pics share the same names; the mask pics are in the "seg" folder of the MT dataset.
You can write a Python script to read the pic names automatically. As for me, I chose 2400 makeup pics for training and the remaining 300 for testing. Remember to duplicate each name!
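A short script like the following can generate the list files. The folder path, output filenames, and the 2400/300 split are my own choices here, and I'm assuming the two copies of each name are space-separated; match the format of the repo's original list files:

```python
import os
import random

def make_split_lists(makeup_dir, train_path, test_path, n_train=2400):
    """Write train/test list files where each row repeats the image name,
    since the seg masks in the MT dataset share the same filenames."""
    names = sorted(os.listdir(makeup_dir))
    random.seed(0)  # reproducible split
    random.shuffle(names)
    for path, subset in [(train_path, names[:n_train]),
                         (test_path, names[n_train:])]:
        with open(path, 'w') as f:
            for name in subset:
                f.write('{} {}\n'.format(name, name))  # duplicated name
```

Call it as e.g. `make_split_lists('MT-dataset/images/makeup', 'train_MAKEMIX.txt', 'test_MAKEMIX.txt')`, with the path adjusted to wherever you unpacked the dataset.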
- You can organize the dataset like this:
Then you should change the paths in makeup.py accordingly. For example:
- You can just download the VGG model from the PyTorch model zoo:

```python
import torchvision.models as models

# self.vgg = net.VGG()
# self.vgg.load_state_dict(torch.load('vgg_conv.pth'))
self.vgg = models.vgg16(pretrained=True)
```
Then write a forward function of your own to extract the output of the 4th block's conv layer (conv4_1):
```python
# You can print the vgg16 model and find that this conv layer's index
# in model.features is 17.
def vgg_forward(self, model, x):
    for i in range(18):
        x = model.features[i](x)
    return x
```
Finally:

```python
vgg_org = self.vgg_forward(self.vgg, org_A)
vgg_org = Variable(vgg_org.data).detach()
vgg_fake_A = self.vgg_forward(self.vgg, fake_A)
g_loss_A_vgg = self.criterionL2(vgg_fake_A, vgg_org) * self.lambda_A * self.lambda_vgg
```
......
(Right now my home network speed really sucks... it is so hard to download the ImageNet dataset, and it's hard to make the parameters match since the author has made some modifications to the VGG. I think the method above should work for you~)
Finally, I am really grateful for the work the author has done. It helped me a lot, many thanks!