bluer555 / cr-gan
Yu Tian et al., "CR-GAN: Learning Complete Representations for Multi-view Generation", IJCAI 2018
I am running the train.py code to train the model on the dataset you provided, but after three hours there is no change in status; it only shows the total number of images. Can you tell me how long training should take?
I tested evaluate.py on my own pictures, already aligned with the MTCNN network, but the results look poor and strange. Has anyone else had the same problem?
Hello, thank you for your code. Could you upload the pre-trained model to Baidu Netdisk? Google Drive is not accessible here.
The output here puts all the results together in one picture. Can you separate the results and output each one as a single image?
Hello, I see that VIGAN (ICCV 2019) references CR-GAN and reports its L1 error and SSIM. When I test the pretrained model, the L1 error and SSIM I measure are worse than the results given in the VIGAN paper. Do you know why, and have you tested these yourself?
In addition, when I use the released code to train from scratch, there is a gap between my results and the paper's. Do you use any tricks during training?
Hello, experts, nice work on this architecture, the training algorithm, and the idea of self-supervised learning.
I think z follows a normal distribution, but z_bar, which comes from the encoder, may not.
So when we train the encoder, the distribution induced by the dataset differs from that of z, but your algorithm does not constrain the relation between them.
As the title says, I would like to ask about this; VAE-GAN, I believe, can control these two distributions.
Do you have any thoughts on this, or any improvements?
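One quick way to check this concern empirically: fit a Gaussian to a batch of encoder outputs and compute its closed-form KL divergence to the N(0, 1) prior. Below is a minimal pure-Python sketch; the helper name and the simulated "encoder" samples are mine, not from the repo.

```python
import math
import random

def gaussian_kl_to_standard_normal(samples):
    """Estimate KL( N(mean, var) || N(0, 1) ) from a batch of scalar samples.

    Fits a Gaussian to the samples and uses the closed-form KL to the
    standard normal prior, as a rough measure of how far an encoder's
    z_bar distribution has drifted from the z ~ N(0, 1) prior.
    """
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / n
    # Closed form: KL = -log(sigma) + (sigma^2 + mu^2)/2 - 1/2
    return -0.5 * math.log(var) + (var + mean ** 2) / 2.0 - 0.5

random.seed(0)
z = [random.gauss(0.0, 1.0) for _ in range(10000)]      # matches the prior
z_bar = [random.gauss(0.5, 2.0) for _ in range(10000)]  # a drifted "encoder" output

print(gaussian_kl_to_standard_normal(z))      # close to 0
print(gaussian_kl_to_standard_normal(z_bar))  # clearly positive
```

A VAE-GAN-style fix would add this KL (or an adversarial loss on z_bar) as an extra training term, which is exactly the regularizer the question is asking about.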
Hi~
First thanks for sharing your cropped Multi-PIE datasets, which helps me a lot. But it's a pity that there are too few expressions for my expression recovery project. Could you please send me the images of the remaining expressions ( just the images from angles 050, 051 and 140 with 07 illumination of each expression)?
I know it's a lot to ask, but I've been scouring the Internet for datasets, and Multi-PIE is the only one that fits my project. I can also pay a fee as thanks for your generous sharing.
Here is my email: [email protected]
I'd be grateful if you would contact me.
Thanks for your code! According to the function get_300w_LP in data_loader.py, there should be some '.txt' files containing the labels for the 300W-LP dataset, but there are no such files in your cropped version. Should I get the labels from the original 300W-LP dataset webpage?
Looking forward to your reply.
What issues can lead to the error "RuntimeError: CUDA error: out of memory"? I would like to know the answer. Thank you!
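For what it's worth, the usual causes are a batch size too large for the GPU, other processes holding GPU memory, or tensors accidentally retained across iterations (PyTorch surfaces the failure as a RuntimeError). One generic mitigation pattern, sketched here in plain Python with a toy step function (none of these names come from the repo), is to halve the batch size and retry:

```python
def run_with_backoff(step_fn, batch_size, min_batch=1):
    """Call step_fn(batch_size); on a memory error, halve the batch and retry.

    Generic pattern for "CUDA error: out of memory": PyTorch raises it as a
    RuntimeError, so we catch that (and MemoryError for the CPU case) and
    back off until the step fits or the minimum batch size is reached.
    """
    while batch_size >= min_batch:
        try:
            return batch_size, step_fn(batch_size)
        except (MemoryError, RuntimeError):
            batch_size //= 2  # retry with a smaller batch
    raise RuntimeError("out of memory even at the minimum batch size")

# Toy step that "fits" only up to batch size 16.
def fake_step(bs):
    if bs > 16:
        raise RuntimeError("CUDA error: out of memory")
    return "ok"

print(run_with_backoff(fake_step, 64))  # (16, 'ok')
```

In practice, lowering the `--batchSize` argument to train.py (or closing other GPU processes reported by `nvidia-smi`) is the first thing to try.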
In your paper, you mention that you also implemented a WGAN variant of DR-GAN. Did you just remove the batch norm in the decoder of DR-GAN, or were there other modifications?
How do you crop the face?
I use two datasets for training, crop_0907 and multi_PIE_crop_128. Before running python train.py according to the README, I modified train.py in a few places, as follows:
First: parser.add_argument('--modelf', default='./model', help='folder to output images and model checkpoints') (I put the pretrained models in this directory);
Second: I modified the path and name of the models:
load_model(G_xvz, args.modelf, 'netG_xvz.pth')
load_model(G_vzx, args.modelf, 'netG_vzx.pth')
load_model(D_xvs, args.modelf, 'netD_xvs.pth')
Environment:
Linux 16.04;
python 2.7.12;
pytorch 0.3.1;
After I run python train.py, the terminal shows that it loaded the relevant parameters and then does not proceed. The picture is here:
I want to know whether this is correct. I don't clearly understand how to train the model in CR-GAN. Can you help me? Thanks.
Thank you for your code! Can you please also share the pretrained model?
Hi, thank you for sharing your cropped dataset. Could you please tell me how you cropped it from the original Multi-PIE dataset? And if possible, could you share the rest of the cropped Multi-PIE dataset? I am looking forward to your reply.
Hello,
My environment is CUDA 8 + PyTorch 0.4.1. If I run the example from your test demo, I get an error that my CUDA version does not meet the runtime requirement, and since my environment also has TensorFlow 1.4, I cannot upgrade CUDA.
So I tried to run your demo on the CPU: I changed the default of --cuda to False, and in load_pretrained_model() I modified the arguments to torch.load():
state_dict = torch.load('%s/%s' % (path, name), map_location=lambda storage, loc: storage)
However, this then prints a series of "not load" messages:
not load weights module.conv.weight
not load weights module.conv.bias
not load weights module.resBlock0.conv_shortcut.conv.weight
not load weights module.resBlock0.conv_shortcut.conv.bias
not load weights module.resBlock0.conv1.weight
……
Also in this function, what does own_state[name].copy_(param) do? Does it copy param from state_dict into own_state? And what is the role of the own_state dictionary?
Finally, the images generated in the evaluate folder contain only the first original image, with all the others gray; that surely cannot be correct, right?
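On the own_state[name].copy_(param) question: in a typical partial-loading loop, own_state is the model's own state_dict() (parameters keyed by name), and copy_ overwrites a parameter in place with the checkpoint value. A plain-Python mimic of such a loop (the function name and dict shapes are illustrative, not the repo's exact code) also suggests why the "not load weights module.…" messages appear: checkpoint keys saved under DataParallel carry a "module." prefix that a plain CPU model's keys lack, so nothing matches, and the untouched random weights would produce exactly the gray outputs described:

```python
def load_pretrained(own_state, state_dict):
    """Copy matching entries from a checkpoint dict into the model's dict.

    own_state plays the role of model.state_dict(); the assignment stands in
    for own_state[name].copy_(param). Checkpoint keys with no counterpart in
    the model are reported, mirroring the "not load weights ..." messages.
    """
    for name, param in state_dict.items():
        if name in own_state:
            own_state[name] = param
        else:
            print("not load weights", name)
    return own_state

model = {"conv.weight": 0.0, "conv.bias": 0.0}
checkpoint = {"module.conv.weight": 1.5, "module.conv.bias": 0.5}

load_pretrained(dict(model), checkpoint)    # every key reported as "not load"

# Stripping the DataParallel prefix makes the names line up again.
fixed = {k.replace("module.", "", 1): v for k, v in checkpoint.items()}
print(load_pretrained(dict(model), fixed))  # all weights loaded
```

If this diagnosis is right, stripping the "module." prefix from the checkpoint keys (or wrapping the CPU model in nn.DataParallel) should make the weights load and fix the gray images.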
Hi, I want to download the cropped dataset from https://drive.google.com/open?id=1DD6AO9Y5rAgiiW7IJY2kBxI_bCcfhYo4, but this link doesn't work. Can you send the cropped 300W-LP dataset to my email [email protected], or send me a valid download link? Thanks a lot!
Thank you, @bluer555, for providing the code for your paper. Three points in the code confuse me, so I would like to ask for your help.
(1) In train.py, you initialize the variable tmp with random.uniform(0, 1), and tmp decides whether to do reconstruction; after tmp = torch.LongTensor(idexs), the content of tmp is the angle of the image. What does tmp represent, and is the reconstruction random?
(2) In Section 4.1 of your paper, you say you train 10 more epochs of self-supervised learning, but I cannot find where those self-supervised epochs are in the code, apart from the 25 epochs of supervised learning.
(3) Regarding the dataset: the dataset named crop0907 contains resized images of 300W-LP. How do I generate the txt files for 300w_LP_size_128?
Thanks for your help! @bluer555
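On question (1), a hedged reading of the logic around tmp: the scalar random.uniform(0, 1) looks like a per-batch coin flip between the reconstruction and generation paths, while the later torch.LongTensor(idexs) is a separate tensor of target view (angle) indices that happens to reuse the variable name. A toy sketch of that control flow, where every name and the view count of 9 are assumptions rather than the repo's actual values:

```python
import random

def pick_training_path(num_views=9, p_recon=0.5, batch=4, rng=random):
    """Illustrative per-batch choice between two training paths.

    The coin flip mirrors the tmp = random.uniform(0, 1) test; the list of
    view indices mirrors the later tmp = torch.LongTensor(idexs) tensor of
    target angles, one per sample in the batch.
    """
    if rng.uniform(0, 1) < p_recon:
        path = "reconstruction"  # reconstruct the input view
    else:
        path = "generation"      # generate a new random view
    view_indices = [rng.randrange(num_views) for _ in range(batch)]
    return path, view_indices

random.seed(1)
print(pick_training_path())
```

So under this reading, yes: which path is taken on a given iteration is random, with the uniform draw acting as the switch.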
I need the pose labels of the cropped Multi-PIE dataset you used.
Do you have the pose labels?
I would very much appreciate it if you could share them.