ggghsl / infoswap-master
Official PyTorch Implementation for InfoSwap
License: Other
Could you upload the pretrained models to Google Drive?
I can't access Baidu from outside China.
Could you share how the pretrained face recognition model was trained (training dataset, loss, etc.)?
Hello, I am running into the following problem. Could you please provide this checkpoint? Thank you.
FileNotFoundError: No such file or directory: './checkpoints_512/w_kernel_smooth\ckpt_ks_G.pth'
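An aside on the path itself, offered as an observation rather than a confirmed cause: the mixed / and \ separators are typical of os.path.join running on Windows against a hard-coded POSIX prefix. A minimal portable sketch (not the repo's code), reusing the names from the error message:
from pathlib import Path
# pathlib joins path components portably, so the same line
# produces a well-formed path on Windows and POSIX systems alike.
ckpt = Path('./checkpoints_512/w_kernel_smooth') / 'ckpt_ks_G.pth'
print(ckpt.exists(), ckpt)  # False until the checkpoint is actually downloaded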
In iib.py, around line 191: why is the standard deviation computed as the mean? I don't quite understand.
m_s = torch.mean(Rs, dim=0) # [C, H, W]
std_s = torch.mean(Rs, dim=0)
Rs_params.append([m_s, std_s])
eps_s = torch.randn(size=Rt.shape).to(Rt.device) * std_s + m_s
feat_t = Rt * (1. - lambda_t) + lambda_t * eps_s
Xt_feats.append(feat_t) # only related with lambda
m_t = torch.mean(Rt, dim=0) # [C, H, W]
std_t = torch.mean(Rt, dim=0)
Rt_params.append([m_t, std_t])
eps_t = torch.randn(size=Rs.shape).to(Rs.device) * std_t + m_t
feat_s = Rs * (1. - lambda_s) + lambda_s * eps_t
Xs_feats.append(feat_s) # only related with lambda
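The point of the question is that both statistics call torch.mean; presumably torch.std was intended for the second line of each pair. A minimal sketch of the presumed fix (an assumption, not confirmed by the authors), keeping the variable names from the snippet above and stand-in shapes:
import torch
# Presumed intent: a real standard deviation over the batch dimension
# instead of a second mean (Rs/Rt below are stand-in features).
Rs = torch.randn(8, 256, 28, 28)  # stand-in source features [B, C, H, W]
Rt = torch.randn(8, 256, 28, 28)  # stand-in target features [B, C, H, W]
m_s = torch.mean(Rs, dim=0)       # [C, H, W]
std_s = torch.std(Rs, dim=0)      # [C, H, W]
eps_s = torch.randn_like(Rt) * std_s + m_s  # sample ~ N(m_s, std_s**2)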
The preprocessing example, namely test_on_images.ipynb, is invalid. Could you please upload it again?
In inference_demo.py, R = encoder.features[i], while in the forward function of IIB, R = readout_feats[i].
Which should I use for training?
And when I use R = readout_feats[i], the Info term is very large (5668875) at the beginning of training.
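One possible explanation, offered as a guess rather than the authors' intent: if the information term is sum-reduced over a [B, C, H, W] feature map, its raw magnitude scales with the number of elements, so values in the millions at initialization are unsurprising; a mean reduction keeps the term comparable across layer sizes. A toy illustration with made-up shapes and names:
import torch
info_map = torch.rand(8, 256, 56, 56)  # stand-in per-element information estimate
print(info_map.sum().item())   # scales with B*C*H*W, easily in the millions
print(info_map.mean().item())  # ~0.5 here, independent of layer size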
It would be awesome to test it out on Google Colab.
Thank you, sir.
Hi, thanks for your great work!
For the quantitative results in your work, I have a question about how the frame pairs from the source and target videos correspond.
(1) Did you randomly select 10 frames from each video, or use the same pairs as FaceShifter?
(2) Could you provide the source-to-target pair indices to allow a fair comparison?
Thanks!
Do you have a plan to release the training code?
Hi, I'm implementing the training code for this paper. Could you share details of the training environment (batch size, GPUs, ...)?
According to the pseudocode "Algorithm 1" in the supp.pdf file, there are 2 pretrained encoders, 3 decoders, and 2 AII generators.
For the encoders and decoders, if I use only one module and call it several times during training, I get this error:
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation
It's OK to copy an encoder, since it is pretrained and does not need to be optimized during training. But for the decoder, I don't think copying it three times is proper: as I understand the algorithm, the three parts should use a decoder with the same parameters.
As for the AII generator, I think I just need two generators with the same structure, just as CycleGAN has two generators, and I can use the Lcyc in line 41 to optimize them together. Is this understanding correct?
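On the decoder question, a note on general PyTorch behavior (not the authors' setup): calling the same nn.Module instance several times in one forward pass is the standard way to share parameters and does not by itself cause that RuntimeError; the error usually comes from an in-place operation (e.g. ReLU(inplace=True) or x += y) modifying a tensor autograd still needs. A minimal sketch with hypothetical shapes:
import torch
import torch.nn as nn
# One decoder instance used three times: gradients from all three calls
# accumulate into the same shared weights.
decoder = nn.Sequential(
    nn.Linear(64, 64),
    nn.ReLU(inplace=False),  # inplace=True is a common source of that RuntimeError
    nn.Linear(64, 3),
)
zs = [torch.randn(4, 64) for _ in range(3)]
loss = sum(decoder(z).pow(2).mean() for z in zs)
loss.backward()  # works: module reuse is fine as long as ops are out-of-place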
In your experimental setup, you describe alpha : beta = 1 : 5 for IIB; does that mean alpha = 1 and beta = 5?
Nice work!
Would it be possible to provide the pretrained face recognition model file model_128_ir_se50.pth?