Comments (3)
The difference between AdaIN and IN is that the affine parameters in AdaIN are data-adaptive (different test data have different affine parameters), while the affine parameters in IN are fixed (fixed once the training data have been fitted). In the case of MUNIT, the affine parameters of AdaIN come from the style code via the decoding operation of the MLP.
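For illustration, here is a minimal sketch of that idea in plain PyTorch, not MUNIT's exact code: a style code is decoded by an MLP into a per-channel scale and bias, which stand in for IN's fixed affine parameters. The hidden size and the (1 + gamma) parameterization are assumptions of this sketch.

```python
import torch
import torch.nn as nn

class AdaIN2d(nn.Module):
    """Instance normalization whose affine parameters are produced
    per sample by an MLP from a style code (the AdaIN idea)."""
    def __init__(self, num_features, style_dim, hidden_dim=256):
        super().__init__()
        # No learned affine parameters here: gamma/beta come from the style code.
        self.norm = nn.InstanceNorm2d(num_features, affine=False)
        self.mlp = nn.Sequential(
            nn.Linear(style_dim, hidden_dim),
            nn.ReLU(inplace=True),
            nn.Linear(hidden_dim, 2 * num_features),
        )

    def forward(self, x, style):
        # Decode the style code into a per-channel scale and bias.
        gamma, beta = self.mlp(style).chunk(2, dim=1)
        gamma = gamma.view(gamma.size(0), -1, 1, 1)
        beta = beta.view(beta.size(0), -1, 1, 1)
        return (1 + gamma) * self.norm(x) + beta

# Different style codes yield different affine parameters
# for the same content features.
x = torch.randn(2, 64, 32, 32)   # content feature maps
s = torch.randn(2, 8)            # style codes (8-dim, as in MUNIT)
y = AdaIN2d(64, 8)(x, s)
```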
Does the MLP need to be trained?
Thank you very much. I am also puzzled about the learned affine parameters. Is it possible to use the AdaInGen proposed in MUNIT to replace the network in "Arbitrary Style Transfer in Real-Time"? Namely, the content image is fed into the content encoder, the style image is fed into the style encoder to learn the mean and variance, and then the decoder synthesizes a stylized image with the content of the content image and the style of the style image. Does this still achieve arbitrary style transfer?
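For reference, the forward pass being proposed in this question would look roughly as follows. The module names are hypothetical stand-ins for MUNIT's content encoder, style encoder, MLP, and AdaIN decoder; this is a sketch of the wiring only, not a claim that it reproduces the results of "Arbitrary Style Transfer in Real-Time".

```python
def stylize(content_img, style_img,
            content_encoder, style_encoder, mlp, decoder):
    """Hypothetical pipeline: all four modules are assumed to be
    built and trained elsewhere, following MUNIT's AdaInGen structure."""
    c = content_encoder(content_img)   # spatial content features
    s = style_encoder(style_img)       # style code of the style image
    params = mlp(s)                    # decoded into AdaIN affine parameters
    return decoder(c, params)          # AdaIN layers in the decoder use them
```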
Related Issues (20)
- How to implement MUNIT with K Fold Cross Validation?
- Question about Multi-GPU training on single machine HOT 4
- Can you use a non-standardised dataset for training?
- Checkpoint image examples?
- Can I train on a grayscale image set?
- PyTorch version >= 1.0: how to load the VGG16 pretrained weights in VGG16.T7? HOT 1
- Experiment of using Instance Normalization vs Layer Normalization on Decoder HOT 8
- AdaptiveInstanceNorm2d and LayerNorm misunderstanding
- May I ask whether everyone gets "Warning: NaN or Inf found in input tensor." during runtime? HOT 1
- Questions about batch_size and GPU memory usage
- Loss is NaN
- When are you planning to make it public?
- After F(x)+x, the ResBlock does not seem to be followed by a non-linear activation HOT 1
- Ideas to speed up training phase
- Missing max pooling layer in the VGG16 network structure
- outputs = (outputs + 1) / 2. in test code HOT 1
- inception checkpoint? HOT 3
- Ŕ
- summer2winter_yosemite checkpoint?
- Pretrained Inception Network HOT 1