
fast-neural-style's Introduction

fast-neural-style

July 2018 Update:

  1. Upgrade to PyTorch 0.4.0.
  2. Minor Refactor.
  3. Allow assigning an independent style weight for each layer.
  4. Provide a Dockerfile.

The code for the first blog post about this project can be found under the tag 201707.

Stylize Script Usage Example

python stylize.py models/model_rain_princess_cropped.pth content_images/pic.jpg pic-512.jpg --resize=512

Old README

This personal fun project is heavily based on abhiskk/fast-neural-style, with some changes and an added video-generation notebook/script:

  1. Use the official pre-trained VGG model
  2. Output intermediate results during training
  3. Add Total Variation Regularization as described in the paper

The model uses the method described in Perceptual Losses for Real-Time Style Transfer and Super-Resolution along with Instance Normalization.
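As a rough illustration of the Total Variation regularization mentioned above, here is a minimal sketch of the common anisotropic formulation; the exact variant and weight used in this repository may differ:

import torch

def total_variation_loss(y, weight=1e-6):
    """Total variation regularizer for a batch of images y of shape (B, C, H, W).

    Penalizes absolute differences between vertically and horizontally
    adjacent pixels, encouraging spatially smooth outputs.
    """
    tv_h = torch.abs(y[:, :, 1:, :] - y[:, :, :-1, :]).sum()  # vertical neighbors
    tv_w = torch.abs(y[:, :, :, 1:] - y[:, :, :, :-1]).sum()  # horizontal neighbors
    return weight * (tv_h + tv_w)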

fast-neural-style's People

Contributors

ceshine


fast-neural-style's Issues

password

When I open the code with Jupyter Notebook, it asks for a password?

Why not divide the size ?

In the original paper and the most-starred TensorFlow implementation, the content loss and style loss are divided by the feature map size (C, H, W).

Why don't you do this here?
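For reference, dividing by the feature map size amounts to averaging the squared error over the activations rather than summing it. A minimal sketch of a size-normalized content loss (an illustration, not this repository's implementation), assuming features_y and features_c are VGG activations of shape (B, C, H, W):

import torch

def content_loss(features_y, features_c):
    """Content loss divided by the feature map size, as in the paper."""
    b, c, h, w = features_y.size()
    # sum of squared differences, normalized by C * H * W
    return torch.sum((features_y - features_c) ** 2) / (c * h * w)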

Computation issue

Hi,
Thanks for sharing the code! I have a question about computation: the perceptual loss seems much heavier to compute than a plain MSE loss. In my experiments, using the perceptual loss makes training much slower and forces a much smaller batch size. Is that normal?
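For context, the perceptual loss requires forwarding the generated image (and, unless precomputed, the target) through the frozen VGG encoder on every training step, while a pixel-wise MSE needs no extra network at all, so slower training and smaller batches are expected. A rough sketch of the difference; the names are illustrative and not taken from this repository:

import torch.nn.functional as F

def pixel_loss(y_hat, y):
    # Plain MSE in pixel space: a single elementwise comparison.
    return F.mse_loss(y_hat, y)

def perceptual_content_loss(vgg, y_hat, y):
    # Both images are pushed through the (frozen) VGG feature extractor;
    # these extra forward passes dominate time and memory.
    return F.mse_loss(vgg(y_hat), vgg(y))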

cannot locate the coco2017 dataset correctly

I've pointed the dataset path to 'K:\downloads\coco2017\train2017',
and there are about 18 GB of images under this folder.

However, the following error is still reported:
"RuntimeError: Found 0 images in subfolders of: K:\downloads\coco2017\train2017
Supported image extensions are: .jpg,.jpeg,.png,.ppm,.bmp,.pgm"

after running this command:
"train_dataset = datasets.ImageFolder(DATASET, transform)"

Is there any solution to this problem?
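For what it's worth, torchvision's datasets.ImageFolder scans one directory level below the given path (each subfolder is treated as a class), so it finds nothing when pointed directly at a flat folder of JPEGs. A common workaround, sketched below with an illustrative transform and the path from the report, is to point it at the parent directory so that train2017 itself becomes the single class folder:

from torchvision import datasets, transforms

transform = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(256),
    transforms.ToTensor(),
])

# Point ImageFolder at the directory *containing* train2017,
# not at train2017 itself.
DATASET = r"K:\downloads\coco2017"
train_dataset = datasets.ImageFolder(DATASET, transform)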

error of transformer network


RuntimeError Traceback (most recent call last)
in ()
20 x = x.cuda()
21
---> 22 y = transformer(x)
23 xc = Variable(x.data, volatile=True)
24

/home/tai/anaconda2/lib/python2.7/site-packages/torch/nn/modules/module.pyc in call(self, *input, **kwargs)
222 for hook in self._forward_pre_hooks.values():
223 hook(self, input)
--> 224 result = self.forward(*input, **kwargs)
225 for hook in self._forward_hooks.values():
226 hook_result = hook(self, input, result)

/home/tai/cvpr18/fast-neural-style/transformer_net.py in forward(self, X)
35 def forward(self, X):
36 in_X = X
---> 37 y = self.relu(self.in1(self.conv1(in_X)))
38 y = self.relu(self.in2(self.conv2(y)))
39 y = self.relu(self.in3(self.conv3(y)))

/home/tai/anaconda2/lib/python2.7/site-packages/torch/nn/modules/module.pyc in call(self, *input, **kwargs)
222 for hook in self._forward_pre_hooks.values():
223 hook(self, input)
--> 224 result = self.forward(*input, **kwargs)
225 for hook in self._forward_hooks.values():
226 hook_result = hook(self, input, result)

/home/tai/cvpr18/fast-neural-style/transformer_net.py in forward(self, x)
131 t = x.view(x.size(0), x.size(1), n)
132 print (t.size())
--> 133 mean = torch.mean(t, 2).unsqueeze(2).expand_as(x)
134 # Calculate the biased var. torch.var returns unbiased var
135 var = torch.var(t, 2).unsqueeze(2).expand_as(x) * ((n - 1) / float(n))

/home/tai/anaconda2/lib/python2.7/site-packages/torch/autograd/variable.pyc in expand_as(self, tensor)
723
724 def expand_as(self, tensor):
--> 725 return Expand.apply(self, (tensor.size(),))
726
727 def t(self):

/home/tai/anaconda2/lib/python2.7/site-packages/torch/autograd/_functions/tensor.pyc in forward(ctx, i, new_size)
109 # tuple containing torch.Size or a list
110 def forward(ctx, i, new_size):
--> 111 result = i.expand(*new_size)
112 ctx.num_unsqueezed = result.dim() - i.dim()
113 ctx.expanded_dims = [dim for dim, (expanded, original)

RuntimeError: The expanded size of the tensor (256) must match the existing size (32) at non-singleton dimension 2. at /opt/conda/conda-bld/pytorch_1502006348621/work/torch/lib/THC/generic/THCTensor.c:323
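The traceback points at the hand-rolled instance normalization in transformer_net.py: torch.mean(t, 2) drops the reduced dimension on newer PyTorch releases (older versions kept it), so the following unsqueeze/expand_as no longer lines up with the 4-D input. A hedged sketch of a version-tolerant formulation using keepdim=True, not the repository's exact code; on recent PyTorch, nn.InstanceNorm2d can be used directly instead:

import torch

def instance_norm(x, eps=1e-9):
    """Instance normalization for x of shape (B, C, H, W).

    keepdim=True preserves the reduced dimension, so the statistics
    broadcast back over the 4-D input regardless of PyTorch version.
    """
    b, c, h, w = x.size()
    n = h * w
    t = x.view(b, c, n)
    mean = t.mean(2, keepdim=True).unsqueeze(3)                      # (B, C, 1, 1)
    # biased variance (torch.var is unbiased by default)
    var = t.var(2, keepdim=True).unsqueeze(3) * ((n - 1) / float(n))
    return (x - mean) / torch.sqrt(var + eps)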
