
heatmaps's Issues

Can it be used with a grey-level (single-channel) image network?

My network is designed for grey-level pictures, so there is only one channel. When I tried to apply this library to my model, it first reported a channel mismatch, so I found the relevant code and changed the channel count to 1:

if K.image_data_format() == 'channels_first':
    img_input = Input(shape=(1, None, None))
else:
    img_input = Input(shape=(None, None, 1))

Running it again, it now reports another problem that I can't understand. Could you please help?

Traceback (most recent call last):
File "seam_heatmap.py", line 93, in
new_model = to_heatmap(model)
File "D:\DeepLearning\Python3.5.2\lib\site-packages\heatmaps\heatmap\heatmap.py", line 258, in to_heatmap
x = add_reshaped_layer(model.layers[index + 2], x, size, atrous_rate=atrous_rate)
File "D:\DeepLearning\Python3.5.2\lib\site-packages\heatmaps\heatmap\heatmap.py", line 211, in add_reshaped_layer
insert_weights(layer, new_layer)
File "D:\DeepLearning\Python3.5.2\lib\site-packages\heatmaps\heatmap\heatmap.py", line 156, in insert_weights
new_W = W.reshape((ax1, ax2, previous_filter, n_filter))
ValueError: cannot reshape array of size 184320 into shape (4,4,64,120)
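
For reference, a minimal sanity check of the mismatch, using only the numbers from the error message above (the 96 below is an inference, not something reported by the library):

import numpy as np

W_size = 184320                 # size of the Dense kernel, from the error
target = (4, 4, 64, 120)        # the shape to_heatmap tried to reshape into
print(int(np.prod(target)))     # 122880, so the reshape cannot work
# Working backwards: 184320 / (4 * 4 * 120) = 96, suggesting the conv block
# before Flatten has 96 filters rather than the 64 that was inferred.
print(W_size // (4 * 4 * 120))  # 96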

Using the code on your own model

Hi,
As I was looking through your code, I don't see where you load a pre-trained model and, most oddly, where you load an image to create the heat-map, which confuses me. The model I trained has nothing to do with cats or dogs; can I still use your code to generate the heat-map? The code I used to train my model is below, and it uses TensorFlow:

from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D
from keras.layers import Activation, Dropout, Flatten, Dense
from keras import backend as K


# dimensions of images.
img_width, img_height = 150, 150

train_data_dir = 'data/train'
validation_data_dir = 'data/validation'
nb_train_samples = 2946
nb_validation_samples = 990
epochs = 50
batch_size = 16

if K.image_data_format() == 'channels_first':
    input_shape = (3, img_width, img_height)
else:
    input_shape = (img_width, img_height, 3)

model = Sequential()
model.add(Conv2D(32, (3, 3), input_shape=input_shape))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Conv2D(32, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Conv2D(64, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Flatten())
model.add(Dense(64))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(1))
model.add(Activation('sigmoid'))

model.compile(loss='binary_crossentropy',
              optimizer='rmsprop',
              metrics=['accuracy'])

# this is the augmentation configuration I use for training
train_datagen = ImageDataGenerator(
    rescale=1. / 255,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True)

# this is the augmentation configuration I use for testing:
# only rescaling
test_datagen = ImageDataGenerator(rescale=1. / 255)

train_generator = train_datagen.flow_from_directory(
    train_data_dir,
    target_size=(img_width, img_height),
    batch_size=batch_size,
    class_mode='binary')

validation_generator = test_datagen.flow_from_directory(
    validation_data_dir,
    target_size=(img_width, img_height),
    batch_size=batch_size,
    class_mode='binary')

model.fit_generator(
    train_generator,
    steps_per_epoch=nb_train_samples // batch_size,
    epochs=epochs,
    validation_data=validation_generator,
    validation_steps=nb_validation_samples // batch_size)

model.save('first_try_model.h5')
model.save_weights('first_try.h5')
model_json=model.to_json()
with open("model.json","w") as json_file:
    json_file.write(model_json)

Any help is most appreciated.
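
For what it's worth, a minimal sketch of how this saved model might be fed to the library, assuming the to_heatmap/display_heatmap usage shown in the custom-model example further down this page:

from keras.models import load_model
from heatmap import to_heatmap

model = load_model('first_try_model.h5')  # the model saved by the script above
new_model = to_heatmap(model)
idx = 0  # index of the class of interest; this sigmoid model has one output
# display_heatmap(new_model, 'some_image.jpg', idx)  # helper shown below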

sigmoid outputs?

Hey, this looks nice! Is there any chance I can get a heatmap for one (or more) sigmoid outputs?
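
For a single sigmoid unit, the converted model should output one channel per spatial position, so extracting the map could be as simple as the following sketch (channels-last assumed, with a stand-in array instead of a real prediction):

import numpy as np

out = np.random.rand(1, 30, 40, 1)   # stand-in for new_model.predict(x)
heatmap = out[0, :, :, 0]            # drop the batch and channel axes
print(heatmap.shape)                 # (30, 40)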

why must you replace softmax with softmax4D?

You replace softmax with softmax4D (MobileNet), but why? What is the difference, and how can I still get the correct prediction from this layer?
With the softmax4D change, the heatmaps for the first two classes turn into the heatmap that the normal softmax layer produced for the 3rd (final) class, and the heatmap of the final class (which is also the prediction in this example) turns into something less logical.
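
For context, a plain softmax layer expects a 2D (batch, classes) tensor, while the heatmap model produces a 4D one; a "softmax over the class axis at every spatial position" can be sketched as follows (a guess at the idea behind softmax4D, not the library's actual implementation):

import numpy as np

def softmax4d(x, channel_axis=-1):
    # Softmax over the class/channel axis, computed independently at each
    # spatial position of a (batch, H, W, classes) tensor.
    e = np.exp(x - x.max(axis=channel_axis, keepdims=True))
    return e / e.sum(axis=channel_axis, keepdims=True)

scores = np.random.rand(1, 7, 7, 3)   # stand-in heatmap logits
probs = softmax4d(scores)
print(probs.sum(axis=-1).round(3))    # all ones: one softmax per pixel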

MemoryError on Raspberry Pi

Hi, so I was playing around with a Raspberry Pi and tried using Keras on it to see how it handles it, which, by the way, was not so well, but it worked. When I tried to run the heatmap demo, however, I always get a memory error, even after reducing the size of the output image, which is probably not what causes the crash in the first place. I looked it up online and the solution seems to be related to batch size; could you point me to where in the package I might find it? The error I receive is below; the same error appears with Python 2:

pi@raspberrypi:~/heatmaps/examples $ python3 demo.py 
/usr/lib/python3/dist-packages/h5py/__init__.py:34: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
  from ._conv import register_converters as _register_converters
Using Theano backend.
Traceback (most recent call last):
  File "demo.py", line 38, in <module>
    model = VGG16()
  File "/home/pi/.local/lib/python3.5/site-packages/keras/applications/vgg16.py", line 146, in VGG16
    x = Dense(4096, activation='relu', name='fc1')(x)
  File "/home/pi/.local/lib/python3.5/site-packages/keras/engine/topology.py", line 590, in __call__
    self.build(input_shapes[0])
  File "/home/pi/.local/lib/python3.5/site-packages/keras/layers/core.py", line 842, in build
    constraint=self.kernel_constraint)
  File "/home/pi/.local/lib/python3.5/site-packages/keras/legacy/interfaces.py", line 91, in wrapper
    return func(*args, **kwargs)
  File "/home/pi/.local/lib/python3.5/site-packages/keras/engine/topology.py", line 414, in add_weight
    constraint=constraint)
  File "/home/pi/.local/lib/python3.5/site-packages/keras/backend/theano_backend.py", line 154, in variable
    strict=False)
  File "/usr/local/lib/python3.5/dist-packages/theano/compile/sharedvalue.py", line 268, in shared
    allow_downcast=allow_downcast, **kwargs)
  File "/usr/local/lib/python3.5/dist-packages/theano/tensor/sharedvar.py", line 54, in tensor_constructor
    value=np.array(value, copy=(not borrow)),
MemoryError: you might consider using 'theano.shared(..., borrow=True)'

Update:

The MemoryError is exclusive to VGG16; any call to VGG16 from any code causes the problem. I will close the issue after your reply, as it does not seem to be related to your package.

I do, however, still have an issue: things work fine with VGG16, but not so much with ResNet50, for which I always receive the following error, even on my usual machine.


Using Theano backend.
Model type detected: local pooling - flatten
Model cut at layer: 173
Pool size infered: 1
Traceback (most recent call last):
  File "demo.py", line 39, in <module>
    new_model = to_heatmap(model)
  File "/home/pi/heatmaps/heatmap/heatmap.py", line 259, in to_heatmap
    x = copy_last_layers(model, index + 3, x)
  File "/home/pi/heatmaps/heatmap/heatmap.py", line 182, in copy_last_layers
    if last_activation == 'softmax':
UnboundLocalError: local variable 'last_activation' referenced before assignment

It also does not work with InceptionV3, as it yields the following error.

Using Theano backend.
Model type detected: global pooling
Model cut at layer: 310
Traceback (most recent call last):
  File "demo.py", line 40, in <module>
    new_model = to_heatmap(model)
  File "/home/abdu/anaconda2/envs/theano/heatmaps/heatmap/heatmap.py", line 239, in to_heatmap
    x = copy_last_layers(model, index + 1, x)
  File "/home/abdu/anaconda2/envs/theano/heatmaps/heatmap/heatmap.py", line 169, in copy_last_layers
    x = add_reshaped_layer(layer, x, 1, no_activation=True)
  File "/home/abdu/anaconda2/envs/theano/heatmaps/heatmap/heatmap.py", line 209, in add_reshaped_layer
    x = new_layer(x)
  File "/home/abdu/anaconda2/envs/theano/lib/python2.7/site-packages/keras/engine/topology.py", line 573, in __call__
    self.assert_input_compatibility(inputs)
  File "/home/abdu/anaconda2/envs/theano/lib/python2.7/site-packages/keras/engine/topology.py", line 472, in assert_input_compatibility
    str(K.ndim(x)))
ValueError: Input 0 is incompatible with layer predictions: expected ndim=4, found ndim=2

Using DenseNet121 and Theano:


Using Theano backend.
Model type detected: global pooling
Model cut at layer: 425
Traceback (most recent call last):
  File "demo.py", line 40, in <module>
    new_model = to_heatmap(model)
  File "/home/abdu/anaconda2/envs/theano/heatmaps/heatmap/heatmap.py", line 239, in to_heatmap
    x = copy_last_layers(model, index + 1, x)
  File "/home/abdu/anaconda2/envs/theano/heatmaps/heatmap/heatmap.py", line 169, in copy_last_layers
    x = add_reshaped_layer(layer, x, 1, no_activation=True)
  File "/home/abdu/anaconda2/envs/theano/heatmaps/heatmap/heatmap.py", line 209, in add_reshaped_layer
    x = new_layer(x)
  File "/home/abdu/anaconda2/envs/theano/lib/python2.7/site-packages/keras/engine/topology.py", line 573, in __call__
    self.assert_input_compatibility(inputs)
  File "/home/abdu/anaconda2/envs/theano/lib/python2.7/site-packages/keras/engine/topology.py", line 472, in assert_input_compatibility
    str(K.ndim(x)))
ValueError: Input 0 is incompatible with layer fc1000: expected ndim=4, found ndim=2

Another update:

I tested some other models using the TensorFlow backend, and I basically receive the same error, shown below; note that the VGG16 model runs with no issue.


Model type detected: global pooling
Traceback (most recent call last):
  File "demo.py", line 39, in <module>
    new_model = to_heatmap(model)
  File "/home/abdu/anaconda2/envs/tensorflowGpu/heatmaps/heatmap/heatmap.py", line 234, in to_heatmap
    x = middle_model(img_input)
  File "/home/abdu/anaconda2/envs/tensorflowGpu/lib/python2.7/site-packages/keras/engine/topology.py", line 617, in __call__
    output = self.call(inputs, **kwargs)
  File "/home/abdu/anaconda2/envs/tensorflowGpu/lib/python2.7/site-packages/keras/engine/topology.py", line 2078, in call
    output_tensors, _, _ = self.run_internal_graph(inputs, masks)
  File "/home/abdu/anaconda2/envs/tensorflowGpu/lib/python2.7/site-packages/keras/engine/topology.py", line 2229, in run_internal_graph
    output_tensors = _to_list(layer.call(computed_tensor, **kwargs))
  File "/home/abdu/anaconda2/envs/tensorflowGpu/lib/python2.7/site-packages/keras/layers/normalization.py", line 185, in call
    self.momentum),
  File "/home/abdu/anaconda2/envs/tensorflowGpu/lib/python2.7/site-packages/keras/backend/tensorflow_backend.py", line 1001, in moving_average_update
    x, value, momentum, zero_debias=True)
  File "/home/abdu/anaconda2/envs/tensorflowGpu/lib/python2.7/site-packages/tensorflow/python/training/moving_averages.py", line 70, in assign_moving_average
    update_delta = _zero_debias(variable, value, decay)
  File "/home/abdu/anaconda2/envs/tensorflowGpu/lib/python2.7/site-packages/tensorflow/python/training/moving_averages.py", line 180, in _zero_debias
    "biased", initializer=biased_initializer, trainable=False)
  File "/home/abdu/anaconda2/envs/tensorflowGpu/lib/python2.7/site-packages/tensorflow/python/ops/variable_scope.py", line 1065, in get_variable
    use_resource=use_resource, custom_getter=custom_getter)
  File "/home/abdu/anaconda2/envs/tensorflowGpu/lib/python2.7/site-packages/tensorflow/python/ops/variable_scope.py", line 962, in get_variable
    use_resource=use_resource, custom_getter=custom_getter)
  File "/home/abdu/anaconda2/envs/tensorflowGpu/lib/python2.7/site-packages/tensorflow/python/ops/variable_scope.py", line 367, in get_variable
    validate_shape=validate_shape, use_resource=use_resource)
  File "/home/abdu/anaconda2/envs/tensorflowGpu/lib/python2.7/site-packages/tensorflow/python/ops/variable_scope.py", line 352, in _true_getter
    use_resource=use_resource)
  File "/home/abdu/anaconda2/envs/tensorflowGpu/lib/python2.7/site-packages/tensorflow/python/ops/variable_scope.py", line 664, in _get_single_variable
    name, "".join(traceback.format_list(tb))))
ValueError: Variable block1_conv1_bn/moving_mean/biased already exists, disallowed. Did you mean to set reuse=True in VarScope? Originally defined at:

  File "/home/abdu/anaconda2/envs/tensorflowGpu/lib/python2.7/site-packages/keras/backend/tensorflow_backend.py", line 1001, in moving_average_update
    x, value, momentum, zero_debias=True)
  File "/home/abdu/anaconda2/envs/tensorflowGpu/lib/python2.7/site-packages/keras/layers/normalization.py", line 185, in call
    self.momentum),
  File "/home/abdu/anaconda2/envs/tensorflowGpu/lib/python2.7/site-packages/keras/engine/topology.py", line 617, in __call__
    output = self.call(inputs, **kwargs)

I don't want to bother you with all these models, but it would be great if I could at least get it to work with either MobileNet or DenseNet121, as they are light models, which is what I need for the Raspberry Pi. Also note how the errors differ by backend: the same error for every model with TensorFlow, but different errors per model with Theano.

Thanks

regarding re-training the VGG19 model

Hi,

Excuse me for opening a new thread regarding the use of the code you posted in that other thread. Thanks.

Right now, I have 200 pairs of images for training. Each training image is black and white, i.e., one channel only. The mask image of each training pair is also of size 256×256, and each of its pixels corresponds to one pixel in the original image.

In the code that sets up the training data set,

X = np.random.uniform(0,256, (2000, 3, 256, 256))
X = preprocess_input(X) # VGG preprocessing
Y = np.around(np.random.uniform(0,1, (2000, 1, 64, 64)))

I have three questions,

  1. Does the VGG19 model accept input with one channel only?
  2. In the code, Y was set up with size 64×64 because of the two pooling layers. However, the mask images are of size 256×256. How can I map these 256×256 mask images to 64×64?
  3. Here you freeze the first two blocks of VGG19 with:
for i in range(7):
    model.layers[i].trainable = False

Just curious: why do we freeze the first two blocks instead of only the first block? Is there a specific reason for this? Or, more generally, if we only want to re-tune the model weights, is it normal practice to change the top layers only? Is this understanding right?

Thank you very much.
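
On question 2, one possible approach (a suggestion, not something from the repository) is to block-average the 256×256 mask by the total pooling factor of 4 and threshold the result:

import numpy as np

mask = np.random.randint(0, 2, (256, 256))       # stand-in for a real mask
blocks = mask.reshape(64, 4, 64, 4).mean(axis=(1, 3))
small_mask = np.around(blocks)                   # back to {0, 1}
print(small_mask.shape)                          # (64, 64)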

AttributeError: 'Model' object has no attribute 'name'

Hi,

I tried your code, but it didn't work:

Model type detected: local pooling - flatten
Traceback (most recent call last):
  File "D:/Uni/Neuro/AttemptTwo/code/display_a_heatmap.py", line 45, in <module>
    new_model = to_heatmap(my_custom_model)
  File "d:\uni\neuro\attempttwo\venv\heatmaps-master\heatmaps-master\heatmap\heatmap.py", line 233, in to_heatmap
    middle_model = Model(inputs=model.layers[1].input, outputs=model.layers[index - 1].output)
  File "D:\Uni\Neuro\AttemptTwo\venv\lib\site-packages\keras\legacy\interfaces.py", line 91, in wrapper
    return func(*args, **kwargs)
  File "D:\Uni\Neuro\AttemptTwo\venv\lib\site-packages\keras\engine\network.py", line 91, in __init__
    self._init_graph_network(*args, **kwargs)
  File "D:\Uni\Neuro\AttemptTwo\venv\lib\site-packages\keras\engine\network.py", line 183, in _init_graph_network
    'The tensor that caused the issue was: ' +
AttributeError: 'Model' object has no attribute 'name'

my model

def create_model():


    model = Sequential()
    model.add(Convolution2D(96, 11, 11,
                            border_mode='valid', subsample=(4, 4),
                            input_shape=(image_size, image_size, 3) ) )
    model.add(Activation('relu'))
    model.add(MaxPooling2D((3, 3), strides=(2, 2)))

    model.add(Convolution2D(128, 5, 5, border_mode='same'))
    model.add(Activation('relu'))
    model.add(MaxPooling2D((3, 3), strides=(2, 2)))

    model.add(Convolution2D(150, 3, 3, border_mode='valid'))
    model.add(Activation('relu'))


    model.add(Flatten())
    model.add(Dense(2048))
    model.add(Activation('relu'))
    model.add(Dropout(0.5))

    model.add(Dense(1024))
    model.add(Activation('relu'))
    model.add(Dropout(0.5))

    model.add(Dense(2))
    model.add(Activation('linear'))

    model.compile(loss='mean_squared_error', optimizer=Adadelta())
    return model

my code

import matplotlib.pyplot as plt
import numpy as np
from keras.applications.imagenet_utils import preprocess_input
from keras.models import load_model
from keras.preprocessing import image
from keras import backend as K

from heatmap import to_heatmap, synset_to_dfs_ids


def display_heatmap(new_model, img_path, ids, preprocessing=None):
    # The quality is reduced.
    # If you have more than 8GB of RAM, you can try to increase it.
    img = image.load_img(img_path, target_size=(277, 277))
    x = image.img_to_array(img)
    x = np.expand_dims(x, axis=0)
    if preprocessing is not None:
        x = preprocessing(x)  # apply the preprocessing function passed in

    out = new_model.predict(x)

    heatmap = out[0]  # Removing batch axis.

    if K.image_data_format() == 'channels_first':
        heatmap = heatmap[ids]
        if heatmap.ndim == 3:
            heatmap = np.sum(heatmap, axis=0)
    else:
        heatmap = heatmap[:, :, ids]
        if heatmap.ndim == 3:
            heatmap = np.sum(heatmap, axis=2)

    plt.imshow(heatmap, interpolation="none")
    plt.show()


# model = VGG16()
# new_model = to_heatmap(model)
#
# s = "n02084071"  # Imagenet code for "dog"
# ids = synset_to_dfs_ids(s)
# display_heatmap(new_model, "./dog.jpg", ids, preprocess_input)

my_custom_model = load_model('saved_instance.hdf5')
new_model = to_heatmap(my_custom_model)
idx = 0  # The index of the class you care about, here the first one.
display_heatmap(new_model, "./input images/_MG_2881.jpg", idx)

regarding using a dataset from a different domain

Hi,

Thanks for sharing the heatmap code.

If the studied data set is from a totally different domain than the one used to train the VGG model, can I still use your code's pipeline? Do I have to re-train the model? If so, what's the correct procedure?

Thanks a lot for your suggestions.

Reta

Unexpected Type Error while using a custom model

  File "demo.py", line 63, in <module>
    new_model = to_heatmap(loaded_model)
  File "/home/hashed/PoshaQ/testing/heatmap/heatmaps/heatmap/heatmap.py", line 271, in to_heatmap
    last_activation=last_activation)
  File "/home/hashed/PoshaQ/testing/heatmap/heatmaps/heatmap/heatmap.py", line 185, in copy_last_layers
    x = add_to_model(x, layer)
  File "/home/hashed/PoshaQ/testing/heatmap/heatmaps/heatmap/heatmap.py", line 107, in add_to_model
    x = new_layer(x)
  File "/home/hashed/.local/lib/python2.7/site-packages/keras/engine/base_layer.py", line 431, in __call__
    self.build(unpack_singleton(input_shapes))
  File "/home/hashed/.local/lib/python2.7/site-packages/keras/layers/advanced_activations.py", line 119, in build
    constraint=self.alpha_constraint)
  File "/home/hashed/.local/lib/python2.7/site-packages/keras/legacy/interfaces.py", line 91, in wrapper
    return func(*args, **kwargs)
  File "/home/hashed/.local/lib/python2.7/site-packages/keras/engine/base_layer.py", line 249, in add_weight
    weight = K.variable(initializer(shape),
  File "/home/hashed/.local/lib/python2.7/site-packages/keras/initializers.py", line 38, in __call__
    return K.constant(0, shape=shape, dtype=dtype)
  File "/home/hashed/.local/lib/python2.7/site-packages/keras/backend/tensorflow_backend.py", line 428, in constant
    return tf.constant(value, dtype=dtype, shape=shape, name=name)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/constant_op.py", line 214, in constant
    value, dtype=dtype, shape=shape, verify_shape=verify_shape))
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/tensor_util.py", line 429, in make_tensor_proto
    if shape is not None and np.prod(shape, dtype=np.int64) == 0:
  File "/home/hashed/.local/lib/python2.7/site-packages/numpy/core/fromnumeric.py", line 2585, in prod
    initial=initial)
  File "/home/hashed/.local/lib/python2.7/site-packages/numpy/core/fromnumeric.py", line 83, in _wrapreduction
    return ufunc.reduce(obj, axis, dtype, out, **passkwargs)
TypeError: long() argument must be a string or a number, not 'NoneType' 

I'm unable to debug where the issue is. I've used my own model; there seems to be some issue in the add_to_model function.
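
The traceback suggests an advanced activation layer (for example PReLU) is being rebuilt on an input whose spatial dimensions are None, so its per-element alpha weights end up with an undefined shape. If the model does use PReLU, sharing alpha across the spatial axes is one thing worth trying (an assumption, not a confirmed fix):

from keras.layers import PReLU

# A PReLU whose alpha is shared over height and width: its weight shape then
# depends only on the channel count and stays defined even when the spatial
# dimensions of the heatmap input are None.
shared_prelu = PReLU(shared_axes=[1, 2])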

Issues with target size

Hi,
I am not sure whether I understand the target-size part of the code correctly.


    # The quality is reduced.
    # If you have more than 8GB of RAM, you can try to increase it.
    img = image.load_img(img_path, target_size=(800, 1280))

My thought was that this is the size of the output image, the heat-map. That does not appear to be the case, however, as the output is always 640×480 pixels regardless of the input image size or how I change the numbers. Hence my question: is there a way to get the heat-map to have the same dimensions as the input image, with no distortion?

I am looking forward to hearing back from you. Thanks.
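
As a workaround, the computed heatmap can be resized back to the input image's dimensions after prediction; a minimal sketch with scipy (an addition of mine, not part of the package):

import numpy as np
from scipy.ndimage import zoom

heatmap = np.random.rand(25, 40)   # stand-in for the model's output map
img_h, img_w = 480, 768            # dimensions of the original image
resized = zoom(heatmap,
               (img_h / heatmap.shape[0], img_w / heatmap.shape[1]),
               order=1)            # order=1: (bi)linear interpolation
print(resized.shape)               # (480, 768)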

KeyError: 'strides' with a custom model

I implemented a binary classifier using DenseNet as a base, with some custom layers at the end of the model. I'm using the following code, which gives me KeyError: 'strides':

model = load_model('densenet_500')
new_model = to_heatmap(model)
idx = 0
display_heatmap(new_model,TEST_DATA[0],idx)

The error comes from to_heatmap():

244 layer = model.layers[index]
245 dic = layer.get_config()
--> 246 atrous_rate = dic["strides"][0]
247 dic["strides"] = (1, 1)
248 new_pool = from_config(layer, dic)
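
A quick way to see which layer sits at the cut index and whether its config actually carries a strides entry (a diagnostic sketch reusing the model loaded above; in DenseNet the detected pooling layer may be, for instance, a GlobalAveragePooling2D, whose config has no such key):

for i, layer in enumerate(model.layers):
    cfg = layer.get_config()
    print(i, layer.__class__.__name__, cfg.get('strides'))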

Update to Keras 2

Hi, thanks for this nice work! It seems this needs to be ported to Keras 2. The first thing I saw is the replacement of conf["output_dim"] with layer.output_shape. I will see if I can do the rest, but it would probably be nice if you could take a look as well. Thanks.
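
For what it's worth, in Keras 2 a Dense layer's width lives under "units" in its config, so a port could read it like this (a sketch under that assumption):

from keras.layers import Dense
from keras.models import Sequential

model = Sequential([Dense(10, input_shape=(4,))])
layer = model.layers[0]
print(layer.get_config()['units'])   # 10; Keras 1 called this 'output_dim'
print(layer.output_shape[-1])        # 10; equivalent for a built layer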

Example not working: NameError: name 'preprocess_input' is not defined

I tried to run your code found on the 'front' page.
Things started well:

Model type detected: local pooling - flatten
Model cut at layer: 18
Pool size infered: 7

But then I got an error:

--------------------------------------------------
NameError        Traceback (most recent call last)
<ipython-input-3-1c2ccf5b77eb> in <module>()
     27 model = VGG16()
     28 new_model = to_heatmap(model)
---> 29 display_heatmap(new_model, "pics/dog.jpg")

<ipython-input-3-1c2ccf5b77eb> in display_heatmap(new_model, img_path)
     15     x = image.img_to_array(img)
     16     x = np.expand_dims(x, axis=0)
---> 17     x = preprocess_input(x)
     18 
     19     out = new_model.predict(x)

NameError: name 'preprocess_input' is not defined

Can you help me with this? I am using Theano 0.9 and Keras 1.2.2.
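
The name simply isn't imported in that snippet; the custom-model example further down this page pulls it in like so:

from keras.applications.imagenet_utils import preprocess_input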

Error using own model with non-square images

Hey there,

I use the following model:

def model(train_generator, validation_generator):
    
    training_steps = training_samples // batch_size
    validation_steps = validation_samples // batch_size
    
    input_shape = (img_width, img_height, 3)
    
    model = Sequential()
    model.add(Conv2D(16, (3, 3), input_shape=input_shape))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    
    model.add(Conv2D(32, (3, 3)))
    model.add(MaxPooling2D(pool_size=(2, 2)))

    model.add(Conv2D(64, (3, 3)))
    model.add(MaxPooling2D(pool_size=(2, 2)))

    model.add(Flatten())
    model.add(Dense(64))
    model.add(Dense(64))
    #model.add(Activation('relu'))
    #model.add(Dropout({{uniform(0, 1)}}))
    model.add(Dense(nb_classes, activation='sigmoid'))
    #model.add(Activation('sigmoid'))
    model.compile(loss=['categorical_crossentropy'],
                  optimizer='adam',
                  metrics=['accuracy'])

    model.fit_generator(
        train_generator,
        steps_per_epoch=training_steps,
        epochs=epochs,
        validation_data=validation_generator,
        validation_steps=validation_steps)

    score, acc = model.evaluate_generator(validation_generator, steps=validation_steps)
    #...
    return model

My img_width and img_height are defined as img_width, img_height = 320, 180. If I try to use the example code, I get an error.

new_model = to_heatmap(model, input_shape=(320, 180, 3))

Error:


---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-9-daa853f5145e> in <module>()
     31 
     32 model = model.model
---> 33 new_model = to_heatmap(model, input_shape=(320, 180, 3))
     34 display_heatmap(new_model, "data/test.jpg", 2)

heatmap\heatmap.py in to_heatmap(model, input_shape)
    254         # If not the last layer of the model.
    255         if index + 2 != len(model.layers) - 1:
--> 256             x = add_reshaped_layer(model.layers[index + 2], x, size, atrous_rate=atrous_rate)
    257         else:
    258             x = add_reshaped_layer(model.layers[index + 2], x, size, atrous_rate=atrous_rate,

heatmap\heatmap.py in add_reshaped_layer(layer, x, size, no_activation, atrous_rate)
    210     x = new_layer(x)
    211     # We transfer the weights:
--> 212     insert_weights(layer, new_layer)
    213     return x
    214 

heatmap\heatmap.py in insert_weights(layer, new_layer)
    155     W, b = layer.get_weights()
    156     ax1, ax2, previous_filter, n_filter = new_layer.get_weights()[0].shape
--> 157     new_W = W.reshape((ax1, ax2, previous_filter, n_filter))
    158     new_W = new_W.transpose((0, 1, 2, 3))
    159 

ValueError: cannot reshape array of size 3112960 into shape (20,20,64,64)

I think the reshape is somewhat wrong. If I look at the summary of the model, I get:

_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d_1_input (InputLayer)  (None, 320, 180, 3)       0         
_________________________________________________________________
conv2d_1 (Conv2D)            (None, 318, 178, 16)      448       
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 159, 89, 16)       0         
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 157, 87, 32)       4640      
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 78, 43, 32)        0         
_________________________________________________________________
conv2d_3 (Conv2D)            (None, 76, 41, 64)        18496     
_________________________________________________________________
max_pooling2d_3 (MaxPooling2 (None, 38, 20, 64)        0         
_________________________________________________________________
flatten_1 (Flatten)          (None, 48640)             0         
_________________________________________________________________
dense_1 (Dense)              (None, 64)                3113024   
_________________________________________________________________
dense_2 (Dense)              (None, 64)                4160      
_________________________________________________________________
dense_3 (Dense)              (None, 10)                650       
=================================================================
Total params: 3,141,418
Trainable params: 3,141,418
Non-trainable params: 0
_________________________________________________________________

If I calculate 20 × 20 × 64 × 64, this is not 3112960. However, looking at the network summary, one can see that the size is in fact 38 × 20 × 64 × 64, which is 3112960. I had a short look at the source code, and it seems to respect the different width and height:

ax1, ax2, previous_filter, n_filter = new_layer.get_weights()[0].shape
new_W = W.reshape((ax1, ax2, previous_filter, n_filter))

If I use a square size, e.g. 224 × 224, it works flawlessly.
Am I overlooking something, or is this a bug?
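
For what it's worth, reshaping with the true spatial extents from the summary does line up, which supports the bug hypothesis (numbers copied from the summary above):

import numpy as np

print(38 * 20 * 64 * 64)   # 3112960: kernel shape (38, 20, 64, 64) would fit
print(20 * 20 * 64 * 64)   # 1638400: the square shape the library tried
W = np.zeros(3112960)
print(W.reshape((38, 20, 64, 64)).shape)   # works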

Best
hija
