
nn_playground's People

Contributors

dingke, edresson, ei-grad, jessicayung, joyfyan

nn_playground's Issues

update to newest version

I am using Keras 2.0 and want to run the QRNN code for sequence classification. The code is written for Keras 1.2, and whenever I edit it, new issues keep popping up. Would you mind updating it to the new version? @DingKe
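
Until the code is officially updated, here is a minimal sketch of the usual Keras 1.x → 2.x porting changes, shown on a toy layer rather than the actual QRNN (the name MinimalDense and its contents are illustrative only): output_dim becomes units, init and W_regularizer become kernel_initializer and kernel_regularizer, and add_weight takes a mandatory name plus a shape keyword.

    from keras import backend as K
    from keras import initializers, regularizers
    from keras.engine.topology import Layer


    class MinimalDense(Layer):
        """Toy Keras 2 layer standing in for the ported QRNN."""

        def __init__(self, units, kernel_initializer='glorot_uniform',
                     kernel_regularizer=None, **kwargs):
            super(MinimalDense, self).__init__(**kwargs)
            self.units = units                                              # Keras 1: output_dim
            self.kernel_initializer = initializers.get(kernel_initializer)  # Keras 1: init
            self.kernel_regularizer = regularizers.get(kernel_regularizer)  # Keras 1: W_regularizer

        def build(self, input_shape):
            # Keras 1 style was: self.W = self.init((input_dim, self.output_dim), name=...)
            self.kernel = self.add_weight(name='kernel',
                                          shape=(input_shape[-1], self.units),
                                          initializer=self.kernel_initializer,
                                          regularizer=self.kernel_regularizer)
            self.built = True

        def call(self, inputs):
            return K.dot(inputs, self.kernel)

        def compute_output_shape(self, input_shape):
            return input_shape[:-1] + (self.units,)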

Unused code block in qrnn.py

One last potential bug: it doesn't look like the code in lines 201-205 is ever used, because initial_states is always reassigned.

        if initial_state is not None:
            if not isinstance(initial_state, (list, tuple)):
                initial_states = [initial_state]
            else:
                initial_states = list(initial_state)
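
For context, a hypothetical arrangement in which that guard would actually matter, reduced to a self-contained helper (resolve_initial_states is an illustrative name, not code from qrnn.py): the fallback runs only when no state is supplied, so the assignment above is never clobbered.

    def resolve_initial_states(initial_state, default_states):
        """Hypothetical helper showing the intended precedence: use the
        caller-supplied state(s) when given, otherwise fall back to defaults."""
        if initial_state is not None:
            if not isinstance(initial_state, (list, tuple)):
                return [initial_state]
            return list(initial_state)
        return default_states


    # Example: a supplied state wins over the default zero state.
    print(resolve_initial_states(None, ['zeros']))   # ['zeros']
    print(resolve_initial_states('h0', ['zeros']))   # ['h0']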

ValueError: None values not supported.

I'm getting "ValueError: None values not supported" when using this QuasiRNN implementation with Keras 2.0.5.

The following is my code:
    import numpy as np
    import pandas as pd
    import tensorflow as tf
    from keras.models import Sequential
    from keras.layers import Dense
    from qrnn import QRNN  # assumed import: the QRNN layer from this repository's qrnn.py

    def qrnn_model(nb_classes=13):
        model = Sequential()
        model.add(QRNN(128, window_size=3, dropout=0.2, batch_input_shape=(166, 200, 4)))
        model.add(Dense(nb_classes, activation='softmax'))
        return model

    def train(X_train, X_test, y_train, y_test, model, model_name, **kwargs):
        with tf.device('/gpu:1'):
            model.compile(loss='categorical_crossentropy',
                          optimizer='adam',
                          metrics=['accuracy'])
            X_train = X_train.reshape((X_train.shape[0], X_train.shape[1], X_train.shape[2]))
            print("Type of input", type(X_train))
            print("These many NaNs", np.count_nonzero(pd.isnull(X_train)))
            model.fit(X_train, y_train, batch_size=166, validation_split=0.125, epochs=100, verbose=1)
            model.save("divyansh_%s.h5" % model_name)
            score = model.evaluate(X_test, y_test, verbose=0)
            print('Test score:', score[0])
            print('Test accuracy:', score[1])

    f = np.load('all_data_just_a_new.npz')
    X_train, X_test, y_train, y_test = f['arr_0'], f['arr_1'], f['arr_2'], f['arr_3']
    model = qrnn_model()
    train(X_train, X_test, y_train, y_test, model, 'qrnn_model')

ternary_tanh

Hi, could you point to the article that describes using ternary_tanh as an activation function? I did not find any reference to it in the articles cited in the code comments. Thank you.
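
For readers unfamiliar with the operation itself, a rough sketch of what a ternary tanh activation is commonly taken to mean, namely a hard tanh whose output is quantized to {-1, 0, +1}; this is an assumption about the intent, not a description of this repository's implementation or of its source paper, and the threshold value is arbitrary.

    import numpy as np


    def ternary_tanh(x, threshold=0.5):
        """Hypothetical ternary activation: hard tanh followed by
        quantization of the result to {-1, 0, +1}."""
        x = np.clip(x, -1.0, 1.0)                  # hard tanh
        return np.where(x > threshold, 1.0,        # quantize
                        np.where(x < -threshold, -1.0, 0.0))


    print(ternary_tanh(np.array([-2.0, -0.3, 0.1, 0.9])))  # [-1.  0.  0.  1.]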

Error in running imdb_lm_gcnn.py

Hi,

I'm trying to run imdb_lm_gcnn.py to test your demo.
I'm using Keras 1.2.0 with TensorFlow 0.10.0 on Ubuntu 14.04, but I got the following error log:

    /usr/local/lib/python3.4/dist-packages/keras/engine/topology.py:368: UserWarning: The regularizers property of layers/models is deprecated. Regularization losses are now managed via the losses layer/model property.
      warnings.warn('The regularizers property of '
    Traceback (most recent call last):
      File "imdb_lm_gcnn.py", line 69, in <module>
        run_demo()
      File "imdb_lm_gcnn.py", line 65, in run_demo
        train_model()
      File "imdb_lm_gcnn.py", line 43, in train_model
        loss='sparse_categorical_crossentropy')
      File "/usr/local/lib/python3.4/dist-packages/keras/engine/training.py", line 619, in compile
        sample_weight, mask)
      File "/usr/local/lib/python3.4/dist-packages/keras/engine/training.py", line 307, in weighted
        score_array = fn(y_true, y_pred)
      File "/usr/local/lib/python3.4/dist-packages/keras/objectives.py", line 45, in sparse_categorical_crossentropy
        return K.sparse_categorical_crossentropy(y_pred, y_true)
      File "/usr/local/lib/python3.4/dist-packages/keras/backend/tensorflow_backend.py", line 1993, in sparse_categorical_crossentropy
        return tf.reshape(res, [-1, int(output_shape[-2])])
    TypeError: int returned non-int (type NoneType)

Do you have any ideas to fix it?
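
Not the author's answer, but a hedged illustration of one common cause of this particular TypeError: int(output_shape[-2]) fails when the model's time dimension is left unspecified (None). Giving the model a fixed sequence length, as in the toy language-model snippet below (layer choices and sizes are assumptions, not those of imdb_lm_gcnn.py), makes output_shape[-2] a concrete integer.

    from keras.models import Sequential
    from keras.layers import Embedding, TimeDistributed, Dense

    maxlen = 80          # assumed fixed sequence length
    vocab_size = 20000   # assumed vocabulary size

    model = Sequential()
    # input_length pins the time axis, so downstream output shapes are fully defined
    model.add(Embedding(vocab_size, 128, input_length=maxlen))
    model.add(TimeDistributed(Dense(vocab_size, activation='softmax')))
    # with a known time dimension, sparse_categorical_crossentropy can reshape its output
    model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
    print(model.output_shape)   # (None, 80, 20000): no None in the time axis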

use_bias = True Error in binary_layers.py

If I set use_bias = True in binarynet or xnornet, these vars are not defined:
self.output_dim
self.bias_initializer
self.bias_regularizer
self.bias_constraint

According to the source code in keras/layers/convolutional.py, I modified the code as follows:

from keras import regularizers
class BinaryDense(Dense):
    def __init__(self, units, H=1., kernel_lr_multiplier='Glorot', bias_lr_multiplier=None,
                 bias_initializer='zeros', bias_regularizer=None, bias_constraint=None,
                 **kwargs):
        super(BinaryDense, self).__init__(units, **kwargs)
        self.H = H
        self.kernel_lr_multiplier = kernel_lr_multiplier
        self.bias_lr_multiplier = bias_lr_multiplier
        self.bias_initializer = initializers.get(bias_initializer)
        self.bias_regularizer = regularizers.get(bias_regularizer)
        self.bias_constraint = constraints.get(bias_constraint)
        ......
    def build(self, input_shape):
        assert len(input_shape) >= 2
        input_dim = input_shape[1]
        self.output_dim = self.units
       ......
    def get_config(self):
        config = {'H': self.H,
                  'kernel_lr_multiplier': self.kernel_lr_multiplier,
                  'bias_lr_multiplier': self.bias_lr_multiplier,
                  'bias_initializer': initializers.serialize(self.bias_initializer),
                  'bias_regularizer': regularizers.serialize(self.bias_regularizer),
                  'bias_constraint': constraints.serialize(self.bias_constraint)}
        base_config = super(BinaryDense, self).get_config()
        return dict(list(base_config.items()) + list(config.items()))

The same applies to class BinaryConv2D, except there you set self.output_dim = self.filters.

Unable to load binarynet model after saving

Hi

I am using mnist_mlp.py, and after saving the model with model.save I get the following error when I try to load it.
Code

    from keras.models import load_model

    model.save('my_model.h5')

    model = load_model('my_model.h5',
                       custom_objects={'DropoutNoScale': DropoutNoScale,
                                       'BinaryDense': BinaryDense,
                                       'Clip': Clip})

Error

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-30-78c6bac33446> in <module>()
      2 # identical to the previous one
      3 model = load_model('my_model.h5', custom_objects={'DropoutNoScale': DropoutNoScale,'BinaryDense':BinaryDense,
----> 4                                                                  'Clip':Clip})

/opt/dl/anaconda/envs/chs_keras/lib/python3.5/site-packages/keras/models.py in load_model(filepath, custom_objects, compile)
    231             raise ValueError('No model found in config file.')
    232         model_config = json.loads(model_config.decode('utf-8'))
--> 233         model = model_from_config(model_config, custom_objects=custom_objects)
    234 
    235         # set weights

/opt/dl/anaconda/envs/chs_keras/lib/python3.5/site-packages/keras/models.py in model_from_config(config, custom_objects)
    305                         'Maybe you meant to use '
    306                         '`Sequential.from_config(config)`?')
--> 307     return layer_module.deserialize(config, custom_objects=custom_objects)
    308 
    309 

/opt/dl/anaconda/envs/chs_keras/lib/python3.5/site-packages/keras/layers/__init__.py in deserialize(config, custom_objects)
     52                                     module_objects=globs,
     53                                     custom_objects=custom_objects,
---> 54                                     printable_module_name='layer')

/opt/dl/anaconda/envs/chs_keras/lib/python3.5/site-packages/keras/utils/generic_utils.py in deserialize_keras_object(identifier, module_objects, custom_objects, printable_module_name)
    137                 return cls.from_config(config['config'],
    138                                        custom_objects=dict(list(_GLOBAL_CUSTOM_OBJECTS.items()) +
--> 139                                                            list(custom_objects.items())))
    140             with CustomObjectScope(custom_objects):
    141                 return cls.from_config(config['config'])

/opt/dl/anaconda/envs/chs_keras/lib/python3.5/site-packages/keras/models.py in from_config(cls, config, custom_objects)
   1207         model = cls()
   1208         for conf in config:
-> 1209             layer = layer_module.deserialize(conf, custom_objects=custom_objects)
   1210             model.add(layer)
   1211         return model

/opt/dl/anaconda/envs/chs_keras/lib/python3.5/site-packages/keras/layers/__init__.py in deserialize(config, custom_objects)
     52                                     module_objects=globs,
     53                                     custom_objects=custom_objects,
---> 54                                     printable_module_name='layer')

/opt/dl/anaconda/envs/chs_keras/lib/python3.5/site-packages/keras/utils/generic_utils.py in deserialize_keras_object(identifier, module_objects, custom_objects, printable_module_name)
    139                                                            list(custom_objects.items())))
    140             with CustomObjectScope(custom_objects):
--> 141                 return cls.from_config(config['config'])
    142         else:
    143             # Then `cls` may be a function returning a class.

/opt/dl/anaconda/envs/chs_keras/lib/python3.5/site-packages/keras/engine/topology.py in from_config(cls, config)
   1240             A layer instance.
   1241         """
-> 1242         return cls(**config)
   1243 
   1244     def count_params(self):

~/Code/BinartNet/binary_layers.py in __init__(self, units, H, kernel_lr_multiplier, bias_lr_multiplier, **kwargs)
     35     '''
     36     def __init__(self, units, H=1., kernel_lr_multiplier='Glorot', bias_lr_multiplier=None, **kwargs):
---> 37         super(BinaryDense, self).__init__(units, **kwargs)
     38         self.H = H
     39         self.kernel_lr_multiplier = kernel_lr_multiplier

/opt/dl/anaconda/envs/chs_keras/lib/python3.5/site-packages/keras/legacy/interfaces.py in wrapper(*args, **kwargs)
     85                 warnings.warn('Update your `' + object_name +
     86                               '` call to the Keras 2 API: ' + signature, stacklevel=2)
---> 87             return func(*args, **kwargs)
     88         wrapper._original_function = func
     89         return wrapper

/opt/dl/anaconda/envs/chs_keras/lib/python3.5/site-packages/keras/layers/core.py in __init__(self, units, activation, use_bias, kernel_initializer, bias_initializer, kernel_regularizer, bias_regularizer, activity_regularizer, kernel_constraint, bias_constraint, **kwargs)
    810         self.bias_regularizer = regularizers.get(bias_regularizer)
    811         self.activity_regularizer = regularizers.get(activity_regularizer)
--> 812         self.kernel_constraint = constraints.get(kernel_constraint)
    813         self.bias_constraint = constraints.get(bias_constraint)
    814         self.input_spec = InputSpec(min_ndim=2)

/opt/dl/anaconda/envs/chs_keras/lib/python3.5/site-packages/keras/constraints.py in get(identifier)
    170         return None
    171     if isinstance(identifier, dict):
--> 172         return deserialize(identifier)
    173     elif isinstance(identifier, six.string_types):
    174         config = {'class_name': str(identifier), 'config': {}}

/opt/dl/anaconda/envs/chs_keras/lib/python3.5/site-packages/keras/constraints.py in deserialize(config, custom_objects)
    163                                     module_objects=globals(),
    164                                     custom_objects=custom_objects,
--> 165                                     printable_module_name='constraint')
    166 
    167 

/opt/dl/anaconda/envs/chs_keras/lib/python3.5/site-packages/keras/utils/generic_utils.py in deserialize_keras_object(identifier, module_objects, custom_objects, printable_module_name)
    146             custom_objects = custom_objects or {}
    147             with CustomObjectScope(custom_objects):
--> 148                 return cls(**config['config'])
    149     elif isinstance(identifier, six.string_types):
    150         function_name = identifier

TypeError: __init__() got an unexpected keyword argument 'name'

It looks like it is unable to deserialize the Clip object. What would be the recommended solution here?
Thanks.
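
A hedged sketch of one way to make a Clip constraint survive a save/load round trip (not necessarily the repository's intended fix): have get_config return exactly the keyword arguments that __init__ accepts, so deserialization never passes an unexpected 'name' key.

    from keras import backend as K
    from keras.constraints import Constraint


    class Clip(Constraint):
        """Illustrative weight constraint that clamps weights to a range."""

        def __init__(self, min_value=-1.0, max_value=1.0):
            self.min_value = min_value
            self.max_value = max_value

        def __call__(self, w):
            # clamp the weights into [min_value, max_value]
            return K.clip(w, self.min_value, self.max_value)

        def get_config(self):
            # only keys that __init__ accepts, so cls(**config) works on load
            return {'min_value': self.min_value,
                    'max_value': self.max_value}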

BNN saved model weights

When I use model.save() and model.get_weights(), the saved weight values are not binary.
I'm not sure whether this is my mistake.
Thanks a lot for any reply :)

BinaryConv1D

Hi Ke Ding,

really appreciate your work. I've been trying to write a BinaryConv1D class to work with text data, using your BinaryConv2D as an example, but I always get the error "Input 0 is incompatible with layer binary_conv1d_1: expected ndim=4, found ndim=3".

Could you please give any suggestions on how to write a BinaryConv1D class and if there's anything that needs to be modified in existing classes?
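
Not a full BinaryConv1D, but one possible workaround for the ndim mismatch, sketched here with plain Keras layers (the vocabulary size, sequence length, and embedding width are assumptions): give the sequence a dummy spatial axis so a 2D convolution, such as BinaryConv2D, sees the 4D input it expects, and use a (window, 1) kernel so it effectively convolves only along the time axis.

    from keras.models import Sequential
    from keras.layers import Embedding, Reshape, Conv2D

    maxlen, embed_dim = 100, 64   # assumed values, for illustration only

    model = Sequential()
    model.add(Embedding(10000, embed_dim, input_length=maxlen))
    # (batch, maxlen, embed_dim) -> (batch, maxlen, 1, embed_dim): now ndim=4
    model.add(Reshape((maxlen, 1, embed_dim)))
    # a (3, 1) kernel slides only along the time axis, i.e. behaves like a 1D conv
    model.add(Conv2D(128, (3, 1), padding='valid', activation='relu'))
    model.summary()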

XNOR-Net output size problem

Hello, I've recently been looking at the XNOR-Net part of your GitHub repository and there is something I don't understand that I'd like to ask you about.
I ran your code and found that after the inputs pass through the convolutional layers, the image size does not change; the total parameter count is 13,895,848. I don't quite understand why the image size stays the same after the convolutional layers.
[Two screenshots of the model summary were attached.]
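
Not the author's answer, but a hedged illustration of the most likely explanation: with padding='same' a stride-1 convolution preserves the spatial size, so an unchanged feature-map size after the conv layers is expected. The toy model below (Keras 2 syntax, sizes assumed) contrasts 'same' and 'valid' padding:

    from keras.models import Sequential
    from keras.layers import Conv2D

    model = Sequential()
    model.add(Conv2D(32, (3, 3), padding='same', input_shape=(32, 32, 3)))
    model.add(Conv2D(32, (3, 3), padding='valid'))
    model.summary()
    # conv2d_1 ('same')  -> (None, 32, 32, 32): spatial size unchanged
    # conv2d_2 ('valid') -> (None, 30, 30, 32): shrinks by kernel_size - 1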

GCNN

Hi Ke,
thank you very much for writing and publishing your code! I am loving this!

I am trying to run the GCNN, but was not successful. I first converted it to Keras 1.2.0 by changing the add_weight calls in gcnn.py; see the fixes at the bottom.

But still, I got the message:

AssertionError: Can't store in size_t for the bytes requested 18446744073709551615 * 4

The network looks like this:

    ____________________________________________________________________________________________________
    Layer (type)                     Output Shape          Param #     Connected to
    ====================================================================================================
    embedding_1 (Embedding)          (None, 400, 100)      0           word_input[0][0]
    ____________________________________________________________________________________________________
    gcnn10 (GCNN)                    (None, 400, 30)       54090       embedding_1[0][0]
    ____________________________________________________________________________________________________
    queue_input (InputLayer)         (None, 31)            0
    ____________________________________________________________________________________________________
    flatten_1 (Flatten)              (None, 12000)         0           gcnn10[0][0]
    ____________________________________________________________________________________________________
    dense_1 (Dense)                  (None, 512)           16384       queue_input[0][0]
    ____________________________________________________________________________________________________

Could you be so kind as to look into that?

Thanks again and kind regards
Ernst

Fixes for Keras 1.2.0 in gcnn.py

        self.W_z = self.init(self.W_shape, name='{}_W_z'.format(self.name))
        self.W_f = self.init(self.W_shape, name='{}_W_f'.format(self.name))
        self.W_o = self.init(self.W_shape, name='{}_W_o'.format(self.name))
        self.trainable_weights = [self.W_z, self.W_f, self.W_o]
        self.W = K.concatenate([self.W_z, self.W_f, self.W_o])
        if self.bias:
            #self.b = self.add_weight((self.output_dim * 2,),
            #                         initializer='zero',
            #                         name='{}_b'.format(self.name),
            #                         regularizer=self.b_regularizer,
            #                         constraint=self.b_constraint)
            self.b_z = K.zeros((self.output_dim,), name='{}_b_z'.format(self.name))
            self.b_f = K.zeros((self.output_dim,), name='{}_b_f'.format(self.name))
            self.b_o = K.zeros((self.output_dim,), name='{}_b_o'.format(self.name))
            self.trainable_weights += [self.b_z, self.b_f, self.b_o]
            self.b = K.concatenate([self.b_z, self.b_f, self.b_o])

binarynet weights are not binary after trained

Hi,

I thought we should expect the weights to be binary after training the binary net, is that right? But after running mnist_cnn.py and printing the weights with model.get_weights(), the weights are still floating-point numbers. Is this a bug? Thank you.
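
For context, a hedged note rather than an authoritative answer: in BinaryNet-style training the layer usually keeps full-precision shadow weights and binarizes them on the fly during the forward pass, so model.save() and model.get_weights() return floats by design. If the ±H kernel is needed, the stored values can be binarized afterwards; the sign-based sketch below is an assumption about the scheme, not this repository's exact code.

    import numpy as np


    def binarize(weights, H=1.0):
        """Deterministic sign binarization of a full-precision kernel.
        Maps every entry to +H or -H (zeros go to +H here by convention)."""
        return np.where(weights >= 0, H, -H)


    # Example: the stored (float) kernel vs. the kernel the forward pass would use.
    stored = np.array([[0.37, -0.02], [-1.4, 0.6]])
    print(binarize(stored))
    # [[ 1. -1.]
    #  [-1.  1.]]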
