
gtzan.keras's People

Contributors

dependabot[bot], galloj, hguimaraes


gtzan.keras's Issues

missing evaluate_test ?

Hello, when I run gtzan.py I find that the evaluate_test method is missing from the code. Looking through the issues, I see that someone has already raised this question, but the method still has not been added. Could you add it as soon as you can? Thank you very much.
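
Not the author's implementation, but a minimal sketch of what such a helper could look like, assuming a compiled Keras model and a held-out test set; in gtzan.py it is called as evaluate_test(model, X), so the real signature may differ:

# Hedged sketch only -- not the missing function from this repo.
# Assumes the model was compiled with an accuracy metric and that
# x_test / y_test are shaped like the training arrays.
def evaluate_test(model, x_test, y_test, batch_size=32):
    loss, acc = model.evaluate(x_test, y_test, batch_size=batch_size, verbose=0)
    print("Test loss: {:.4f}, test accuracy: {:.4f}".format(loss, acc))
    return loss, acc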

gtzan.py

Hello,

I do not see gtzan.py here.

module version

Could you mention the versions of the Python modules you used for this repo (scikit-learn, tensorflow, keras, etc.)?
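
In the meantime, one way to record the versions from a working environment is to print them directly (assuming the three packages are importable):

# Print the versions actually installed in the current environment.
import sklearn, tensorflow, keras
print("scikit-learn:", sklearn.__version__)
print("tensorflow:", tensorflow.__version__)
print("keras:", keras.__version__)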

Hello, I have some questions

Hello, I am very interested in your code. I am also a student. Is there a paper behind this project? If so, I would like to read it.

[1.1-custom_cnn_2d.ipynb] model.fit_generator() generates an error.

I set up Python 3.6.4, Conda 3, Tensorflow 2.0, and Keras 2.3.1 on Ubuntu 18.04.
Then I ran "ubuntu18.04$ pip install -r ./gtzan.keras/requirements.txt" to avoid unexpected situations.

First, I modified the statement below in the 1.1-custom_cnn_2d.ipynb file.

def conv_block(x, n_filters, pool_size=(2, 2)):
    x = Conv2D(n_filters, (3, 3), strides=(1, 1), padding='same')(x)
    x = Activation('relu')(x)
    #x = MaxPooling2D(pool_size=pool_size, strides=pool_size)(x)
    # modified: added padding='same' to the pooling layer
    x = MaxPooling2D(pool_size=pool_size, strides=pool_size, padding='same')(x)
    x = Dropout(0.25)(x)
    return x

However, when I run it in the Jupyter notebook environment, I get the error message below at the model.fit_generator() statement. Could you give me a hint on how to fix this issue? Any hint would be helpful. :)

  • code
hist = model.fit_generator(
    train_generator,
    steps_per_epoch=steps_per_epoch,
    validation_data=validation_generator,
    validation_steps=val_steps,
    epochs=150,
    verbose=1,
    callbacks=[reduceLROnPlat]
)
  • debug message
----------------------------------------------------------
IndexError               Traceback (most recent call last)
<ipython-input-146-763fdcb20cdf> in <module>
      6     epochs=150,
      7     verbose=1,
----> 8     callbacks=[reduceLROnPlat]
      9 )

~/anaconda3/envs/python36/lib/python3.6/site-packages/tensorflow_core/python/util/deprecation.py in new_func(*args, **kwargs)
    322               'in a future version' if date is None else ('after %s' % date),
    323               instructions)
--> 324       return func(*args, **kwargs)
    325     return tf_decorator.make_decorator(
    326         func, new_func, 'deprecated',

~/anaconda3/envs/python36/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training.py in fit_generator(self, generator, steps_per_epoch, epochs, verbose, callbacks, validation_data, validation_steps, validation_freq, class_weight, max_queue_size, workers, use_multiprocessing, shuffle, initial_epoch)
   1304                          workers=1,
   1305                          use_multiprocessing=False,
-> 1306                          verbose=0):
   1307     """Evaluates the model on a data generator.
   1308 

~/anaconda3/envs/python36/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, max_queue_size, workers, use_multiprocessing, **kwargs)
    817     _keras_api_gauge.get_cell('evaluate').set(True)
    818     self._assert_compile_was_called()
--> 819     self._check_call_args('evaluate')
    820 
    821     func = self._select_training_loop(x)

~/anaconda3/envs/python36/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_v2.py in fit(self, model, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, max_queue_size, workers, use_multiprocessing, **kwargs)
    233         # TODO(b/139762795): Add step inference for when steps is None to
    234         # prevent end of sequence warning message.
--> 235         steps_per_epoch = training_data_adapter.get_size()
    236 
    237       # tf.print('{} on {} steps.'.format(ModeKeys.TRAIN, steps_per_epoch))

~/anaconda3/envs/python36/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_v2.py in _process_training_inputs(model, x, y, batch_size, epochs, sample_weights, class_weights, steps_per_epoch, validation_split, validation_data, validation_steps, shuffle, distribution_strategy, max_queue_size, workers, use_multiprocessing)
    591         class_weight=class_weights,
    592         batch_size=batch_size,
--> 593         check_steps=False,
    594         steps=steps)
    595   adapter = adapter_cls(

~/anaconda3/envs/python36/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_v2.py in _process_inputs(model, mode, x, y, batch_size, epochs, sample_weights, class_weights, shuffle, steps, distribution_strategy, max_queue_size, workers, use_multiprocessing)

~/anaconda3/envs/python36/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/data_adapter.py in __init__(self, x, y, sample_weights, standardize_function, shuffle, workers, use_multiprocessing, max_queue_size, **kwargs)

~/anaconda3/envs/python36/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/data_adapter.py in __init__(self, x, y, sample_weights, standardize_function, workers, use_multiprocessing, max_queue_size, **kwargs)

~/anaconda3/envs/python36/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/data_adapter.py in _peek_and_restore(x)

<ipython-input-38-647b4e41efc7> in __getitem__(self, index)
     17         # Apply data augmentation
     18         if not self.is_test:
---> 19             signals = self.__augment(signals)
     20         return signals, self.y[index*self.batch_size:(index+1)*self.batch_size]
     21 

<ipython-input-38-647b4e41efc7> in __augment(self, signals, hor_flip, random_cutout)
     34                 cols = np.random.randint(signal.shape[0], size=4)
     35                 signal[lines, :, :] = -80 # dB
---> 36                 signal[:, cols, :] = -80 # dB
     37 
     38             spectrograms.append(signal)

IndexError: index 58 is out of bounds for axis 1 with size 2
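
Not a confirmed fix, but the traceback suggests the random cutout draws cols from signal.shape[0] and then uses them to index axis 1, which only has size 2 here. A hedged guess is to draw each set of indices from the axis it will actually index (and to check whether the spectrogram axes are in the order the generator expects):

# Hedged guess at the __augment fix: sample cutout indices from the axis
# they are used to index (rows from axis 0, columns from axis 1).
lines = np.random.randint(signal.shape[0], size=4)
cols = np.random.randint(signal.shape[1], size=4)
signal[lines, :, :] = -80  # dB
signal[:, cols, :] = -80   # dB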

Model is not saving in models folder

Hello,
My training executes successfully, but afterwards, when I go to test, I don't see any additional files in my models folder. Can you please help me urgently, because this is my final year project?
Thanks in advance.
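
Not the repository's own saving logic, but as a workaround you can save the trained model explicitly once fit() finishes, so the test step has a file to load (the filename below is only an example):

import os

# Hedged workaround: persist the trained model explicitly after training.
os.makedirs("models", exist_ok=True)
model.save("models/trained_model.h5")  # example filename, adjust as needed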

always predicts "classical" on real-world recorded wav

Hi, I just tested your model on a wav recorded with a real-world mic and converted to a 16 kHz sample rate.
I recorded songs from the dataset in a very quiet environment, but whatever genre of music I record, your model always predicts "classical".
It seems the model is badly overfit.
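
Besides overfitting, one thing worth checking is a preprocessing mismatch: GTZAN audio is distributed at 22050 Hz mono, so a 16 kHz recording pushed through the same feature pipeline yields spectrograms the model never saw in training. A minimal hedged check, assuming librosa is used for loading:

import librosa

# Hedged check: load the recording at the sample rate the training data used
# (22050 Hz mono for GTZAN) instead of feeding a 16 kHz file directly.
signal, sr = librosa.load("my_recording.wav", sr=22050, mono=True)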

Value error.

I am getting the ValueError "zero-size array to reduction operation maximum which has no identity".
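
That message usually comes from a max-style reduction over an empty array, often because the dataset path is wrong (so the file list is empty) or a clip decoded to zero samples. A hedged sanity check before feature extraction, with an example path:

import librosa

# Hedged sanity check: make sure the clip actually decoded to samples before
# any spectrogram / np.max style reduction runs on it.
path = "genres/blues/blues.00000.wav"  # example path, adjust to your layout
signal, sr = librosa.load(path, sr=None)
if signal.size == 0:
    raise ValueError("Empty audio signal: {}".format(path))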

Error gtzan.data

Hello, and sorry to disturb you again, but I am getting another error, shown in the attached screenshot ("Capture"). Please help me, as this is my final year project.
Thank you.

Where is evaluate_test?

In gtzan.py, "evaluate_test" is needed for running it in test mode but is missing. Am I missing something, or has it just not been provided?

Train/Test split creation remark

Shouldn't you split the file list into train + test before extracting the samples? You are doing the opposite: extracting all the samples/features and then splitting. This way, some portions of song 1 can land in the train split and other portions of song 1 in the test split, which can lead to an invalid validation score. This doesn't seem right to me.

I've tried splitting the file list as I've suggested, and the model still learns, but the val_loss and val_acc come out much worse than the values you obtain.

What are your thoughts about this?
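
For reference, a hedged sketch of the file-level split described above, where the per-song list is split first and windows/features are extracted afterwards for each side separately; extract_windows is a placeholder name, not a function from this repo:

from sklearn.model_selection import train_test_split

# Split by song first so no portion of one song lands in both splits.
# 'files' and 'labels' are per-song lists; extract_windows stands in for
# whatever per-song feature extraction the notebook performs.
train_files, test_files, y_train_files, y_test_files = train_test_split(
    files, labels, test_size=0.3, stratify=labels, random_state=42)

X_train, y_train = extract_windows(train_files, y_train_files)
X_test, y_test = extract_windows(test_files, y_test_files)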

cannot evaluate the test?

There is an error:
File "gtzan.py", line 115, in main
evaluate_test(model, X)
NameError: name 'evaluate_test' is not defined
kindly update the code.

Many thanks

Error, Model not defined

After cloning and running, I get this error:

Traceback (most recent call last):
  File "gtzan.py", line 131, in <module>
    main(args)
  File "gtzan.py", line 75, in main
    cnn = build_model(input_shape, num_genres)
  File "C:\Users\lewys\PycharmProjects\genre-classification\gtzan.keras\src\gtzan\model.py", line 24, in build_model
    model = Model(inputs=vgg16.input, outputs=top(vgg16.output))

It should be a quick fix: a simple from keras import Model in src\gtzan\model.py.

Maybe you created this on a different version of Keras than the one I have installed? Could you please update the repo with a conda environment.yml file or similar, showing which packages at which versions you have installed? Thanks.
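
For completeness, the one-line change suggested above would look like this at the top of src\gtzan\model.py; if the project is actually built on tf.keras rather than standalone Keras, the tf.keras form would apply instead:

# Suggested fix in src/gtzan/model.py (standalone Keras):
from keras import Model

# Equivalent under tf.keras, if that is what the repo targets:
# from tensorflow.keras import Model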

training error no attribute 'outbound_nodes'

I ran the training but it gives me the error below. I have tensorflow 1.13.1; could this be causing the issue? I had to change the requirements.txt file to get this running. If that may be the problem, I'll consider downgrading. Thanks for any help.

Colocations handled automatically by placer.
Traceback (most recent call last):
  File "gtzan.py", line 131, in <module>
    main(args)
  File "gtzan.py", line 75, in main
    cnn = build_model(input_shape, num_genres)
  File "/Users/matthias/dev/GitHub/gtzan.keras/src/gtzan/model.py", line 16, in build_model
    input_tensor=input_tensor)
  File "/usr/local/lib/python3.7/site-packages/keras_applications/vgg16.py", line 116, in VGG16
    name='block1_conv1')(img_input)
  File "/usr/local/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py", line 584, in __call__
    inputs, outputs, args, kwargs)
  File "/usr/local/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py", line 1416, in _set_connectivity_metadata_
    input_tensors=inputs, output_tensors=outputs, arguments=kwargs)
  File "/usr/local/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py", line 1524, in _add_inbound_node
    arguments=arguments)
  File "/usr/local/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py", line 1742, in __init__
    layer.outbound_nodes.append(self)
AttributeError: 'InputLayer' object has no attribute 'outbound_nodes'
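
Not a confirmed fix, but the traceback mixes keras_applications (standalone Keras) with tensorflow/python/keras, and that combination is a common cause of the 'outbound_nodes' error on TF 1.x. Keeping the whole graph on one implementation, e.g. tf.keras, usually avoids it; a hedged sketch:

# Hedged sketch: build the VGG16 backbone entirely from tf.keras so standalone
# Keras and tf.keras objects are never mixed in the same graph.
from tensorflow.keras import Input
from tensorflow.keras.applications import VGG16

input_tensor = Input(shape=input_shape)  # input_shape as defined in gtzan.py
vgg16 = VGG16(include_top=False, weights='imagenet', input_tensor=input_tensor)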
