
deepbach's People

Contributors

ajeetdsouza, andreasjansson, bzamecnik, ericguizzo, ghadjeres, namin, tbazin


deepbach's Issues

DeepBach access rights issue

This command seems to link to a non-existent page:

git clone git@github.com:SonyCSL-Paris/DeepBach.git
cd DeepBach
conda env create -f environment.yml

When I use it I get this message:

Last login: Fri Feb 23 10:11:20 on console
mac-pro-van-arnold-veeman:~ arnoldveeman$ git clone git@github.com:SonyCSL-Paris/DeepBach.git
Cloning into 'DeepBach'...
Warning: Permanently added the RSA host key for IP address '192.30.253.113' to the list of known hosts.
Permission denied (publickey).
fatal: Could not read from remote repository.

Please make sure you have the correct access rights
and the repository exists.
mac-pro-van-arnold-veeman:~ arnoldveeman$ cd DeepBach
-bash: cd: DeepBach: No such file or directory
mac-pro-van-arnold-veeman:~ arnoldveeman$ conda env create -f environment.yml

Can someone help me out in a comprehensive way?

Thank you

voice_model.py line 217: self.dataset has no data_loaders function

Edit: I figured out that MusicDataset is a parent class of ChoraleDataset. I apologize for the confusion.

Hello, thank you for the great update of the PyTorch version of DeepBach.

I was just looking through the code, and at voice_model.py line 217 (in the "train_model" function) there is:

(dataloader_train, dataloader_val, dataloader_test) = self.dataset.data_loaders(batch_size=batch_size)

"self.dataset" uses "ChoraleDataset" class from DatasetManager.chorale_dataset.py, but the class
has no "data_loaders" function. I found that "data_loaders" function is located in "MusicDataset" class DatasetManager.music_dataset.py.

Would it be correct to also import the MusicDataset class in voice_model.py and use its data_loaders function?
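For anyone else reading this, here is a minimal sketch of the relationship (placeholder tensors and method bodies, not the repository's actual implementation). It shows why no extra import is needed: ChoraleDataset simply inherits data_loaders() from MusicDataset.

import torch
from torch.utils.data import DataLoader, TensorDataset, random_split


class MusicDataset:
    """Base class: owns a tensor dataset and knows how to split it."""

    def data_loaders(self, batch_size, split=(0.85, 0.10)):
        dataset = self.tensor_dataset()
        n = len(dataset)
        n_train, n_val = int(split[0] * n), int(split[1] * n)
        parts = random_split(dataset, [n_train, n_val, n - n_train - n_val])
        return tuple(DataLoader(p, batch_size=batch_size) for p in parts)


class ChoraleDataset(MusicDataset):
    """Subclass: only defines how the tensors are built;
    data_loaders() is inherited unchanged from MusicDataset."""

    def tensor_dataset(self):
        # Dummy tensors standing in for the real chorale data.
        return TensorDataset(torch.zeros(100, 4, 32))


train_loader, val_loader, test_loader = ChoraleDataset().data_loaders(batch_size=16)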

Thank you so much.

Fixing soprano part at generation time

Hi again! Since I didn't find the script for fixing the soprano part at generation time as described in the paper, I wrote up the following code, where I pass tensor_chorales from test_dataloader into the generation() method and set voice_index_range to [1,3].

train_dataloader, val_dataloader, test_dataloader = bach_chorales_dataset.data_loaders(batch_size=128, split=(0.85, 0.10))

for tensor_chorale_batch, tensor_metadata_batch in test_dataloader:
    tensor_chorale_batch = cuda_variable(tensor_chorale_batch).long().clone().to('cpu')
    tensor_metadata_batch = cuda_variable(tensor_metadata_batch).long().clone().to('cpu')
    for i in range(tensor_chorale_batch.size(0)):
        tensor_chorale = tensor_chorale_batch[i]
        tensor_metadata = tensor_metadata_batch[i]

        score, tensor_chorale, tensor_metadata = deepbach.generation(
            num_iterations=num_iterations,
            sequence_length_ticks=sequence_length_ticks,
            tensor_chorale=tensor_chorale,
            tensor_metadata=tensor_metadata,
            voice_index_range=[1, 3],
        )

However, there are a couple of problems with this that make me suspect I'm on a different track from the paper.

  1. All of the examples in the tensor dataset are segments 8 beats in length that start at any possible offset (in 16th-note increments) from an original chorale. This means that in the generated segments of 8 beats, the entire chorale is shifted such that the note onsets don't appear on actual beats.
  2. The generation() method doesn't seem to randomly initialize voices that we apply the pseudo-Gibbs algorithm to, although it does do this for the timestep range.
  3. It seems like generations in the paper were longer than 8 beats, since it was possible to extract two 12-second segments.

Probably we shouldn't be using test_dataloader, but I'm not sure where else the test data comes from. Thanks again!

ValueError

I followed your instructions, but when I run the command python3 deepBach.py --ext big -t 30 --timesteps 32 -u 512 256 -d 256 -b 16 on the server, it shows ValueError: Error when checking model input: expected left_features to have shape (None, 16, 245) but got array with shape (16, 32, 245). So I suspect the input shape in the code is wrong. Would you mind telling me what I should do? Thank you!

352 MIDI chorales after discarding voice divisions

Hi!

How can I recreate the 352 MIDI chorales after discarding ones with voice divisions? Are these readily available online or from the code? If not, could you point me to the set of 389 MIDI files that you originally started with? I've been searching online and am not sure which one is the reliable one that you used, as some don't have fermata information in them.

Update: I grabbed MIDIs from this repo, and retrieved the chorales indexed by Kalmus, as your DeepBach paper suggests. (Aside for anyone else trying to recreate this: in their files.txt, there are 419 non-empty entries for Kalmus catalog numbers, but of those 419, only 395 have valid corresponding file names in the directory.)
Of those 395, I parsed each file with music21.converter.parse and kept only those with 4 parts; 352 remained. I hope this isn't a coincidence!

Unfortunately, these still don't have fermata information, so I'm still wondering about your original process!

Thanks!

models_zoo.py:deepBach has not been updated for Keras 2

In b8f6049 and other commits the model code was upgraded for compatibility with Keras 2. However, some necessary changes are missing from this commit: in particular, the merge() function was replaced by add() or concatenate() only in the deepbach_skip_connections model, not in deepBach. This causes the first example command in the README to fail.
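For reference, a minimal sketch of the kind of change involved (the layer shapes and names below are illustrative, not the actual deepBach model code): Keras 2 removes the old merge() function in favour of explicit layers such as add() and concatenate(), and renames the Model constructor arguments.

from keras.layers import Input, Dense, concatenate
from keras.models import Model

left_input = Input(shape=(16,), name='left_features')
right_input = Input(shape=(16,), name='right_features')
left = Dense(8, activation='relu')(left_input)
right = Dense(8, activation='relu')(right_input)

# Keras 1 (removed):  merged = merge([left, right], mode='concat')
# Keras 2 equivalent:
merged = concatenate([left, right])
pitch_prediction = Dense(4, activation='softmax', name='pitch_prediction')(merged)

# Keras 2 also renames the Model arguments: input/output -> inputs/outputs.
model = Model(inputs=[left_input, right_input], outputs=pitch_prediction)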

more parts for instruments?

Hi, I am a composer and a programmer (more of a composer, since I'm not great at math).
The TensorFlow backend somehow didn't work, so I am moving to Theano and still debugging. Anyway, is it possible to make this model more general so that it contains all instrument lines at the same time (e.g. Violin I, Violin II, Oboe I, Oboe II, Violas, Cellos)? I thought that if one composition didn't have one of the instruments, we could just leave that part blank and embed it as blank. Would this work? If so, it would be able to generate orchestral compositions in no time.

Empty training split for small dataset - incorrect rounding

If only one valid chorale is read from a directory of MIDI files, loading the pickled dataset fails with:

DeepBach/data_utils.py in generator_from_raw_dataset(batch_size, timesteps, voice_index, phase, percentage_train, pickled_dataset, transpose)
    365 
    366     while True:
--> 367         chorale_index = np.random.choice(chorale_indices)
    368         extended_chorale = np.transpose(X[chorale_index])
    369         chorale_metas = X_metadatas[chorale_index]

mtrand.pyx in mtrand.RandomState.choice (numpy/random/mtrand/mtrand.c:17200)()

ValueError: a must be non-empty

The problem is that the training split size calculation is not correct.

int(len(X) * percentage_train) for len(X) == 1 and percentage_train == 0.8 evaluates to 0.8, which is truncated to 0. Then np.random.choice([]) fails.

A more proper way would be with round():

training_size = int(round(len(X) * percentage_train))

Still with dataset of size 1, the test split would be empty. So eg. for percentage_train == 0.8 the minimum dataset size for non-empty both training and test split would be 3.
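A minimal sketch of the corrected split calculation (standalone, assuming X is the list of chorales as in data_utils.py):

import numpy as np

def split_indices(X, percentage_train=0.8):
    # round() instead of truncation, so a tiny dataset still gets a
    # non-empty training split (e.g. len(X) == 1 -> training_size == 1).
    training_size = int(round(len(X) * percentage_train))
    train_indices = np.arange(training_size)
    test_indices = np.arange(training_size, len(X))
    return train_indices, test_indices

# len(X) == 3 is the smallest size where both splits are non-empty at 0.8.
print(split_indices([0, 1, 2]))   # (array([0, 1]), array([2]))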

I cannot get Flask to work

Hello,
I'm not a programmer, I'm a musician. I want my students to analyze some compositions made by DeepBach. At first I could not get DeepBach to work, but when I tried the DeepBach from Sony CSL Paris, it worked great.
It took me a while to get DeepBach to do something; I got it working by searching for extra information on the internet. It is not easy. I have done everything from Anaconda.

But what I have not managed to get working is the plugin for MuseScore. I installed Flask from Anaconda and can import it
from Python. Since I use Windows 8.1 (i5, x64), I discovered later that "export" does not work on Windows, so I used "set" instead. So I ran this:

(tensorflow) C:\Users\Boss\DeepBach> set FLASK_APP=plugin_flask_server.py

(tensorflow) C:\Users\Boss\DeepBach> flask run

  • Serving Flask app "plugin_flask_server.py"
  • Environment: production
    WARNING: Do not use the development server in a production environment.
    Use a production WSGI server instead.
  • Debug mode: off
    music21: Certain music21 functions might need the optional package matplotlib;
    if you run into errors, install it by following the instructions at
    http://mit.edu/music21/doc/installing/installAdditional.html
    Using Theano backend.
    WARNING (theano.configdefaults): g++ not available, if using conda: conda install m2w64-toolchain
    C:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\theano\configdefaults.py:560: UserWarning: DeprecationWarning: there is no c++ compiler. This is deprecated and with Theano 0.11 a c++ compiler will be mandatory
      warnings.warn("DeprecationWarning: there is no c++ compiler.")
    WARNING (theano.configdefaults): g++ not detected! Theano will be unable to execute optimized C-implementations (for both CPU and GPU) and will default to Python implementations. Performance will be severely degraded. To remove this warning, set Theano flags cxx to an empty string.
    WARNING (theano.configdefaults): install mkl with `conda install mkl-service`: No module named 'mkl'
    WARNING (theano.tensor.blas): Using NumPy C-API based implementation for BLAS functions
Usage: flask run [OPTIONS]

Error: While importing "DeepBach.plugin_flask_server", an ImportError was raised:

Traceback (most recent call last):
  File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\flask\cli.py", line 235, in locate_app
    __import__(module_name)
  File "C:\Users\Boss\DeepBach\plugin_flask_server.py", line 13, in <module>
    from DeepBach.model_manager import load_models
  File "C:\Users\Boss\DeepBach\model_manager.py", line 13, in <module>
    from .data_utils import generator_from_raw_dataset, BACH_DATASET,
ImportError: cannot import name 'PACKAGE_DIR'

If I try to use the plugin, it does not work. I do not know what I am doing wrong.

The configuration that music21 suggests did not work for me either. I had to configure it another way:
python -m music21.configure

I have observed that when the -l value is increased from 600 to 1000, the result is dissonant. It gives good results
between 200 and 400, but the length in bars is smaller.
I have also tried to harmonize a melody, but the result is not what I expected: it harmonizes the melody in a minor key.

Greetings and congratulations on DeepBach.

Problems with harmonising supplied soprano melody

Following the instructions in the README file, I tried the command
python3 deepBach.py -m melody1.mid -p -i 10000
with the attached midi file. The results were garbage: totally dissonant. I take it there is something wrong with either a) the settings I used or b) the midi file. Some help here would be appreciated.

melody1.mid.zip

AssertionError when use another database

Dear Professor:

I followed your instructions and stored MIDI files with the same number of voices in the data folder. When I run the command python3 deepBach.py --dataset data/ --ext dowland -t 30 --timesteps 32 -u 256 256 -d 256 -b 32 on the server, it shows

Traceback (most recent call last):
  File "deepBach.py", line 183, in <module>
    main()
  File "deepBach.py", line 156, in main
    metadatas=metadatas, timesteps=timesteps)
  File "/home/DeepBach-master/DeepBach/model_manager.py", line 644, in create_models
    labels) = next(gen)
  File "/home/DeepBach-master/DeepBach/data_utils.py", line 410, in generator_from_raw_dataset
    chorale_indices) > 0, "The list of chorales for the phase '%s' must not be empty" % phase
AssertionError: The list of chorales for the phase 'train' must not be empty.

I printed the training_size and total_size in data_utils.py, and the result is 0. I don't know what is wrong. Would you mind telling me what I should do? I'm very grateful for your help.

'Chord' object has no attribute 'pitch'

I was able to process the included "God Save the Queen" sample (by running python3 deepBach.py -l 100). But when I try to run DeepBach with another MIDI file (from MuseScore) I get: AttributeError: 'Chord' object has no attribute 'pitch'

What's the problem with those?
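For what it's worth, the error message suggests the second MIDI contains chords: in music21, a Chord has a .pitches attribute (plural), not the .pitch attribute that a monophonic Note has, which is what the traceback says DeepBach is reading. A small diagnostic sketch (the file name here is hypothetical):

from music21 import converter, chord, note

score = converter.parse('my_file.mid')  # hypothetical file name
for part in score.parts:
    for element in part.recurse().notes:
        if isinstance(element, chord.Chord):
            # Chords expose .pitches, not .pitch; these are the elements
            # that would trigger the AttributeError.
            print('chord', element.pitches, 'at offset', element.offset)
        elif isinstance(element, note.Note):
            print('note', element.pitch, 'at offset', element.offset)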

And thanks for the great work on this one btw. :)

must create a folder called raw_dataset

Traceback (most recent call last):
  File "deepBach.py", line 847, in <module>
    main()
  File "deepBach.py", line 775, in main
    voice_ids=[0, 1, 2, 3])
  File "X:\Library\DeepBach\data_utils.py", line 591, in initialization
    metadatas=metadatas)
  File "X:\Develop\Library\DeepBach\data_utils.py", line 247, in make_dataset
    pickle.dump(dataset, open(dataset_name, 'wb'), pickle.HIGHEST_PROTOCOL)
FileNotFoundError: [Errno 2] No such file or directory: 'datasets/raw_dataset/bach_dataset.pickle'

Otherwise you will waste your time creating a dataset and then hit this error.
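A minimal guard that would avoid this (the path is taken from the traceback above; the dataset object below is just a stand-in): create the directory before pickling.

import os
import pickle

dataset = {'placeholder': True}  # stands in for the real dataset tuple
dataset_name = 'datasets/raw_dataset/bach_dataset.pickle'

# Create the directory first, so pickle.dump does not fail with
# FileNotFoundError after the dataset has already been built.
os.makedirs(os.path.dirname(dataset_name), exist_ok=True)
with open(dataset_name, 'wb') as f:
    pickle.dump(dataset, f, pickle.HIGHEST_PROTOCOL)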

The code is not working as described in the README. What should I do?

Input:

python3 deepBach.py --dataset /Users/shyamalsuhanachandra/websites/MIDI\ Repository/www.piano-midi.de/midis/chopin/ --ext dowland -t 30 --timesteps 32 -u 256 256 -d 256 -b 32
Output:

Creating dataset
Warning: SLUR_SYMBOL used in standard_note
Warning: SLUR_SYMBOL used in standard_note
Warning: SLUR_SYMBOL used in standard_note
Warning: SLUR_SYMBOL used in standard_note
0it [00:00, ?it/s]
Traceback (most recent call last):
  File "deepBach.py", line 847, in <module>
    main()
  File "deepBach.py", line 775, in main
    voice_ids=[0, 1, 2, 3])
  File "/Users/shyamalsuhanachandra/DeepBach/data_utils.py", line 582, in initialization
    metadatas=metadatas)
  File "/Users/shyamalsuhanachandra/DeepBach/data_utils.py", line 247, in make_dataset
    pickle.dump(dataset, open(dataset_name, 'wb'), pickle.HIGHEST_PROTOCOL)
FileNotFoundError: [Errno 2] No such file or directory: 'datasets/custom_dataset/.pickle'

What should I do?

Error appears when I am trying to train model on my dataset

Hello!
Thank you for the interesting project and code!

I have a problem when trying to train the model on my dataset. My dataset is monophonic and has only one voice, so I set the NUM_VOICES parameter in data_utils to 1.
I created a "custom_dataset" folder inside the "datasets" folder, put my dataset inside, and used this command:
python3 deepBach.py --dataset /Users/YagfarovRauf/PycharmProjects/DeepBach-master/datasets/custom_dataset/mydataset -t 15 --timesteps 32 -b 64

Then all my files raise FloatingKeyException and the following error occurs:

FloatingKeyException: File /Users/YagfarovRauf/PycharmProjects/DeepBach-master/datasets/custom_dataset/mydataset/mono99.mid skipped
FloatingKeyException: File /Users/YagfarovRauf/PycharmProjects/DeepBach-master/datasets/custom_dataset/mydataset/mono99.mid skipped
FloatingKeyException: File /Users/YagfarovRauf/PycharmProjects/DeepBach-master/datasets/custom_dataset/mydataset/mono99.mid skipped
FloatingKeyException: File /Users/YagfarovRauf/PycharmProjects/DeepBach-master/datasets/custom_dataset/mydataset/mono99.mid skipped
100%|██████████████████████████████████████████████████████████████████████████████████████████████| 99/99 [02:25<00:00,  1.36s/it]
0 files written in datasets/custom_dataset/mydataset.pickle
/usr/local/lib/python3.5/site-packages/numpy/core/numeric.py:301: FutureWarning: in the future, full((160,), 8) will return an array of dtype('int64')
  format(shape, fill_value, array(fill_value).dtype), FutureWarning)
Traceback (most recent call last):
  File "deepBach.py", line 847, in <module>
    if __name__ == '__main__':
  File "deepBach.py", line 826, in main
    create_models(model_name, create_new=overwrite, num_units_lstm=num_units_lstm, num_dense=num_dense,
  File "deepBach.py", line 588, in create_models
    labels) = next(gen)
  File "/Users/YagfarovRauf/PycharmProjects/DeepBach-master/data_utils.py", line 365, in generator_from_raw_dataset
    chorale_index = np.random.choice(chorale_indices)
  File "mtrand.pyx", line 1397, in mtrand.RandomState.choice (numpy/random/mtrand/mtrand.c:15477)
ValueError: a must be non-empty

follow up on the youtube video last year

Hi Gaëtan, last year you replied to a comment of mine on your YouTube video.
This is what I wrote:

Dear Gaëtan, can you make a comprehensive, step-by-step video on how to install this wonderful invention of yours? I think I almost got it, but for instance, when I paste this code into Terminal:

sudo apt install musescore
python -c 'import music21; music21.environment.set("musicxmlPath", "/usr/bin/musescore")'

I get all kinds of error messages:

apt: invalid flag: install
Usage: apt
where apt options include:
  -classpath  Specify where to find user class files and annotation processor factories
etc. etc.

I really want to be able to try this out. I already got it open in MuseScore, but I could not do much because it gave this parse error:

Debug: on run called Debug: calling endpoint models Debug: SyntaxError: JSON.parse: Parse error

Probably I need to call some files from somewhere else, but I don't know how to do this. Once explained properly I will never forget it ;-)

So, can you make the effort? That would be wonderful! Thank you in advance. Sincerely, Arnold Veeman

Generating a Beethoven dataset_cache

Maybe I am missing something, but I was wondering if there is a way to change the datasets and tensor datasets to something else, like Beethoven. Is there a script to use with source files?

python ImportError: attempted relative import with no known parent package

Hello Gaetan,

For installation on a Mac, I followed the instructions at http://hansekbrand.se/code/DeepBachOSX.html and this seemed to work, except that I had to install TensorFlow 1.8 because it didn't find the recommended version 1.5. Also, I had to install Python 3.6 rather than the latest version, which is now 3.7. After that, all the dependencies were installed without any error. Fine so far.

But when I run:
python3 -m flask run --host=0.0.0.0

I get the following Python-related error, which is widely discussed on the web, but I'm unable to solve it: "ImportError: attempted relative import with no known parent package".
I copied the whole terminal message, but the interesting part is the second half:

 * Serving Flask app "/Users/myUserName/src/DeepBach-master/plugin_flask_server.py"
 * Environment: production
   WARNING: Do not use the development server in a production environment.
   Use a production WSGI server instead.
 * Debug mode: off
Using TensorFlow backend.
Usage: python -m flask run [OPTIONS]

Error: While importing "plugin_flask_server", an ImportError was raised:

Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/flask/cli.py", line 235, in locate_app
    __import__(module_name)
  File "/Users/myUserName/src/DeepBach-master/plugin_flask_server.py", line 182, in <module>
    open(pickled_dataset, 'rb'))
  File "/Users/myUserName/src/DeepBach-master/DeepBach/metadata.py", line 5, in <module>
    from .data_utils import SUBDIVISION
ImportError: attempted relative import with no known parent package

and here is what I did, one step after another:

export PYTHONPATH=/Users/myUserName/src/DeepBach-master/DeepBach

python3

import music21
us=music21.environment.UserSettings()
us['musicxmlPath']='/Applications/MuseScore 2.app'
exit()

# Start Musescore 2 manually, by hand !!!

# server
cd /Users/myUserName/src/DeepBach-master/DeepBach
export FLASK_APP=/Users/myUserName/src/DeepBach-master/plugin_flask_server.py

python3 -m flask run --host=0.0.0.0

the last message triggers the error.

Here is the structure of my deepbach installation:
mydeepbachstructure
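One observation (a guess, not a verified fix): a relative import such as from .data_utils import SUBDIVISION only works when metadata.py is loaded as part of the DeepBach package. With PYTHONPATH pointing at the DeepBach package directory itself, the file can be picked up as a top-level module, and the relative import then fails with exactly this message. Keeping the parent directory (DeepBach-master) on the path and importing through the package avoids that, for example:

import sys

# Put the package *parent* on the path, not the package directory itself.
sys.path.insert(0, '/Users/myUserName/src/DeepBach-master')

# Package-qualified import: metadata.py now has a known parent package,
# so "from .data_utils import SUBDIVISION" inside it can resolve.
from DeepBach import metadata

print(metadata)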

Trouble Using the Musescore Plugin

I'm on Windows 10. After much fiddling, I've gotten DeepBach installed (yay!) and have been able to compose very well by running deepbach.py. However, I'm now trying to use the MuseScore plugin to get interactive composition.

I've moved the deepBachMuseScore plugin file to my MuseScore plugin directory, and that allows it to load, but there are no models to choose from, and no matter what I put in for the server address, none will load. I have models in my DeepBach folder, but that's in a completely separate place from my MuseScore plugin files. Just putting a model file into the MuseScore plugin folder doesn't work either, no matter what I put in for the "server address" (http://localhost:5000/, a direct path to the directory of the model, etc.).

How do I load a model into musescore?

EDIT: Alright, I've done some more work and determined that I need to set it up with Flask to use the MuseScore plugin. But the Flask server is only available in the original_keras branch of the repository and not in master; furthermore, it's not compatible with the current data_utils.py.

Am I missing something, or is the master branch not set up to work as a flask app and therefore to use with musescore?

Finish Keras 2 upgrade

Some parts of the code were upgraded to the new API, but not completely. Let's finish that.

Warnings:

models_zoo.py:29: UserWarning: Update your `Dense` call to the Keras 2 API: `Dense(input_dim=172, name="embedding_left", units=200)`
  output_dim=num_dense, name='embedding_left')
models_zoo.py:31: UserWarning: Update your `Dense` call to the Keras 2 API: `Dense(input_dim=172, name="embedding_right", units=200)`
  output_dim=num_dense, name='embedding_right')
models_zoo.py:62: UserWarning: Update your `Model` call to the Keras 2 API: `Model(inputs=[<tf.Tenso..., outputs=Tensor("pi...)`
  output=pitch_prediction)
deepBach.py:687: UserWarning: The semantics of the Keras 2 argument  `steps_per_epoch` is not the same as the Keras 1 argument `samples_per_epoch`. `steps_per_epoch` is the number of batches to draw from the generator at each epoch. Update your method calls accordingly.
  validation_steps=validation_steps)
deepBach.py:687: UserWarning: Update your `fit_generator` call to the Keras 2 API: `fit_generator(<generator..., epochs=5, verbose=1, validation_data=<generator..., validation_steps=20, steps_per_epoch=500)`
  validation_steps=validation_steps)

Chorale harmonisation with given soprano melody - key?

Running this command on a midi file containing a simple melody in C major, using the pre-trained model available on DropBox,

python3 deepBach.py -m midi/file/path.mid -p 16 -i 20000

gives me a harmonisation in A minor. Also if I try other keys like G major the results just sound strange. Is there some way to tell DeepBach what key I want to use? I cannot see this in the options.

no module named 'metadata' when trying to use pretrained data bach_dataset.pickle

After cloning the current master branch, installing all dependencies, downloading the pretrained models with download_pretrained_data.sh (which, by the way, throws an error saying that the target directories are not empty), and moving the data with mv deepbach_ressources/datasets/raw_dataset/bach_dataset.pickle DeepBach/datasets/raw_dataset, I get the following error when trying to generate a sample:

$ python deepBach.py -l 100 -o output.mid                                                                                    
/usr/lib/python3.6/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
  from ._conv import register_converters as _register_converters
Using TensorFlow backend.
Namespace(batch_size_train=128, dataset='', ext='', length=100, midi_file=None, name='deepbach', num_dense=200, num_iterations=20000, num_units_lstm=[200, 200], output_file='output.mid', overwrite=False, parallel=1, reharmonization=None, steps_per_epoch=500, timesteps=16, train=0, validation_steps=20)
Traceback (most recent call last):
  File "deepBach.py", line 183, in <module>
    main()
  File "deepBach.py", line 102, in main
    'rb'))
ModuleNotFoundError: No module named 'metadata'

The error does not occur if I remove bach_dataset.pickle again.
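A hedged workaround (assuming the pickle was written when metadata.py was importable as a top-level module, and that the same module now lives at DeepBach.metadata, as in the current layout): alias the old module name before unpickling, so pickle can resolve the stored class references.

import sys
import pickle

from DeepBach import metadata

# Let pickle resolve classes that were saved under the old module path
# 'metadata.*' by pointing that name at the relocated module.
sys.modules['metadata'] = metadata

with open('datasets/raw_dataset/bach_dataset.pickle', 'rb') as f:
    dataset = pickle.load(f)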

Windows key error 306

Hi, I don't know if this is compatible with Windows, but I installed all the Python modules and tried out the pretrained model, which worked fine at first (I got many "warning ... added to dictionary" messages, but still). After about an hour or so, however, I got "KeyError with chorale 306" in an endless loop.

Any idea why?

Problem with Installation High Sierra

I'm completely new to this world, but I'm a musicologist and a historically informed performance violin player, and I've decided to try this program with some Renaissance music. I've followed the installation requirements, but when I typed:
python3 deepBach.py -l 100

this message appeared:

Using TensorFlow backend.
Namespace(batch_size_train=128, dataset='', ext='', length=100, midi_file=None, name='deepbach', num_dense=200, num_iterations=20000, num_units_lstm=[200, 200], output_file='', overwrite=False, parallel=1, reharmonization=None, steps_per_epoch=500, timesteps=16, train=0, validation_steps=20)
Creating dataset
Traceback (most recent call last):
  File "deepBach.py", line 183, in <module>
    main()
  File "deepBach.py", line 97, in main
    voice_ids=[0, 1, 2, 3])
  File "/Users/screpach/DeepBach/DeepBach/data_utils.py", line 658, in initialization
    corpus.getBachChorales(fileExtensions='xml'))
AttributeError: module 'music21.corpus' has no attribute 'getBachChorales'

How can I make it work? Or can you give me more detailed information about the installation, please?

Installed on my comp:

MacOS: 10.13.2
Tensorflow: 1.4.1 (w/ SSE4.1, SSE4.2, AVX, AVX2, FMA) from: lakshayg/tensorflow-build
Flask: 0.12
Music21: v5.0.5a2
Keras: 2.1.2

generated output contains only sixteenth notes

Hi,
Thank you for sharing your DeepBach code.

I managed to generate some output with your model using the standard settings:
python3 deepBach.py -l 100

However, the output contains only notes and rests of duration 'sixteenth'. This pattern persists when I increase the number of iterations to 100,000.

Many consecutive notes actually form sequences of the same pitch. Might there therefore be a problem in translating the hold symbol '__' back into note durations?
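For comparison, here is a toy illustration of what the intended decoding of the hold symbol looks like (this is not the repository's code; SUBDIVISION = 4 sixteenths per beat is assumed): consecutive holds should merge into one longer note rather than repeated sixteenths of the same pitch.

SUBDIVISION = 4   # sixteenth notes per beat (assumed)
HOLD = '__'

sequence = ['C4', HOLD, HOLD, HOLD, 'D4', HOLD, 'E4', HOLD]

notes = []
for symbol in sequence:
    if symbol == HOLD:
        # Extend the previous note by one sixteenth instead of emitting
        # a new sixteenth of the same pitch.
        name, dur = notes[-1]
        notes[-1] = (name, dur + 1)
    else:
        notes.append((symbol, 1))

# [('C4', 1.0), ('D4', 0.5), ('E4', 0.5)]  -> quarter, eighth, eighth
print([(name, dur / SUBDIVISION) for name, dur in notes])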

I actually had to install an older version of tensorflow to get keras 1.2.0 working with a tensorflow backend. I downloaded version 0.11.0. Can you share the version you are using?

making pickle file from new midi files

Thanks for the great repository.

I'd like to experiment with DeepBach on my own MIDI dataset,
but the way the pickle file is made is a bit difficult to comprehend.

Could you please let us know how to convert MIDI files into the pickle file that is used,
e.g. what kind of preprocessing is needed, how to organize the files, etc.?

Thanks a bunch.

RuntimeError: CUDA error: device-side assert triggered

I tried "python deepBach.py --train", the following Error message came:
FileNotFoundError: [Errno 2] No such file or directory: '/home/gaetan/Public/Python/workspace/DatasetManager/DatasetManager/dataset_cache/tensor_datasets'

I added "self.cache_dir = '/DeepBach/DatasetManager/dataset_cache'"(line 149), at the function "tensor_dataset_filepath(self)"(line 148~), in music_dataset.py
Then another error message appeared:
Traceback (most recent call last):
  File "deepBach.py", line 95, in <module>
    main()
  File "/anaconda3/lib/python3.7/site-packages/click/core.py", line 764, in __call__
    return self.main(*args, **kwargs)
  File "/anaconda3/lib/python3.7/site-packages/click/core.py", line 717, in main
    rv = self.invoke(ctx)
  File "/anaconda3/lib/python3.7/site-packages/click/core.py", line 956, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/anaconda3/lib/python3.7/site-packages/click/core.py", line 555, in invoke
    return callback(*args, **kwargs)
  File "deepBach.py", line 80, in main
    num_epochs=num_epochs)
  File "/DeepBach/DeepBach/model_manager.py", line 70, in train
    self.train(main_voice_index=voice_index, **kwargs)
  File "/DeepBach/DeepBach/model_manager.py", line 76, in train
    voice_model.train_model(optimizer=optimizer, **kwargs)
  File "/DeepBach/DeepBach/voice_model.py", line 224, in train_model
    phase='train')
  File "/DeepBach/DeepBach/voice_model.py", line 261, in loss_and_acc
    weights = self.forward(notes, metas)
  File "/DeepBach/DeepBach/voice_model.py", line 113, in forward
    lstm_hidden_size=self.lstm_hidden_size,
  File "/DeepBach/DeepBach/helpers.py", line 28, in init_hidden
    volatile=volatile),
  File "/DeepBach/DeepBach/helpers.py", line 11, in cuda_variable
    return Variable(tensor.cuda(), volatile=volatile)
RuntimeError: CUDA error: device-side assert triggered

What should I do?

Ubuntu 16.04, Python 3.7.2, torch 1.0.0

Handle metadata selection in a proper way

It is possible to choose which metadata to use when creating the dataset for the first time.

# fixed set of metadatas to use when CREATING the dataset
# metadatas = [FermataMetadatas(), KeyMetadatas(window_size=1), TickMetadatas(SUBDIVISION), ModeMetadatas()]
 metadatas = [TickMetadatas(SUBDIVISION), FermataMetadatas(), KeyMetadatas(window_size=1)]

Choosing which metadata to use allows faster creation of the dataset (it is actually the Key and Mode metadata that take longer to compute) and different generations.

Musescore plugin works, but cannot compose

I have got the MuseScore plugin to work: it can list models and load a model. But when I select a section of a four-part chorale and press "Compose", the server fails with the following error:

[2018-02-22 18:58:03,513] ERROR in app: Exception on /compose [POST]
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/flask/app.py", line 1982, in wsgi_app
    response = self.full_dispatch_request()
  File "/usr/lib/python2.7/dist-packages/flask/app.py", line 1614, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/usr/lib/python2.7/dist-packages/flask/app.py", line 1517, in handle_user_exception
    reraise(exc_type, exc_value, tb)
  File "/usr/lib/python2.7/dist-packages/flask/app.py", line 1612, in full_dispatch_request
    rv = self.dispatch_request()
  File "/usr/lib/python2.7/dist-packages/flask/app.py", line 1598, in dispatch_request
    return self.view_functions[rule.endpoint](**req.view_args)
  File "/home/hans/DeepBach/plugin_flask_server.py", line 217, in compose
    input_chorale = converter.parse(file.name)
  File "/usr/local/lib/python2.7/dist-packages/music21/converter/__init__.py", line 1110, in parse
    forceSource=forceSource, **keywords)
  File "/usr/local/lib/python2.7/dist-packages/music21/converter/__init__.py", line 998, in parseFile
    v.parseFile(fp, number=number, format=format, forceSource=forceSource, **keywords)
  File "/usr/local/lib/python2.7/dist-packages/music21/converter/__init__.py", line 533, in parseFile
    self.parseFileNoPickle(fp, number, format, forceSource, **keywords)
  File "/usr/local/lib/python2.7/dist-packages/music21/converter/__init__.py", line 467, in parseFileNoPickle
    self.subConverter.parseFile(fp, number=number, **keywords)
  File "/usr/local/lib/python2.7/dist-packages/music21/converter/subConverters.py", line 785, in parseFile
    c.readFile(fp)
  File "/usr/local/lib/python2.7/dist-packages/music21/musicxml/xmlToM21.py", line 692, in readFile
    etree = ET.parse(filename)
  File "<string>", line 62, in parse
  File "<string>", line 38, in parse
ParseError: unclosed token: line 2111, column 4
127.0.0.1 - - [22/Feb/2018 18:58:03] "POST /compose HTTP/1.1" 500 -

Parameters for outputting a generated chorale of length 100 as a *.mid file

Hi,

I tried the following command after installing all the dependencies and the DeepBach code:

python3 deepBach.py -l 100 -o deepBach.mid

Here is the error I get:

Traceback (most recent call last):
  File "deepBach.py", line 847, in <module>
    main()
  File "deepBach.py", line 775, in main
    voice_ids=[0, 1, 2, 3])
  File "/Users/shyamalsuhanachandra/DeepBach/data_utils.py", line 582, in initialization
    metadatas=metadatas)
  File "/Users/shyamalsuhanachandra/DeepBach/data_utils.py", line 247, in make_dataset
    pickle.dump(dataset, open(dataset_name, 'wb'), pickle.HIGHEST_PROTOCOL)
FileNotFoundError: [Errno 2] No such file or directory: 'datasets/raw_dataset/bach_dataset.pickle'
I see the following code inside deepBach.py:

It seems that datasets/raw_dataset/bach_dataset.pickle is missing. Will you be providing that file in the future? If there is no such file, the code uses the constant BACH_DATASET instead and fails.

    if output_file:
        mf = midi.translate.music21ObjectToMidiFile(score)
        mf.open(output_file, 'wb')
        mf.write()
        mf.close()
        print("File " + output_file + " written")

I checked out the indexed_chorale_to_score function from the data_utils library, which is called before the snippet of code above. The code for that function is the following:

def indexed_chorale_to_score(seq, pickled_dataset):
    _, _, _, index2notes, note2indexes, _ = pickle.load(open(pickled_dataset, 'rb'))
    num_pitches = list(map(len, index2notes))
    slur_indexes = list(map(lambda d: d[SLUR_SYMBOL], note2indexes))

    score = stream.Score()
    for voice_index, v in enumerate(seq):
        part = stream.Part(id='part' + str(voice_index))
        dur = 0
        f = note.Rest()
        for k, n in enumerate(v):
            # if it is a played note
            if not n == slur_indexes[voice_index]:
                # add previous note
                if dur > 0:
                    f.duration = duration.Duration(dur / SUBDIVISION)
                    part.append(f)

                dur = 1
                f = standard_note(index2notes[voice_index][n])
            else:
                dur += 1
        # add last note
        f.duration = duration.Duration(dur / SUBDIVISION)
        part.append(f)
        score.insert(part)
    return score

There is also a function that converts the score into MIDI; it uses the output of the previous function as the score variable on line 55 of deepBach.py. Do I need to make any modifications?

def music21ObjectToMidiFile(music21Object):
    '''
    Either calls streamToMidiFile on the music21Object or
    puts a copy of that object into a Stream (so as
    not to change activeSites, etc.) and calls streamToMidiFile on
    that object.
    '''
    classes = music21Object.classes
    if 'Stream' in classes:
        return streamToMidiFile(music21Object)
    else:
        m21ObjectCopy = copy.deepcopy(music21Object)
        s = stream.Stream()
        s.insert(0, m21ObjectCopy)
        return streamToMidiFile(s) 

What should I do? Can you provide the pretrained dataset or a dataset of MIDIs?

Running on Mac OS 10.12.2 error

I followed the installation instructions in the Readme. When I run the following command
python3 deepBach.py -l 100
I get this error

FSPathMakeRef(/Applications/Finale Notepad 2014.app) failed with error -43.

Please help.

Ubuntu problems with data setup

Interesting module; the basic setup is causing me some problems. On Ubuntu:

conda env create --name deepbach_pytorch -f environment.yml

SpecNotFound: Invalid name, try the format: user/package

And if I skip that, then the next examples, e.g.

python3 deepBach.py
Using TensorFlow backend.

end up with:
  File "/mnt/c/dev/2019/DeepBach/data_utils.py", line 584, in initialization
    chorale_list = filter_file_list(corpus.getBachChorales(fileExtensions='xml'))
AttributeError: module 'music21.corpus' has no attribute 'getBachChorales'

so some data seems to be missing, and I'm wondering where I could get it.

Datasets in dataset_cache

We noticed that only the dataset in dataset_cache/tensor_datasets/ is required to train the model and generate new chorales. However, the provided dataset in tensor_datasets/ in the zip file is named ChoraleDataset([0],bach_chorales,['fermata', 'tick', 'key'],8,4), indicating that it only contains the soprano voice.

If this dataset is used for training, should it not contain all four voices? Otherwise, if it is used to fix the soprano part at generation time, it seems from our manual observation of the generated chorales that all notes are being sampled and the soprano parts are not real Lutheran melodies.

Also, what is the difference in purpose between the datasets in the datasets/ and tensor_datasets/ folder?

Thank you so much!

Input midi file got TypeError: part_to_inputs() missing 1 required positional argument: 'length'

Hi,
First, thank you for the great work!
I followed the first few examples to train the models and got some outputs with the default raw dataset. Now I would like to input my own .mid file, but I get this error:

Traceback (most recent call last):
  File "deepBach.py", line 183, in <module>
    main()
  File "deepBach.py", line 125, in main
    note2index=note2indexes[0])
TypeError: part_to_inputs() missing 1 required positional argument: 'length'

I noticed that #43 also encountered the same problem, but there seem to be no new commits since that issue. I am wondering if any updates will be available soon.
Also, I found that #6 also tried to supply a custom melody and was able to get valid outputs. May I know whether some older versions of the project work? Would you suggest using an old version to get the -m option working for now?

Thank you!

TypeError: part_to_inputs() takes exactly 4 arguments (3 given)

part_to_inputs() requires four arguments, but it is called with only three arguments at lines 125-126 in deepBach.py.

The length argument seems to be missing, see data_utils.py line 142.

I tried to insert len(melody) as a second argument to part_to_inputs(), and deepBach.py then ran without errors, but the resulting XML file was not correct.

Two questions

Hello. I am a music composer and computer lover; however, I wish my programming skills were up to the level needed to understand ML and your code. First of all, congratulations on the project and all your activities!

I was able to set up everything in Linux and have a working model trained for 15 epochs that produces some nice outputs (not always).

My first question is: for how many epochs should it be trained to produce output similar to the examples available on the webpage?

My second question is: is there any chance that the model could also be used to expand chorales into chorale preludes, as Bach did in his organ chorale preludes? I think it would be a really nice task to see whether there is a deep learning model that could expand music by motivic elaboration from a chorale into a more complex instrumental piece.

Thanks in advance!!

Could you please tell me what the following numbers represent in the chorale? Thank you.

array([11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11,
11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11,
11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11,
11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11,
11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11,
11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11,
11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11,
11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11], dtype=int32)]
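If it helps, those integers are indices into the per-voice note vocabulary stored in the pickled dataset; the unpacking below follows the indexed_chorale_to_score snippet quoted in another issue, and the path is the default bach_dataset.pickle, so treat it as a sketch rather than a guaranteed recipe.

import pickle

with open('datasets/raw_dataset/bach_dataset.pickle', 'rb') as f:
    _, _, _, index2notes, note2indexes, _ = pickle.load(f)

voice_index = 0
# Whatever symbol (a pitch name, a rest, or the hold/slur symbol) is
# encoded as 11 for this voice:
print(index2notes[voice_index][11])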

Training epochs

I have implemented some code that updates the best model so far if the validation loss at the end of an epoch is lower than in the previous epoch. Even so, I was wondering how many epochs you trained the original DeepBach model for, so that we can make sure we replicate your results.

Thank you!

Missing module

h5py is a required module but isn't included in requirements.txt. I am using Theano on CentOS with python3.4. Manually installing it with pip3 worked fine.

I also get an error:

    timesteps = int(models[0].input[0]._shape[1])
AttributeError: 'TensorVariable' object has no attribute '_shape'

I'm presuming this is a problem with the theano versus tensorflow backend. I wasn't able to find a workaround, although setting this to 0 seemed to make everything run.

And don't forget to add a link to the paper to README.txt! Haven't had a chance to really dig into it yet, but the results I listened to were great!

EDIT: Any chance of instructions on how to train on alternative datasets?

the example of Generate chorale harmonization on the pretrained model

I extracted the pretrained model files to DeepBach/models/ and DeepBach/datasets/raw_dataset/, and the sample command "Generate a chorale of length 100" runs well.
But after running the example "Generate a chorale harmonization with soprano extracted from a MIDI file",

python3 deepBach.py -m -p

the terminal shows the errors below:
Using TensorFlow backend.
Namespace(batch_size_train=128, dataset='', ext='', length=160, midi_file='datasets/god_save_the_queen.mid', name='deepbach', num_dense=200, num_iterations=20000, num_units_lstm=[200, 200], output_file='', overwrite=False, parallel=16, reharmonization=None, steps_per_epoch=500, timesteps=16, train=0, validation_steps=20)
model models/deepbach_0 loaded
model models/deepbach_1 loaded
model models/deepbach_2 loaded
model models/deepbach_3 loaded
Traceback (most recent call last):
  File "deepBach.py", line 847, in <module>
    main()
  File "deepBach.py", line 843, in main
    pickled_dataset=pickled_dataset)
  File "deepBach.py", line 35, in generation
    parallel_updates=True, pickled_dataset=pickled_dataset)
  File "deepBach.py", line 224, in parallel_gibbs
    seq[timesteps:-timesteps, 0] = melody
ValueError: cannot copy sequence with size 168 to array axis with dimension 160

Thanks
Joe

Wrong array dim for np.full(padding_dimensions, end_symbols)

np.full(padding_dimensions, end_symbols) in generator_from_raw_dataset() fails with ValueError: could not broadcast input array from shape (4) into shape (16)

my data:

  • start_symbols.shape == (4,)
  • padding_dimensions == (16,)
  • extended_chorale.shape == (4,) <-- wrong

original data:

  • extended_chorale.shape == (256, 4) <-- OK
  • padding_dimensions == (16, 4)

Cause: the arrays in list X must be 2D, but some were 1D.

Stacktrace:

DeepBach/deepBach.py in main()
    824     if not os.path.exists('models/' + model_name + '_' + str(NUM_VOICES - 1) + '.yaml'):
    825         create_models(model_name, create_new=overwrite, num_units_lstm=num_units_lstm, num_dense=num_dense,
--> 826                       pickled_dataset=pickled_dataset, num_voices=num_voices, metadatas=metadatas, timesteps=timesteps)
    827     if train:
    828         models = train_models(model_name=model_name, steps_per_epoch=steps_per_epoch, num_epochs=num_epochs,

DeepBach/deepBach.py in create_models(model_name, create_new, num_dense, num_units_lstm, pickled_dataset, num_voices, metadatas, timesteps)
    585              right_features),
    586             (left_metas, central_metas, right_metas),
--> 587             labels) = next(gen)
    588 
    589         if 'deepbach' in model_name:

DeepBach/data_utils.py in generator_from_raw_dataset(batch_size, timesteps, voice_index, phase, percentage_train, pickled_dataset, transpose)
    394         end_symbols = np.array(list(map(lambda note2index: note2index[END_SYMBOL], note2indexes)))
    395 
--> 396         extended_chorale = np.concatenate((np.full(padding_dimensions, start_symbols),
    397                                            extended_chorale,
    398                                            np.full(padding_dimensions, end_symbols)),

/Users/bzamecnik/anaconda/lib/python3.4/site-packages/numpy/core/numeric.py in full(shape, fill_value, dtype, order)
    301         dtype = array(fill_value).dtype
    302     a = empty(shape, dtype, order)
--> 303     multiarray.copyto(a, fill_value, casting='unsafe')
    304     return a
    305 

ValueError: could not broadcast input array from shape (4) into shape (16)
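A minimal reproduction of the broadcast failure with synthetic values (the shapes match the ones listed above, not the real data):

import numpy as np

start_symbols = np.array([101, 102, 103, 104])   # one start symbol per voice, shape (4,)

# 2D chorale (timesteps, num_voices): padding_dimensions == (16, 4), and
# np.full broadcasts the four symbols across the 16 padding timesteps.
ok = np.full((16, 4), start_symbols)
print(ok.shape)   # (16, 4)

# 1D chorale: padding_dimensions collapses to (16,), and the shape-(4,)
# fill value cannot be broadcast into it.
try:
    np.full((16,), start_symbols)
except ValueError as e:
    print(e)   # could not broadcast input array from shape (4) into shape (16)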

Installing

Hi all!

I have a small budget if someone can help me install the DeepBach plugin via TeamViewer or something like that. I tried myself (first time on GitHub, first time opening a terminal) and I
have been able to install music21 and other things, but it's not really made for end users, so I'll need some help...

Please write me if you can help, I would love to experiment with DeepBach. My email: [email protected]

Thanks!

Trying to create "HelloWorld" script for DeepBach

I'm trying to get DeepBach to minimally work on Ubuntu... sort of like "Hello World" for DeepBach. When I did, I got a score which sounds random... cacophonous, inharmonious, no sense of melody.

I suspect I'm misunderstanding something fundamental... if you could point me in the right direction, I would be most grateful.

Here is what I did on my Ubuntu 16.04 system with Python 3.5.2.

Get musescore
sudo add-apt-repository ppa:mscore-ubuntu/mscore-stable
sudo apt-get update
sudo apt-get install musescore

get Python tools
sudo apt-get install python-pip python-dev
sudo apt-get install python3-pip
pip3 install --upgrade pip

get DeepBach
git clone https://github.com/Ghadjeres/DeepBach
cd DeepBach

Install prerequisites.
h5py is required too; I specified the most recent one.
Note that TensorFlow 1.x does not work with Keras 1.2, so I used TF 0.12.

echo "tensorflow==0.12.0" >> requirements.txt
echo "h5py==2.7.0" >> requirements.txt
sudo pip3 install -r requirements.txt

Fix a glitch: deepBach writes to this dir but dies if it's not already there
mkdir models

Tell music21 about MuseScore
python3

import music21
us=music21.environment.UserSettings()
us['musicxmlPath']='/usr/bin/musescore'
exit()

Now run the first example from README
python3 deepBach.py -l 100

And voila, after a long time it comes back in MuseScore, with some "music"! But it's rapid, chaotic, random stuff. Should it be? How could I fix it?

Thanks

--- Dan

PS I could zip up the MuseScore (.mscz) file and attach it here, if that would help.

Why do you carry out transposition? Thank you!

if transpose:
    midi_pitches = [[n.pitch.midi for n in chorale.parts[voice_id].flat.notes]
                    for voice_id in voice_ids]
    min_midi_pitches_current = np.array([min(l) for l in midi_pitches])
    max_midi_pitches_current = np.array([max(l) for l in midi_pitches])
    # Range of transpositions (in semitones) that keeps every voice inside
    # the globally observed pitch range.
    min_transposition = max(min_midi_pitches - min_midi_pitches_current)
    max_transposition = min(max_midi_pitches - max_midi_pitches_current)
    for semi_tone in range(min_transposition, max_transposition + 1):
        try:
            interval_type, interval_nature = interval.convertSemitoneToSpecifierGeneric(semi_tone)
            transposition_interval = interval.Interval(str(interval_nature) + interval_type)
            chorale_tranposed = chorale.transpose(transposition_interval)
