
music_rnn_rbm's Introduction

Note: this is described in detail here: http://danshiebler.com/2016-08-17-musical-tensorflow-part-two-the-rnn-rbm/

Music_RNN_RBM

This repository contains code for generating long sequences of polyphonic music by using an RNN_RBM in TensorFlow.

TLDR:

You can generate music by cloning the repository and running:

python rnn_rbm_generate.py parameter_checkpoints/pretrained.ckpt

This will populate the music_outputs directory with MIDI files that you can play with an application like GarageBand.

Training

To train the model, first run the following command to initialize the parameters of the RBM.

python weight_initializations.py

Then, run the following command to train the RNN_RBM model:

python rnn_rbm_train.py <num_epochs>

num_epochs can be any integer; somewhere between 50 and 500 is reasonable, depending on the hyperparameters.
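For example, a mid-range run might look like this (200 is only an illustrative value):

python rnn_rbm_train.py 200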

Generation

The command:

python rnn_rbm_generate.py <path_to_ckpt_file>

will generate music using the weights stored in path_to_ckpt_file. You can use the provided file parameter_checkpoints/pretrained.ckpt, or one of the .ckpt files that you create yourself: when you run rnn_rbm_train.py, the model saves an epoch_<x>.ckpt file in the parameter_checkpoints directory every couple of epochs.
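For example, to generate from a checkpoint saved partway through training (the epoch number here is only a placeholder):

python rnn_rbm_generate.py parameter_checkpoints/epoch_100.ckpt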

music_rnn_rbm's People

Contributors

dshieble

music_rnn_rbm's Issues

Where are time_steps, iterations, and music? They aren't defined in this function

I used

music = tf.concat([music, x_out],0)

And

loop_vars = [time_steps, iterations, U, u_t, x, music]
[_, _, _, _, _, music] = tf.while_loop(
    lambda count, num_iter, *args: count < num_iter, generate_recurrence, loop_vars,
    shape_invariants=[time_steps.get_shape(), iterations.get_shape(), U.get_shape(),
                      u_t.get_shape(), x.get_shape(), tf.TensorShape([None, None])])

Originally posted by @Guije2018 in #12 (comment)
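For reference, in the repository's generate function (quoted in full in a later issue below) these loop variables are created immediately before the tf.while_loop call. A minimal sketch of that setup, with names as in rnn_rbm.py and num / n_visible assumed to already be in scope:

    Uarr = tf.scan(rnn_recurrence, x, initializer=u0)  # run the RNN over the primer
    U = Uarr[int(np.floor(prime_length / midi_manipulation.num_timesteps)), :, :]
    time_steps = tf.constant(1, tf.int32)               # loop counter, starts at 1
    iterations = tf.constant(num)                       # number of timesteps to generate
    u_t = tf.zeros([1, n_visible], tf.float32)          # RNN state fed into the loop
    music = tf.zeros([1, n_visible], tf.float32)        # accumulator that generate_recurrence appends to
    loop_vars = [time_steps, iterations, U, u_t, x, music]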

How to convert midi to msgpack please?

Hello Dan,

In the file rbm_chords.py there is the line:
songs = get_songs('Pop_Music_Midi') #These songs have already been converted from midi to msgpack

How to convert midi to msgpack please?

Many thanks to all,
Dorje
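One possible approach, sketched here as an assumption rather than anything the repository provides: build a note-state matrix with midi_manipulation.midiToNoteStateMatrix (the function the tracebacks elsewhere on this page call) and serialize it with the msgpack package. The file paths below are placeholders.

    # Sketch only: convert one MIDI file to a note-state matrix and write it as msgpack.
    import msgpack
    import numpy as np
    import midi_manipulation

    matrix = midi_manipulation.midiToNoteStateMatrix('Pop_Music_Midi/some_song.midi')  # placeholder path
    matrix = np.asarray(matrix).tolist()  # plain Python lists so msgpack can serialize them
    with open('some_song.msgpack', 'wb') as f:
        f.write(msgpack.packb(matrix))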

TypeError: while_loop() got an unexpected keyword argument 'shape_invariants'

0%|                                                                         | 0/3 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "rnn_rbm_generate.py", line 48, in <module>
    main(sys.argv[1])
  File "rnn_rbm_generate.py", line 43, in main
    generated_music = sess.run(generate(300), feed_dict={x: song_primer}) #Prime the network with song primer and generate an original song
  File "/Users/roberthecimovic/Documents/Code/Github/poly-rnn/rnn_rbm.py", line 109, in generate
    x.get_shape(), tf.TensorShape([None, 780])])
TypeError: while_loop() got an unexpected keyword argument 'shape_invariants'
(venv) FAIL: 1

I had this working, but then I did a clean install of Python and now I'm getting stuck on this last generation step, which wasn't a problem at all before. No idea what I'm getting wrong.
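The shape_invariants keyword only exists in newer TensorFlow releases, so this TypeError usually means the clean environment pulled in an older TensorFlow than the one the code was written against. A quick diagnostic (just a sketch):

    # Check which TensorFlow version the fresh virtualenv actually installed.
    import tensorflow as tf
    print(tf.__version__)

If it reports an older release, installing a TensorFlow version whose tf.while_loop accepts shape_invariants inside the virtualenv should get the generation step working again; the exact version to pin depends on the rest of the code.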

ValueError: The shape for while_1/Merge_2:0 is not an invariant for the loop. It enters the loop with shape (1, 100), but has shape (?, 100) after one iteration. Provide shape invariants using either the `shape_invariants` argument of tf.while_loop or set_shape() on the loop variables

I tried running your code with slight tweaks that are in no way related to this part, but it doesn't seem to be working.

Uarr = tf.scan(rnn_recurrence, x, initializer=u0)
U = Uarr[int(np.floor(prime_length / midi_manipulation.num_timesteps)), :, :]
time_steps = tf.constant(1, tf.int32)
iterations = tf.constant(num)
u_t = tf.zeros([1, n_visible], tf.float32)
music = tf.zeros([1, n_visible], tf.float32)

loop_vars = [time_steps, iterations, U, u_t, x, music]
[_, _, _, _, _, music] = tf.while_loop(lambda count, num_iter, *args: count < num_iter, generate_recurrence,
                                       loop_vars, shape_invariants=[time_steps, iterations, U, u_t, x,
                                                                    tf.TensorShape([1, 15600])])

return music
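For comparison, the snippet quoted in the first issue above passes the result of get_shape() (a TensorShape) for each loop variable rather than the tensors themselves, which is what the shape_invariants argument expects. A hedged sketch of that form, with the final entry left as [None, None] because the music tensor grows every iteration:

    loop_vars = [time_steps, iterations, U, u_t, x, music]
    [_, _, _, _, _, music] = tf.while_loop(
        lambda count, num_iter, *args: count < num_iter, generate_recurrence, loop_vars,
        shape_invariants=[time_steps.get_shape(), iterations.get_shape(), U.get_shape(),
                          u_t.get_shape(), x.get_shape(), tf.TensorShape([None, None])])

Any loop variable whose shape changes across iterations (the error message names one entering as (1, 100) and leaving as (?, 100)) needs a similarly relaxed TensorShape instead of its static get_shape().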

IOError: [Errno 2] No such file or directory: 'music_outputs/0_Every Time We Touch - Chorus.midi.mid'

Traceback (most recent call last):
  File "rnn_rbm_generate.py", line 45, in <module>
    main(sys.argv[1])
  File "rnn_rbm_generate.py", line 42, in main
    midi_manipulation.write_song(new_song_path, generated_music)
  File "/home/gloomyghost/rbm/Music_RNN_RBM/midi_manipulation.py", line 15, in write_song
    noteStateMatrixToMidi(song, name=path)
  File "/home/gloomyghost/rbm/Music_RNN_RBM/midi_manipulation.py", line 138, in noteStateMatrixToMidi
    midi.write_midifile("{}.mid".format(name), pattern)
  File "/usr/local/lib/python2.7/dist-packages/midi/fileio.py", line 150, in write_midifile
    midifile = open(midifile, 'wb')
IOError: [Errno 2] No such file or directory: 'music_outputs/0_Every Time We Touch - Chorus.midi.mid'
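The open(midifile, 'wb') call at the bottom of the traceback fails because the music_outputs directory does not exist in the working directory, and open will not create intermediate folders. Creating the directory before generating (or simply mkdir music_outputs) should fix it; a minimal sketch, assuming you run from the repository root:

    # Make sure the output directory exists before write_song tries to save MIDI files.
    import os
    if not os.path.exists('music_outputs'):
        os.makedirs('music_outputs')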

no such file

Hi there, I'm having an issue. I changed the initial MIDI files provided in the repo, and now when I try to generate music it raises the following:

python rnn_rbm_generate.py parameter_checkpoints/epoch_149.ckpt

Traceback (most recent call last):
  File "rnn_rbm_generate.py", line 49, in <module>
    main(sys)
  File "rnn_rbm_generate.py", line 32, in main
    song_primer = midi_manipulation.get_song(primer_song)
  File "/media/aram/Mordigan/Happy_Music_Generator-master/Music_RNN_RBM/midi_manipulation.py", line 19, in get_song
    song = np.array(midiToNoteStateMatrix(path))
  File "/media/aram/Mordigan/Happy_Music_Generator-master/Music_RNN_RBM/midi_manipulation.py", line 37, in midiToNoteStateMatrix
    pattern = midi.read_midifile(midifile)
  File "/usr/local/lib/python2.7/dist-packages/midi/fileio.py", line 156, in read_midifile
    midifile = open(midifile, 'rb')
IOError: [Errno 2] No such file or directory: 'Pop_Music_Midi/Every Time We Touch - Chorus.midi'

There shouldn't be such a file anymore, and I can't figure out how to fix this. Any help?
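Since the traceback shows get_song being called with the old "Every Time We Touch" primer, the likely cause is that the primer path used by rnn_rbm_generate.py still points at one of the files that was replaced. A minimal sketch of one way to pick a primer that actually exists (primer_song here is assumed to be the variable from the traceback):

    # Sketch: prime generation with whatever is currently in Pop_Music_Midi instead of
    # the original hard-coded song.
    import os
    midi_dir = 'Pop_Music_Midi'
    primer_song = os.path.join(midi_dir, os.listdir(midi_dir)[0])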

ValueError: Shapes (2, 1, 780) and () are incompatible

def generate(num, x=x, size_bt=size_bt, u0=u0, n_visible=n_visible, prime_length=100):
    """
        This function handles generating music. This function is one of the outputs of the build_rnnrbm function
        Args:
            num (int): The number of timesteps to generate
            x (tf.placeholder): The data vector. We can use feed_dict to set this to the music primer. 
            size_bt (tf.float32): The batch size
            u0 (tf.Variable): The initial state of the RNN
            n_visible (int): The size of the data vectors
            prime_length (int): The number of timesteps into the primer song that we use before beginning to generate music
        Returns:
            The generated music, as a tf.Tensor

    """
    Uarr = tf.scan(rnn_recurrence, x, initializer=u0)
    U = Uarr[int(np.floor(prime_length/midi_manipulation.num_timesteps)), :, :]
    [_, _, _, _, _, music] = control_flow_ops.while_loop(lambda count, num_iter, *args: count < num_iter,
                                                     generate_recurrence, [tf.constant(1, tf.int32), tf.constant(num), U,
                                                     tf.zeros([1, n_visible], tf.float32), x,
                                                     tf.zeros([1, n_visible],  tf.float32)])
    return music

Error:
Traceback (most recent call last):
  File "rnn_rbm_generate.py", line 45, in <module>
    main(sys.argv[1])
  File "rnn_rbm_generate.py", line 40, in main
    generated_music = sess.run(generate(300), feed_dict={x: song_primer}) #Prime the network with song primer and generate an original song
  File "C:\Users\HP\Downloads\music-github\Music_RNN_RBM\rnn_rbm.py", line 96, in generate
    tf.zeros([1, n_visible], tf.float32)])
  File "C:\Users\HP\Anaconda3\lib\site-packages\tensorflow\python\ops\control_flow_ops.py", line 3291, in while_loop
    return_same_structure)
  File "C:\Users\HP\Anaconda3\lib\site-packages\tensorflow\python\ops\control_flow_ops.py", line 3004, in BuildLoop
    pred, body, original_loop_vars, loop_vars, shape_invariants)
  File "C:\Users\HP\Anaconda3\lib\site-packages\tensorflow\python\ops\control_flow_ops.py", line 2939, in _BuildLoop
    body_result = body(*packed_vars_for_body)
  File "C:\Users\HP\Downloads\music-github\Music_RNN_RBM\rnn_rbm.py", line 74, in generate_recurrence
    music = tf.concat(0, [music, x_out])
  File "C:\Users\HP\Anaconda3\lib\site-packages\tensorflow\python\ops\array_ops.py", line 1122, in concat
    tensor_shape.scalar())
  File "C:\Users\HP\Anaconda3\lib\site-packages\tensorflow\python\framework\tensor_shape.py", line 848, in assert_is_compatible_with
    raise ValueError("Shapes %s and %s are incompatible" % (self, other))
ValueError: Shapes (2, 1, 780) and () are incompatible
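The traceback points at tf.concat(0, [music, x_out]) inside generate_recurrence. TensorFlow 1.x reversed the argument order of tf.concat to (values, axis), so under a 1.x install the 0 is treated as the values and the list as the axis, which produces exactly this shape mismatch. The likely fix is the form already quoted in the first issue on this page:

    # tf.concat takes (values, axis) in TensorFlow 1.x.
    music = tf.concat([music, x_out], 0)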

Cannot Generate Music?! class AbstractEvent(object, metaclass=AutoRegister): SyntaxError: invalid syntax

python2 rnn_rbm_generate.py /Users/humankhoobsirat/Desktop/MusicAI/Music_RNN_RBM-master/parameter_checkpoints/pretrained.ckpt


Traceback (most recent call last):
  File "rnn_rbm_generate.py", line 12, in <module>
    import rnn_rbm
  File "/Users/humankhoobsirat/Desktop/MusicAI/Music_RNN_RBM-master/rnn_rbm.py", line 9, in <module>
    import midi_manipulation
  File "/Users/humankhoobsirat/Desktop/MusicAI/Music_RNN_RBM-master/midi_manipulation.py", line 1, in <module>
    import midi
  File "/usr/local/lib/python2.7/site-packages/midi/__init__.py", line 2, in <module>
    from .events import *
  File "/usr/local/lib/python2.7/site-packages/midi/events.py", line 27
    class AbstractEvent(object,metaclass=AutoRegister):
                                        ^
SyntaxError: invalid syntax
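The class AbstractEvent(object, metaclass=AutoRegister) line uses Python 3 class syntax, so the midi package installed under /usr/local/lib/python2.7 appears to be a Python 3 build or fork being imported from Python 2, which is what triggers the SyntaxError. A quick diagnostic (just a sketch) to see which package that interpreter is picking up:

python2 -c "import midi; print(midi.__file__)"

Installing the Python 2 compatible python-midi release into that interpreter, or running the project under the Python version the installed midi package targets, should resolve the error; which option fits depends on the rest of your environment.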
