
musegan's Introduction

MuseGAN

MuseGAN is a project on music generation. In a nutshell, we aim to generate polyphonic music of multiple tracks (instruments). The proposed models are able to generate music either from scratch, or by accompanying a track given a priori by the user.

We train the model with training data collected from Lakh Pianoroll Dataset to generate pop song phrases consisting of bass, drums, guitar, piano and strings tracks.

Sample results are available here.

Important Notes

  • The latest implementation is based on the network architectures presented in BinaryMuseGAN, where the temporal structure is handled by 3D convolutional layers. The advantage of this design is its smaller network size; the disadvantage is its reduced controllability, e.g., the ability to feed different latent variables to different measures or tracks.
  • The original code we used for running the experiments in the paper can be found in the v1 folder.
  • Looking for a PyTorch version? Check out this repository.

Prerequisites

Below we assume the working directory is the repository root.

Install dependencies

  • Using pipenv (recommended)

    Make sure pipenv is installed. (If not, simply run pip install pipenv.)

    # Install the dependencies
    pipenv install
    # Activate the virtual environment
    pipenv shell
  • Using pip

    # Install the dependencies
    pip install -r requirements.txt

Prepare training data

The training data is collected from Lakh Pianoroll Dataset (LPD), a new multitrack pianoroll dataset.

# Download the training data
./scripts/download_data.sh
# Store the training data to shared memory
./scripts/process_data.sh

You can also download the training data manually (train_x_lpd_5_phr.npz).

As pianoroll matrices are generally sparse, we store only the indices of the nonzero elements and the array shape in an npz file to save space, and later restore the original array. To save some training data in this format, simply run np.savez_compressed("data.npz", shape=data.shape, nonzero=data.nonzero()).
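
For clarity, here is a minimal sketch of both directions, assuming a boolean pianoroll array (the variable names are illustrative):

import numpy as np

# Example boolean pianoroll tensor (shape and values are illustrative only)
data = np.random.rand(4, 48, 84, 5) > 0.9

# Save only the array shape and the indices of the nonzero entries
np.savez_compressed("data.npz", shape=data.shape, nonzero=data.nonzero())

# Restore the dense array from the stored shape and indices
with np.load("data.npz") as f:
    restored = np.zeros(f["shape"], dtype=bool)
    restored[tuple(f["nonzero"])] = True

assert np.array_equal(data, restored)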

Scripts

We provide several shell scripts for easily managing the experiments. (See here for detailed documentation.)

Below we assume the working directory is the repository root.

Train a new model

  1. Run the following command to set up a new experiment with default settings.

    # Set up a new experiment
    ./scripts/setup_exp.sh "./exp/my_experiment/" "Some notes on my experiment"
  2. Modify the configuration and model parameter files for your experimental settings (see the illustrative excerpt after this list).

  3. You can either train the model:

    # Train the model
    ./scripts/run_train.sh "./exp/my_experiment/" "0"

    or run the experiment (training + inference + interpolation):

    # Run the experiment
    ./scripts/run_exp.sh "./exp/my_experiment/" "0"
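
The snippet below is an illustrative excerpt of the kind of training settings found in config.yaml; the values shown match those used by the default experiment, but treat this as an example rather than a recommendation, and consult the config.yaml and params.yaml generated in your experiment directory for the full set of options.

# Illustrative excerpt of config.yaml (training-related settings)
batch_size: 64
steps: 50000
gan_loss_type: wasserstein
n_dis_updates_per_gen_update: 5
initial_learning_rate: 0.001
save_checkpoint_steps: 10000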

Collect training data

Run the following command to collect training data from MIDI files.

# Collect training data
./scripts/collect_data.sh "./midi_dir/" "data/train.npy"
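
For a rough picture of what such a conversion involves, the sketch below parses MIDI files with the older (pre-1.0) pypianoroll API and stacks the resulting pianorolls. This is not the collect_data.sh pipeline itself; the track ordering, beat resolution, and cropping used by the script may differ.

import os
import numpy as np
from pypianoroll import Multitrack

midi_dir = "./midi_dir/"  # directory of input MIDI files
pianorolls = []
for name in sorted(os.listdir(midi_dir)):
    if not name.lower().endswith((".mid", ".midi")):
        continue
    multitrack = Multitrack(os.path.join(midi_dir, name))  # parse the MIDI file
    pianorolls.append(multitrack.get_stacked_pianoroll())  # (time, pitch, track)

# Stacking assumes equal-length, equal-track pianorolls; real preprocessing
# would crop or pad them to the fixed data shape the model expects.
np.save("data/train.npy", np.stack(pianorolls))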

Use pretrained models

  1. Download pretrained models

    # Download the pretrained models
    ./scripts/download_models.sh

    You can also download the pretrained models manually (pretrained_models.tar.gz).

  2. You can either perform inference from a trained model:

    # Run inference from a pretrained model
    ./scripts/run_inference.sh "./exp/default/" "0"

    or perform interpolation from a trained model:

    # Run interpolation from a pretrained model
    ./scripts/run_interpolation.sh "./exp/default/" "0"

Outputs

By default, samples will be generated alongside the training. You can disable this behavior by setting save_samples_steps to zero in the configuration file (config.yaml). The generated samples will be stored in the following three formats by default.

  • .npy: raw numpy arrays
  • .png: image files
  • .npz: multitrack pianoroll files that can be loaded by the Pypianoroll package

You can disable saving in a specific format by setting save_array_samples, save_image_samples and save_pianoroll_samples to False in the configuration file.
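
For example, the relevant entries in config.yaml look roughly like the following illustrative excerpt (the key names appear in the default configuration):

# Sample-saving options in config.yaml (illustrative excerpt)
save_samples_steps: 100        # set to 0 to disable sample generation during training
save_array_samples: True       # save .npy outputs
save_image_samples: True       # save .png outputs
save_pianoroll_samples: True   # save .npz outputs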

The generated pianorolls are stored in .npz format to save space and processing time. You can use the following code to write them into MIDI files.

from pypianoroll import Multitrack

# Load the saved multitrack pianoroll (.npz) and write it out as a MIDI file
m = Multitrack('./test.npz')
m.write('./test.mid')
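
Note that the snippet above assumes an older pypianoroll release, where the Multitrack constructor accepts a path. With pypianoroll 1.0 or later the loading API changed, and something along the following lines may be needed instead (an untested sketch, to be checked against the version you have installed):

import pypianoroll

# pypianoroll >= 1.0: load() reads the .npz multitrack; write() saves a MIDI file
m = pypianoroll.load('./test.npz')
m.write('./test.mid')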

Sample Results

Some sample results can be found in the ./exp/ directory. More samples are available for download.

Citing

Please cite the following paper if you use the code provided in this repository.

Hao-Wen Dong*, Wen-Yi Hsiao*, Li-Chia Yang and Yi-Hsuan Yang, "MuseGAN: Multi-track Sequential Generative Adversarial Networks for Symbolic Music Generation and Accompaniment," AAAI Conference on Artificial Intelligence (AAAI), 2018. (*equal contribution)
[homepage] [arXiv] [paper] [slides] [code]

Papers

MuseGAN: Multi-track Sequential Generative Adversarial Networks for Symbolic Music Generation and Accompaniment
Hao-Wen Dong*, Wen-Yi Hsiao*, Li-Chia Yang and Yi-Hsuan Yang (*equal contribution)
AAAI Conference on Artificial Intelligence (AAAI), 2018.
[homepage] [arXiv] [paper] [slides] [code]

Convolutional Generative Adversarial Networks with Binary Neurons for Polyphonic Music Generation
Hao-Wen Dong and Yi-Hsuan Yang
International Society for Music Information Retrieval Conference (ISMIR), 2018.
[homepage] [video] [paper] [slides] [slides (long)] [poster] [arXiv] [code]

MuseGAN: Demonstration of a Convolutional GAN Based Model for Generating Multi-track Piano-rolls
Hao-Wen Dong*, Wen-Yi Hsiao*, Li-Chia Yang and Yi-Hsuan Yang (*equal contribution)
ISMIR Late-Breaking Demos, 2017.
[paper] [poster]

musegan's People

Contributors

dependabot[bot], fjunqueira, george-ogden, nicholaschiang, razzaghnoori, salu133445, tai271828, ttorkar, vishwanath1306, wayne391


musegan's Issues

How to use the pre-trained models?

I am unable to figure out how to generate music with the help of the pre-trained models by looking at the code. Can anyone tell me what changes I have to make in the code for that?

How to make track conditional model work?

Hi!
I wanted to know the steps to make the track-conditional model work: if I provide one instrument's melody of my own, how will the model generate new music around it?
Or is there a way it could generate new music around a multi-track melody?
Thanks.

Dataset confusions

What is the difference between x_lpd-5_phr_v2.npy, train_x_lpd_5_phr.npz, lastfm_alternative_5b_phrase.npy and lastfm_alternative_5b_phrase.npy? I can no longer find the first one (x_lpd-5_phr_v2.npy) anywhere.

What TensorFlow version should be installed to run main.py?

Hello, has anyone run the code successfully? My environment has tf == 1.2.0. When I run main.py, I get the following error; maybe it is the wrong TF version. Thanks a lot!

AttributeError: module 'tensorflow.python.ops.nn' has no attribute 'leaky_relu'

Issues with the dataset for data cleaning

In the setting.yaml used for preprocessing, I saw lmd_dir and lpd_dir. I'm confused about them; what should I do to prepare the MIDI data for data cleaning?
Thank you very much.

Can't run it

I want to run it, but when I ran main.py it said:
Traceback (most recent call last):
File "main.py", line 6, in <module>
from config import CONFIG
File "/media/gfq/dataset/museGAN/musegan/config.py", line 306, in <module>
SETUP['preset_d'])))
File "/usr/lib/python2.7/importlib/__init__.py", line 37, in import_module
__import__(name)
ImportError: No module named discriminator.proposed

Preprocessing of the training dataset

Hi, I don't know how to get 'tra_X_phrase_all' in main.py. I ran the preprocessing code and only got many npz files. What should I do to get the 'tra_X_phrase_all' file?
Thank you very much!

KeyError: 'is_accompaniment'

$ ./scripts/run_inference.sh "./exp/default/" "0"
musegan.inference    INFO     Using parameters:
{'beat_resolution': 12,
 'data_shape': [4, 48, 84, 5],
 'is_conditional': False,
 'latent_dim': 128,
 'nets': {'discriminator': 'default', 'generator': 'default'},
 'use_binary_neurons': False}
musegan.inference    INFO     Using configurations:
{'adam': {'beta1': 0.5, 'beta2': 0.9},
 'batch_size': 64,
 'checkpoint_dir': './exp/default//model',
 'colormap': [[1.0, 0.0, 0.0],
              [1.0, 0.5, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.5, 1.0]],
 'columns': 5,
 'config': './exp/default//config.yaml',
 'data_filename': 'train_x_lpd_5_phr',
 'data_root': None,
 'data_source': 'sa',
 'evaluate_steps': 100,
 'gan_loss_type': 'wasserstein',
 'gpu': '0',
 'initial_learning_rate': 0.001,
 'learning_rate_schedule': {'end': 50000, 'end_value': 0.0, 'start': 45000},
 'log_loss_steps': 100,
 'lower': -2,
 'midi': {'is_drums': [1, 0, 0, 0, 0],
          'lowest_pitch': 24,
          'programs': [0, 0, 25, 33, 48],
          'tempo': 100},
 'n_dis_updates_per_gen_update': 5,
 'n_jobs': 20,
 'params': './exp/default//params.yaml',
 'result_dir': './exp/default//results/inference',
 'rows': 5,
 'runs': 10,
 'sample_grid': [8, 8],
 'save_array_samples': True,
 'save_checkpoint_steps': 10000,
 'save_image_samples': True,
 'save_pianoroll_samples': True,
 'save_samples_steps': 100,
 'save_summaries_steps': 0,
 'slope_schedule': {'end': 50000, 'end_value': 5.0, 'start': 10000},
 'steps': 50000,
 'upper': 2,
 'use_gradient_penalties': True,
 'use_learning_rate_decay': True,
 'use_random_transpose': False,
 'use_slope_annealing': False,
 'use_train_test_split': False}
musegan.model        INFO     Building model.
Traceback (most recent call last):
  File "/data00/home/zhangyonghui.98k/musegan/scripts/../src/inference.py", line 163, in <module>
    main()
  File "/data00/home/zhangyonghui.98k/musegan/scripts/../src/inference.py", line 100, in main
    model = Model(params)
  File "/data00/home/zhangyonghui.98k/musegan/src/musegan/model.py", line 37, in __init__
    if params['is_accompaniment']:
KeyError: 'is_accompaniment'

Cannot download the training data set

The paper is very good and I want to learn from it. However, I tried to download the training data through the scripts and it failed because the network is unreachable. I then tried to download it through the links https://drive.google.com/uc?export=download&id=1F7J5n9uOPqViBYpoPT5GvE4PjCWhOyWc and https://drive.google.com/uc?export=download&id=1x3CeSqE6ElWa6V7ueNl8FKPFmMoyu4ED, but it still doesn't work because of the network, even though I have tried several VPNs. Could you upload it to Baidu Netdisk or some other available service so that I can download it? Thank you a lot!

Bus Error : totalMemory: 11.17GiB freeMemory: 11.10GiB

I'm getting a memory bus error when trying to load train_x_lpd_5_phr.npz, even when attempting to load a pretrained model.

musegan.interpolation INFO     Using parameters:
{'beat_resolution': 12,
 'condition_track_idx': 3,
 'data_shape': [4, 48, 84, 5],
 'is_accompaniment': True,
 'is_conditional': False,
 'latent_dim': 128,
 'nets': {'discriminator': 'default', 'generator': 'accompaniment'},
 'use_binary_neurons': False}
musegan.interpolation INFO     Using configurations:
{'adam': {'beta1': 0.5, 'beta2': 0.9},
 'batch_size': 64,
 'checkpoint_dir': './musegan/exp/accompaniment/bass/model',
 'colormap': [[1.0, 0.0, 0.0],
              [1.0, 0.5, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.5, 1.0]],
 'columns': 5,
 'config': './musegan/exp/accompaniment/bass/config.yaml',
 'data_filename': 'train_x_lpd_5_phr',
 'data_root': None,
 'data_source': 'sa',
 'evaluate_steps': 100,
 'gan_loss_type': 'wasserstein',
 'gpu': '0',
 'initial_learning_rate': 0.001,
 'learning_rate_schedule': {'end': 50000, 'end_value': 0.0, 'start': 45000},
 'log_loss_steps': 100,
 'lower': 0.0,
 'midi': {'is_drums': [1, 0, 0, 0, 0],
          'lowest_pitch': 24,
          'programs': [0, 0, 25, 33, 48],
          'tempo': 100},
 'mode': 'lerp',
 'n_dis_updates_per_gen_update': 5,
 'n_jobs': 20,
 'params': './musegan/exp/accompaniment/bass/params.yaml',
 'result_dir': './musegan/exp/accompaniment/bass/results/interpolation',
 'rows': 5,
 'runs': 10,
 'sample_grid': [8, 8],
 'save_array_samples': True,
 'save_checkpoint_steps': 10000,
 'save_image_samples': True,
 'save_pianoroll_samples': True,
 'save_samples_steps': 100,
 'save_summaries_steps': 0,
 'slope_schedule': {'end': 50000, 'end_value': 5.0, 'start': 10000},
 'steps': 50000,
 'upper': 1.0,
 'use_gradient_penalties': True,
 'use_learning_rate_decay': True,
 'use_random_transpose': False,
 'use_slope_annealing': False,
 'use_train_test_split': False}
musegan.model        INFO     Building model.
musegan.model        INFO     Building training nodes.
musegan.model        INFO     Building losses.
musegan.model        INFO     Building training ops.
musegan.model        INFO     Building summaries.
musegan.model        INFO     Building prediction nodes.
2018-11-05 22:08:23.124707: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:964] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2018-11-05 22:08:23.125223: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1432] Found device 0 with properties: 
name: Tesla K80 major: 3 minor: 7 memoryClockRate(GHz): 0.8235
pciBusID: 0000:00:04.0
totalMemory: 11.17GiB freeMemory: 11.10GiB
2018-11-05 22:08:23.125265: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1511] Adding visible gpu devices: 0
2018-11-05 22:08:23.549934: I tensorflow/core/common_runtime/gpu/gpu_device.cc:982] Device interconnect StreamExecutor with strength 1 edge matrix:
2018-11-05 22:08:23.550032: I tensorflow/core/common_runtime/gpu/gpu_device.cc:988]      0 
2018-11-05 22:08:23.550059: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 0:   N 
2018-11-05 22:08:23.550423: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 10758 MB memory) -> physical GPU (device: 0, name: Tesla K80, pci bus id: 0000:00:04.0, compute capability: 3.7)
musegan.interpolation INFO     Restoring the latest checkpoint.
INFO:tensorflow:Restoring parameters from /content/musegan/exp/accompaniment/bass/model/model.ckpt-300450
tensorflow           INFO     Restoring parameters from /content/musegan/exp/accompaniment/bass/model/model.ckpt-300450
./musegan/scripts/run_interpolation.sh: line 24:   694 Bus error               (core dumped) python3 "$DIR/../src/interpolation.py" --checkpoint_dir "$1/model" --result_dir "$1/results/interpolation" --params "$1/params.yaml" --config "$1/config.yaml" --lower 0.0 --upper 1.0 --runs 10 --gpu "$gpu"

Training results

Hello, how can I achieve the same results as on your official website? My current training results are not continuous in time. Can you help me? What should I do? Looking forward to your reply. Thanks.

Track Conditional Generation issue

The model for track-conditional generation requires an encoder with the bar generator, as present in v1. Can you please suggest how to do this with the present MuseGAN?

Why do my evaluation indicators show a "nan" value?

I ran your MuseGAN code for a total of 10 rounds. The results are not particularly good. Is this normal?

At the same time, I loaded the indicator matrix and wanted to see the indicator scores, but "nan" values appeared.

I have also sent some questions to your Google mailbox.

The loaded indicator content is as follows (is this normal?):

{'score_matrix_mean': array([[ 0.6484375 , 0.6796875 , 0. , 0. ,
0.3125 , 0.7578125 , 0.8203125 , 0.8515625 ],
[ 5.33333333, 13.56097561, 9.3046875 , 4.53125 ,
12.71590909, 12.12903226, 10.34782609, 10.63157895],
[ nan, 0.56565087, 0.42536851, 0.48969937,
0.4198412 , 0.39742527, 0.13204621, 0.33636354],
[ nan, 0.66996951, 0.52701823, 0.13850911,
0.52864583, 0.30510753, 0.19157609, 0.30921053],
[ nan, 0.73237388, 0.7759802 , 0.71933473,
0.75686481, 0.84366099, 0.90060636, 0.94346961],
[ 0.52668201, nan, nan, nan,
nan, nan, nan, nan],
[ nan, 4.2195122 , 4.125 , 2.40625 ,
5.31818182, 3.03225806, 4.86956522, 3. ]]), 'score_pair_matrix_mean': array([ 1.18790474])}

I have some questions about your paper.

In your paper "MuseGAN: Multi-track Sequential Generative Adversarial Networks for Symbolic Music Generation and Accompaniment", Figure 5 shows only the hybrid model from Figure 3; the other two models are not drawn.
Is my understanding correct?

Batch size can't be changed?

When I change the batch size, the program reports a data reshape error.
When I use the default batch size, it runs fine.

I have sent you a message.

Why the NOT FOUND ERROR?

[] Initializing variables...
[] Loading checkpoint...
Traceback (most recent call last):
File "main.py", line 123, in
main()
File "main.py", line 62, in main
gan.load_latest(CONFIG['exp']['pretrained_dir'])
File "/root/chy/musegan-master/musegan/model.py", line 150, in load_latest
checkpoint_path = tf.train.latest_checkpoint(checkpoint_dir)
File "/root/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 1857, in latest_checkpoint
if file_io.get_matching_files(v2_path) or file_io.get_matching_files(
File "/root/anaconda3/lib/python3.6/site-packages/tensorflow/python/lib/io/file_io.py", line 337, in get_matching_files
for single_filename in filename
File "/root/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/errors_impl.py", line 519, in exit
c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.NotFoundError: /home/salu133445/NAS/salu133445/git/musegan/exp/musegan/lastfm_alternative_g_composer_d_proposed/checkpoints; No such file or directory

Which genres did you use for the rock tag?

http://www.ifs.tuwien.ac.at/mir/msd/
I took the genre tags of the LMD, but I don't know which kinds of music you classify as rock.

In my opinion, rock includes "Grunge Emo", "Metal Alternative", "Metal Death", "Metal Heavy", "Punk", "Reggae", "Rock Alternative", "Rock College", "Rock Contemporary", "Rock Hard" and "Rock Neo Psychedelia".

All types are as follows:

Genre Name Number of Tracks
Big Band 3,115
Blues Contemporary 6,874
Country Traditional 11,164
Dance 15,114
Electronica 10,987
Experimental 12,139
Folk International 9,849
Gospel 6,974
Grunge Emo 6,256
Hip Hop Rap 16,100
Jazz Classic 10,024
Metal Alternative 14,009
Metal Death 9,851
Metal Heavy 10,784
Pop Contemporary 13,624
Pop Indie 18,138
Pop Latin 7,699
Punk 9,610
Reggae 5,232
RnB Soul 6,238
Rock Alternative 12,717
Rock College 16,575
Rock Contemporary 16,530
Rock Hard 13,276
Rock Neo Psychedelia 11,057

What do you mean by "time-dependent random vectors" in the paper?

Hi, Wendong,

I'm quite interested in your work of automatic music generation.
And our lab in NUS is also conducting research on music computing.

Basically, I understand the framework; I'm just a little confused about
the term "time-dependent random vectors"...

It sounds to me like a random vector generated with time as a parameter,
but could I ask for some details or the basic idea of how to generate time-dependent random numbers?
Or could you simply point out in which source file I can find the code for this process?

Because long-term structure is really important in music generation, I think time-dependent
random vectors may be highly related to it.

Look forward to hearing from you.
Thanks a lot!

Xichu

How to listen to the results?

The results of the model are .npy and image files.

I cannot find any audio files such as .mp3 or .mid.

So how can I listen to the resulting audio?

File missing in discriminator folder

ImportError: No module named discriminator.proposed
As raised by someone previously, this issue is due to the fact that "__init__.py" is missing from the "REPO_DIR/musegan/musegan/presets/discriminator" directory.
Please add that file to help others avoid the error :)

Need for a pretrained directory to run on Windows

Hi,
I have trained the model for several epochs, yet the results are not satisfying, so could you share your pretrained directory? I am a Windows user and am not able to run the bash file for downloading the pretrained directory.
Thanks.

Track-conditional generation

Hello, I'm quite interested in your work.
I'm confused about track-conditional generation. My understanding is that the bars produced now are determined by the previous bars, but I didn't find the relevant code. I'm also confused about config.acc_idx; what does it mean? Looking forward to your reply. Thanks.

the loss is negative

---exps/temporal_hybrid_GPU_1--- epoch: 0 | batch: 1/ 785 | time: 36.23
D loss: 15880.42, G loss: -241.65

I trained on your dataset using the main.py code, but the loss is negative. Is anything wrong?

Issue of Pretrained Model

Hi,
I want to use the pretrained model to reproduce the results, but when I load the pretrained model,
this error happens: NotFoundError: Key GAN/G/bar_main_0/Layer_1/batch_norm/beta not found in checkpoint. Is the pretrained model compatible with the latest version of the code? If not, does a compatible version exist in this repository?

Error: no module named SharedArray

The full error is below:

Traceback (most recent call last):
File "main.py", line 8, in
'.'.join(('musegan', CONFIG['exp']['model'], 'models')))
File "C:\Program Files (x86)\Microsoft Visual Studio\Shared\Anaconda3_64\lib\importlib_init_.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "", line 994, in _gcd_import
File "", line 971, in _find_and_load
File "", line 955, in _find_and_load_unlocked
File "", line 665, in _load_unlocked
File "", line 678, in exec_module
File "", line 219, in _call_with_frames_removed
File "C:\Users\PU40005904\Downloads\musegan-master\musegan-master\musegan\musegan\models.py", line 9, in
from musegan.utils.metrics import Metrics
File "C:\Users\PU40005904\Downloads\musegan-master\musegan-master\musegan\utils\metrics.py", line 7, in
import SharedArray as sa
ModuleNotFoundError: No module named 'SharedArray'

SharedArray is not installed when pip install sharedarray is run; the error is:

Failed building wheel for sharedarray
Running setup.py clean for sharedarray
Failed to build sharedarray
Installing collected packages: sharedarray
Running setup.py install for sharedarray ... error
Complete output from command "c:\program files (x86)\microsoft visual studio\shared\anaconda3_64\python.exe" -u -c "import setuptools, tokenize;file='C:\Users\PU40005904\AppData\Local\Temp\pip-install-6nedy54o\sharedarray\setup.py';f=getattr(tokenize, 'open', open)(file);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, file, 'exec'))" install --record C:\Users\PU40005904\AppData\Local\Temp\pip-record-n229w795\install-record.txt --single-version-externally-managed --compile:
running install
running build
running build_ext
building 'SharedArray' extension
creating build
creating build\temp.win-amd64-3.6
creating build\temp.win-amd64-3.6\Release
creating build\temp.win-amd64-3.6\Release\src
C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Tools\MSVC\14.14.26428\bin\HostX86\x64\cl.exe /c /nologo /Ox /W3 /GL /DNDEBUG /MD "-Ic:\program files (x86)\microsoft visual studio\shared\anaconda3_64\lib\site-packages\numpy\core\include" "-Ic:\program files (x86)\microsoft visual studio\shared\anaconda3_64\include" "-Ic:\program files (x86)\microsoft visual studio\shared\anaconda3_64\include" "-IC:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Tools\MSVC\14.14.26428\ATLMFC\include" "-IC:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Tools\MSVC\14.14.26428\include" "-IC:\Program Files (x86)\Windows Kits\NETFXSDK\4.6.1\include\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.17134.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.17134.0\shared" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.17134.0\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.17134.0\winrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.17134.0\cppwinrt" /Tc.\src\map_owner.c /Fobuild\temp.win-amd64-3.6\Release.\src\map_owner.obj
map_owner.c
.\src\map_owner.c(19): fatal error C1083: Cannot open include file: 'sys/mman.h': No such file or directory
error: command 'C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Tools\MSVC\14.14.26428\bin\HostX86\x64\cl.exe' failed with exit status 2

----------------------------------------

Command ""c:\program files (x86)\microsoft visual studio\shared\anaconda3_64\python.exe" -u -c "import setuptools, tokenize;file='C:\Users\PU40005904\AppData\Local\Temp\pip-install-6nedy54o\sharedarray\setup.py';f=getattr(tokenize, 'open', open)(file);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, file, 'exec'))" install --record C:\Users\PU40005904\AppData\Local\Temp\pip-record-n229w795\install-record.txt --single-version-externally-managed --compile" failed with error code 1 in C:\Users\PU40005904\AppData\Local\Temp\pip-install-6nedy54o\sharedarray\

If someone has a way to make this work, please help.
thanks.

Why does training take so long on a GTX 1080 GPU?

Hi, I have been training the MuseGAN hybrid model on a GTX 1080 GPU for a month, but it has not finished yet; it has only iterated through 6 epochs so far, which is very different from what you described in the article. Is something wrong with my setup? In addition, the G loss is negative,
like these:
6 | 68/ 429 | 833.89 | 64.655495 | -16.210884
6 | 69/ 429 | 837.64 | 49.892788 | -1.440047
6 | 70/ 429 | 852.82 | 56.134148 | -18.617529
6 | 71/ 429 | 847.19 | 53.495033 | -15.782113
6 | 72/ 429 | 845.77 | 48.307137 | 4.247076

Evaluation model

Hi,
I have trained the model and now I just want to run the evaluation again and again to generate new music.
I have seen that even when I point to the pretrained directory, it trains the model again and then produces the results.
Is there a way to run only the evaluation and produce new music based on the model learned in the pretrained directory, i.e., only the MIDI file is output and no training is performed?

Thanks.

Confidence score

Can somebody tell me which field in the meta info is treated as the 'confidence score' in this work (footnote 7 in the paper)? I can see fields like bars_confidence/beats_confidence/sections_confidence in the meta info from LMD, but I am not sure which one you are using.

About pretrained model

  1. About the pretrained model
    I use the pretrained model lastfm_alternative_g_hybrid_d_proposed, but its structure seems to differ from the code structure, so the pretrained model cannot be used.

Caused by op 'GAN/save/RestoreV2', defined at:
File "main.py", line 125, in
main()
File "main.py", line 57, in main
gan = MODELS.GAN(sess, CONFIG['model'])
File "/home/nina.cheng/musegan/musegan/musegan/models.py", line 19, in init
self.build()
File "/home/nina.cheng/musegan/musegan/musegan/models.py", line 92, in build
self.saver = tf.train.Saver()
File "/home/nina.cheng/.pyenv/versions/py3_tensorflow/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 1284, in init
self.build()
File "/home/nina.cheng/.pyenv/versions/py3_tensorflow/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 1296, in build
self._build(self._filename, build_save=True, build_restore=True)
File "/home/nina.cheng/.pyenv/versions/py3_tensorflow/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 1333, in _build
build_save=build_save, build_restore=build_restore)
File "/home/nina.cheng/.pyenv/versions/py3_tensorflow/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 781, in _build_internal
restore_sequentially, reshape)
File "/home/nina.cheng/.pyenv/versions/py3_tensorflow/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 400, in _AddRestoreOps
restore_sequentially)
File "/home/nina.cheng/.pyenv/versions/py3_tensorflow/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 832, in bulk_restore
return io_ops.restore_v2(filename_tensor, names, slices, dtypes)
File "/home/nina.cheng/.pyenv/versions/py3_tensorflow/lib/python3.6/site-packages/tensorflow/python/ops/gen_io_ops.py", line 1463, in restore_v2
shape_and_slices=shape_and_slices, dtypes=dtypes, name=name)
File "/home/nina.cheng/.pyenv/versions/py3_tensorflow/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
op_def=op_def)
File "/home/nina.cheng/.pyenv/versions/py3_tensorflow/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 3414, in create_op
op_def=op_def)
File "/home/nina.cheng/.pyenv/versions/py3_tensorflow/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1740, in init
self._traceback = self._graph._extract_stack() # pylint: disable=protected-access

NotFoundError (see above for traceback): Key GAN/G/bar_main_0/Layer_1/batch_norm/beta not found in checkpoint
[[Node: GAN/save/RestoreV2 = RestoreV2[dtypes=[DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, ..., DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_INT32], _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_GAN/save/Const_0_0, GAN/save/RestoreV2/tensor_names, GAN/save/RestoreV2/shape_and_slices)]]
[[Node: GAN/save/RestoreV2/_169 = _Recvclient_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device_incarnation=1, tensor_name="edge_170_GAN/save/RestoreV2", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:GPU:0"]]

  2. About training warnings
    When I train the model, it produces some warnings like this:

/home/nina.cheng/musegan/musegan/utils/metrics.py:92: RuntimeWarning: invalid value encountered in true_divide
chroma1 = chroma1 / np.sum(chroma1)
/home/nina.cheng/musegan/musegan/utils/metrics.py:94: RuntimeWarning: invalid value encountered in true_divide
chroma2 = chroma2 / np.sum(chroma2)
/home/nina.cheng/musegan/musegan/utils/metrics.py:92: RuntimeWarning: invalid value encountered in true_divide
chroma1 = chroma1 / np.sum(chroma1)
/home/nina.cheng/musegan/musegan/utils/metrics.py:94: RuntimeWarning: invalid value encountered in true_divide
chroma2 = chroma2 / np.sum(chroma2)
What does it mean? Does it matter?

ValueError: `pianorolls` and `program_nums` must have the samelength

I am trying to train with lastfm_alternative_5b_phrase.npy as my dataset, and config.py is all set to the right values. However, I got this ValueError while executing main.py. Can anyone help me? Thanks.

Traceback (most recent call last):
File "main.py", line 124, in
main()
File "main.py", line 66, in main
gan.train(x_train, CONFIG['train'])
File "/home/allenpeng0209/musegan/musegan/musegan/models.py", line 113, in train
self.save_samples('x_train', x_train, save_midi=True)
File "/home/allenpeng0209/musegan/musegan/model.py", line 171, in save_samples
midi_io.save_midi(midipath, binarized, self.config)
File "/home/allenpeng0209/musegan/musegan/utils/midi_io.py", line 84, in save_midi
tempo=config['tempo'])
File "/home/allenpeng0209/musegan/musegan/utils/midi_io.py", line 36, in write_midi
raise ValueError("pianorolls and program_nums must have the same"
ValueError: pianorolls and program_nums must have the samelength

Best way to generate track conditioned output

Given all the different versions and sets of training data, what do you see as the fastest way to get a trained model running that supports track conditioning? This could include training a model, or even implementing track conditioning into one of the newer versions. Whatever you think might be fastest. Thanks!

FileNotFoundError: [Errno 2] No such file or directory

I found the project very interesting and I want to run it to learn. I tried to run main.py but I came across this error: FileNotFoundError: [Errno 2] No such file or directory: 'lastfm_alternative_8b_phrase'.

I downloaded the lastfm_alternative_8b_phrase.npy file.
I put it in the training_data folder and also inside the musegan folder. Then I tried entering the full path of the file.
I also removed the .npy extension, but it still did not work. What am I doing wrong?

Thank you!
