
note-seq's Introduction

Status

This repository is currently inactive and serves only as a supplement to some of our papers. We have transitioned to using individual repositories for new projects. For our current work, see the Magenta website and Magenta GitHub Organization.

Magenta

Build Status PyPI version

Magenta is a research project exploring the role of machine learning in the process of creating art and music. Primarily this involves developing new deep learning and reinforcement learning algorithms for generating songs, images, drawings, and other materials. But it's also an exploration in building smart tools and interfaces that allow artists and musicians to extend (not replace!) their processes using these models. Magenta was started by some researchers and engineers from the Google Brain team, but many others have contributed significantly to the project. We use TensorFlow and release our models and tools as open source on GitHub. If you'd like to learn more about Magenta, check out our blog, where we post technical details. You can also join our discussion group.

This is the home for our Python TensorFlow library. To use our models in the browser with TensorFlow.js, head to the Magenta.js repository.

Getting Started

Take a look at our colab notebooks for various models, including one on getting started. Magenta.js is also a good resource for models and demos that run in the browser. This and more, including blog posts and Ableton Live plugins, can be found at https://magenta.tensorflow.org.

Magenta Repo

Installation

Magenta maintains a pip package for easy installation. We recommend using Anaconda to install it, but it can work in any standard Python environment. We support Python 3 (>= 3.5). These instructions will assume you are using Anaconda.

Automated Install (w/ Anaconda)

If you are running Mac OS X or Ubuntu, you can try using our automated installation script. Just paste the following commands into your terminal.

curl https://raw.githubusercontent.com/tensorflow/magenta/main/magenta/tools/magenta-install.sh > /tmp/magenta-install.sh
bash /tmp/magenta-install.sh

After the script completes, open a new terminal window so the environment variable changes take effect.

The Magenta libraries are now available for use within Python programs and Jupyter notebooks, and the Magenta scripts are installed in your path!

Note that you will need to run source activate magenta to use Magenta every time you open a new terminal window.

Manual Install (w/o Anaconda)

If the automated script fails for any reason, or you'd prefer to install by hand, follow these steps.

Install the Magenta pip package:

pip install magenta

NOTE: In order to install the rtmidi package that we depend on, you may need to install headers for some sound libraries. On Ubuntu Linux, this command should install the necessary packages:

sudo apt-get install build-essential libasound2-dev libjack-dev portaudio19-dev

On Fedora Linux, use

sudo dnf group install "C Development Tools and Libraries"
sudo dnf install SAASound-devel jack-audio-connection-kit-devel portaudio-devel

The Magenta libraries are now available for use within Python programs and Jupyter notebooks, and the Magenta scripts are installed in your path!
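To confirm the installation is visible to your Python environment, a quick sanity check can help. This is a generic import test, not an official Magenta step, and it is safe to run whether or not the install succeeded:

```python
# Quick post-install sanity check: if the import succeeds, the pip
# package is visible to this Python interpreter.
try:
    import magenta  # noqa: F401
    status = "installed"
except ImportError:
    status = "not installed"

print("magenta is", status)
```

If this prints "not installed" inside a fresh terminal, remember to activate the conda environment first (source activate magenta).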

Using Magenta

You can now train our various models and use them to generate music, audio, and images. You can find instructions for each of the models by exploring the models directory.

Development Environment

If you want to develop on Magenta, you'll need to set up the full Development Environment.

First, clone this repository:

git clone https://github.com/tensorflow/magenta.git

Next, install the dependencies by changing to the base directory and executing the setup command:

pip install -e .

You can now edit the files and run scripts by calling Python as usual. For example, this is how you would run the melody_rnn_generate script from the base directory:

python magenta/models/melody_rnn/melody_rnn_generate --config=...

You can also install the (potentially modified) package with:

pip install .

Before creating a pull request, please also test your changes with:

pip install pytest-pylint
pytest

PIP Release

To build a new version for pip, bump the version and then run:

python setup.py test
python setup.py bdist_wheel --universal
twine upload dist/magenta-N.N.N-py2.py3-none-any.whl

note-seq's People

Contributors

adarob, cghawthorne, christhetree, cifkao, daphnei, davidprimor, derwaldi, douglaseck, dpspe, ekelsen, falaktheoptimist, hardmaru, hawkinsp, iansimon, icoxfog417, jesseengel, jsawruk, karimnosseir, kmalta, miaout17, notwaldorf, olaviinha, qlzh727, ringw, sun51, tetromino, vibertthio, vidavakil, yilei


note-seq's Issues

Removing tempo changes?

Hi!

I have been using note_seq for quite a long time. Great work! I am very happy!

I went through the code, looking for a piece of functionality, but I did not find anything.

Is there a way to remove all tempo changes and set a note_sequence to a fixed qpm?

The only thing I saw, and that I use, is the functionality to split a note_sequence in two every time a tempo change happens. This goes in the right direction, but there is still a long way to go.

Best,
Tristan
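There does not appear to be a single built-in call for this. Conceptually, collapsing all tempo changes to one fixed qpm means rescaling each note's times by the elapsed beats under the piecewise tempo map. The arithmetic can be sketched stand-alone with plain tuples; the functions and data layout below are illustrative, not the note-seq API:

```python
# Illustrative sketch: flatten tempo changes to a single fixed qpm.
# Notes are (pitch, start_sec, end_sec); tempos are (time_sec, qpm)
# sorted by time. This is NOT note-seq code, just the arithmetic.

def seconds_to_beats(t, tempos):
    """Convert an absolute time to a beat position under piecewise tempos."""
    beats = 0.0
    for i, (start, qpm) in enumerate(tempos):
        end = tempos[i + 1][0] if i + 1 < len(tempos) else t
        span = min(t, end) - start
        if span <= 0:
            break
        beats += span * qpm / 60.0
    return beats

def fix_qpm(notes, tempos, target_qpm):
    """Re-time notes so the whole piece plays at one constant qpm."""
    factor = 60.0 / target_qpm
    return [(p, seconds_to_beats(s, tempos) * factor,
                seconds_to_beats(e, tempos) * factor)
            for p, s, e in notes]

# One tempo change: 120 qpm for 1 s (2 beats), then 60 qpm.
tempos = [(0.0, 120.0), (1.0, 60.0)]
notes = [(60, 0.0, 1.0), (62, 1.0, 2.0)]
print(fix_qpm(notes, tempos, 120.0))  # second note shrinks to 0.5 s
```

Applied to a NoteSequence proto, the same idea would mean rewriting each note's start_time/end_time this way, clearing the tempos field, and adding back a single tempo at the target qpm.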

Permission denied opening temp file

Hi,

I'm just starting to play with the onsets_frames_transcription drums model. I am attempting to transcribe a single hit of a one-shot sample. While debugging, I can see a temporary file created in C:\Users\micha\AppData\Local\Temp (with zero bytes). However, the process cannot open that file. I'm running Python 3.8.3 under Windows 10. What am I doing wrong?

Thanks.

See below:

(base) C:\Users\micha>onsets_frames_transcription_transcribe --model_dir=E-GMD\e-gmd-v1.0.0\e-gmd_checkpoint --config=drums --load_audio_with_librosa "Tom Wave 3.wav"

2020-10-13 22:26:28.882284: W tensorflow/stream_executor/platform/default/dso_loader.cc:59] Could not load dynamic library 'cudart64_101.dll'; dlerror: cudart64_101.dll not found
2020-10-13 22:26:28.882475: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
c:\programdata\anaconda3\lib\site-packages\librosa\util\decorators.py:9: NumbaDeprecationWarning: An import was requested from a module that has moved location.
Import requested from: 'numba.decorators', please update to use 'numba.core.decorators' or pin to Numba version 0.48.0. This alias will not be present in Numba version 0.50.0.
from numba.decorators import jit as optional_jit
c:\programdata\anaconda3\lib\site-packages\librosa\util\decorators.py:9: NumbaDeprecationWarning: An import was requested from a module that has moved location.
Import of 'jit' requested from: 'numba.decorators', please update to use 'numba.core.decorators' or pin to Numba version 0.48.0. This alias will not be present in Numba version 0.50.0.
from numba.decorators import jit as optional_jit
WARNING:tensorflow:From c:\programdata\anaconda3\lib\site-packages\tensorflow\python\compat\v2_compat.py:96: disable_resource_variables (from tensorflow.python.ops.variable_scope) is deprecated and will be removed in a future version.
Instructions for updating:
non-resource variables are not supported in the long term
WARNING:tensorflow:AutoGraph could not transform <function preprocess_example at 0x000001FD33B21B80> and will run it as-is.
Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, export AUTOGRAPH_VERBOSITY=10) and attach the full output.
Cause: module 'gast' has no attribute 'Index'
To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert
W1013 22:26:33.223956 17772 ag_logging.py:146] AutoGraph could not transform <function preprocess_example at 0x000001FD33B21B80> and will run it as-is.
Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, export AUTOGRAPH_VERBOSITY=10) and attach the full output.
Cause: module 'gast' has no attribute 'Index'
To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert
WARNING:tensorflow:From c:\programdata\anaconda3\lib\site-packages\magenta\models\onsets_frames_transcription\data.py:134: py_func (from tensorflow.python.ops.script_ops) is deprecated and will be removed in a future version.
Instructions for updating:
tf.py_func is deprecated in TF V2. Instead, there are two
options available in V2.
- tf.py_function takes a python function which manipulates tf eager
tensors instead of numpy arrays. It's easy to convert a tf eager tensor to
an ndarray (just call tensor.numpy()) but having access to eager tensors
means tf.py_functions can use accelerators such as GPUs as well as
being differentiable using a gradient tape.
- tf.numpy_function maintains the semantics of the deprecated tf.py_func
(it is not differentiable, and manipulates numpy arrays). It drops the
stateful argument making all functions stateful.

W1013 22:26:33.223956 17772 deprecation.py:317] From c:\programdata\anaconda3\lib\site-packages\magenta\models\onsets_frames_transcription\data.py:134: py_func (from tensorflow.python.ops.script_ops) is deprecated and will be removed in a future version.
Instructions for updating:
tf.py_func is deprecated in TF V2. Instead, there are two
options available in V2.
- tf.py_function takes a python function which manipulates tf eager
tensors instead of numpy arrays. It's easy to convert a tf eager tensor to
an ndarray (just call tensor.numpy()) but having access to eager tensors
means tf.py_functions can use accelerators such as GPUs as well as
being differentiable using a gradient tape.
- tf.numpy_function maintains the semantics of the deprecated tf.py_func
(it is not differentiable, and manipulates numpy arrays). It drops the
stateful argument making all functions stateful.

WARNING:tensorflow:AutoGraph could not transform <function input_tensors_to_model_input at 0x000001FD33B22430> and will run it as-is.
Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, export AUTOGRAPH_VERBOSITY=10) and attach the full output.
Cause: module 'gast' has no attribute 'Index'
To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert
W1013 22:26:33.302016 17772 ag_logging.py:146] AutoGraph could not transform <function input_tensors_to_model_input at 0x000001FD33B22430> and will run it as-is.
Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, export AUTOGRAPH_VERBOSITY=10) and attach the full output.
Cause: module 'gast' has no attribute 'Index'
To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert
2020-10-13 22:26:33.343892: W tensorflow/stream_executor/platform/default/dso_loader.cc:59] Could not load dynamic library 'nvcuda.dll'; dlerror: nvcuda.dll not found
2020-10-13 22:26:33.344098: W tensorflow/stream_executor/cuda/cuda_driver.cc:312] failed call to cuInit: UNKNOWN ERROR (303)
2020-10-13 22:26:33.356562: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:169] retrieving CUDA diagnostic information for host: DESKTOP-H0T5JPS
2020-10-13 22:26:33.369591: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:176] hostname: DESKTOP-H0T5JPS
WARNING:tensorflow:From c:\programdata\anaconda3\lib\site-packages\magenta\models\onsets_frames_transcription\data.py:656: DatasetV1.output_shapes (from tensorflow.python.data.ops.dataset_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.compat.v1.data.get_output_shapes(dataset).
W1013 22:26:34.879807 17772 deprecation.py:317] From c:\programdata\anaconda3\lib\site-packages\magenta\models\onsets_frames_transcription\data.py:656: DatasetV1.output_shapes (from tensorflow.python.data.ops.dataset_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.compat.v1.data.get_output_shapes(dataset).
WARNING:tensorflow:From c:\programdata\anaconda3\lib\site-packages\magenta\models\onsets_frames_transcription\train_util.py:87: The name tf.estimator.tpu.RunConfig is deprecated. Please use tf.compat.v1.estimator.tpu.RunConfig instead.

W1013 22:26:34.973538 17772 module_wrapper.py:136] From c:\programdata\anaconda3\lib\site-packages\magenta\models\onsets_frames_transcription\train_util.py:87: The name tf.estimator.tpu.RunConfig is deprecated. Please use tf.compat.v1.estimator.tpu.RunConfig instead.

WARNING:tensorflow:From c:\programdata\anaconda3\lib\site-packages\magenta\models\onsets_frames_transcription\train_util.py:88: The name tf.estimator.tpu.TPUConfig is deprecated. Please use tf.compat.v1.estimator.tpu.TPUConfig instead.

W1013 22:26:34.989150 17772 module_wrapper.py:136] From c:\programdata\anaconda3\lib\site-packages\magenta\models\onsets_frames_transcription\train_util.py:88: The name tf.estimator.tpu.TPUConfig is deprecated. Please use tf.compat.v1.estimator.tpu.TPUConfig instead.

WARNING:tensorflow:From c:\programdata\anaconda3\lib\site-packages\magenta\models\onsets_frames_transcription\train_util.py:99: The name tf.estimator.tpu.TPUEstimator is deprecated. Please use tf.compat.v1.estimator.tpu.TPUEstimator instead.

W1013 22:26:35.004811 17772 module_wrapper.py:136] From c:\programdata\anaconda3\lib\site-packages\magenta\models\onsets_frames_transcription\train_util.py:99: The name tf.estimator.tpu.TPUEstimator is deprecated. Please use tf.compat.v1.estimator.tpu.TPUEstimator instead.

INFO:tensorflow:Using config: {'_model_dir': 'E-GMD\e-gmd-v1.0.0\e-gmd_checkpoint', '_tf_random_seed': None, '_save_summary_steps': 300, '_save_checkpoints_steps': 300, '_save_checkpoints_secs': None, '_session_config': allow_soft_placement: true
graph_options {
rewrite_options {
meta_optimizer_iterations: ONE
}
}
, '_keep_checkpoint_max': None, '_keep_checkpoint_every_n_hours': 1, '_log_step_count_steps': None, '_train_distribute': None, '_device_fn': None, '_protocol': None, '_eval_distribute': None, '_experimental_distribute': None, '_experimental_max_worker_delay_secs': None, '_session_creation_timeout_secs': 7200, '_service': None, '_cluster_spec': ClusterSpec({}), '_task_type': 'worker', '_task_id': 0, '_global_id_in_cluster': 0, '_master': '', '_evaluation_master': '', '_is_chief': True, '_num_ps_replicas': 0, '_num_worker_replicas': 1, '_tpu_config': TPUConfig(iterations_per_loop=300, num_shards=None, num_cores_per_replica=None, per_host_input_for_training=2, tpu_job_name=None, initial_infeed_sleep_secs=None, input_partition_dims=None, eval_training_input_configuration=2, experimental_host_call_every_n_steps=1), '_cluster': None}
I1013 22:26:35.020538 17772 estimator.py:191] Using config: {'_model_dir': 'E-GMD\e-gmd-v1.0.0\e-gmd_checkpoint', '_tf_random_seed': None, '_save_summary_steps': 300, '_save_checkpoints_steps': 300, '_save_checkpoints_secs': None, '_session_config': allow_soft_placement: true
graph_options {
rewrite_options {
meta_optimizer_iterations: ONE
}
}
, '_keep_checkpoint_max': None, '_keep_checkpoint_every_n_hours': 1, '_log_step_count_steps': None, '_train_distribute': None, '_device_fn': None, '_protocol': None, '_eval_distribute': None, '_experimental_distribute': None, '_experimental_max_worker_delay_secs': None, '_session_creation_timeout_secs': 7200, '_service': None, '_cluster_spec': ClusterSpec({}), '_task_type': 'worker', '_task_id': 0, '_global_id_in_cluster': 0, '_master': '', '_evaluation_master': '', '_is_chief': True, '_num_ps_replicas': 0, '_num_worker_replicas': 1, '_tpu_config': TPUConfig(iterations_per_loop=300, num_shards=None, num_cores_per_replica=None, per_host_input_for_training=2, tpu_job_name=None, initial_infeed_sleep_secs=None, input_partition_dims=None, eval_training_input_configuration=2, experimental_host_call_every_n_steps=1), '_cluster': None}
INFO:tensorflow:_TPUContext: eval_on_tpu False
I1013 22:26:35.020538 17772 tpu_context.py:217] _TPUContext: eval_on_tpu False
WARNING:tensorflow:From c:\programdata\anaconda3\lib\site-packages\magenta\models\onsets_frames_transcription\onsets_frames_transcription_transcribe.py:102: DatasetV1.make_initializable_iterator (from tensorflow.python.data.ops.dataset_ops) is deprecated and will be removed in a future version.
Instructions for updating:
This is a deprecated API that should only be used in TF 1 graph mode and legacy TF 2 graph mode available through tf.compat.v1. In all other situations -- namely, eager mode and inside tf.function -- you can consume dataset elements using for elem in dataset: ... or by explicitly creating iterator via iterator = iter(dataset) and fetching its elements via values = next(iterator). Furthermore, this API is not available in TF 2. During the transition from TF 1 to TF 2 you can use tf.compat.v1.data.make_initializable_iterator(dataset) to create a TF 1 graph mode style iterator for a dataset created through TF 2 APIs. Note that this should be a transient state of your code base as there are in general no guarantees about the interoperability of TF 1 and TF 2 code.
W1013 22:26:35.036062 17772 deprecation.py:317] From c:\programdata\anaconda3\lib\site-packages\magenta\models\onsets_frames_transcription\onsets_frames_transcription_transcribe.py:102: DatasetV1.make_initializable_iterator (from tensorflow.python.data.ops.dataset_ops) is deprecated and will be removed in a future version.
Instructions for updating:
This is a deprecated API that should only be used in TF 1 graph mode and legacy TF 2 graph mode available through tf.compat.v1. In all other situations -- namely, eager mode and inside tf.function -- you can consume dataset elements using for elem in dataset: ... or by explicitly creating iterator via iterator = iter(dataset) and fetching its elements via values = next(iterator). Furthermore, this API is not available in TF 2. During the transition from TF 1 to TF 2 you can use tf.compat.v1.data.make_initializable_iterator(dataset) to create a TF 1 graph mode style iterator for a dataset created through TF 2 APIs. Note that this should be a transient state of your code base as there are in general no guarantees about the interoperability of TF 1 and TF 2 code.
2020-10-13 22:26:35.057591: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN)to use the following CPU instructions in performance-critical operations: AVX2
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2020-10-13 22:26:35.073695: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x1fd34d77e00 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-10-13 22:26:35.073825: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version
INFO:tensorflow:Starting transcription for Tom Wave 3.wav...
I1013 22:26:35.145374 17772 onsets_frames_transcription_transcribe.py:112] Starting transcription for Tom Wave 3.wav...
INFO:tensorflow:Processing file...
I1013 22:26:35.160952 17772 onsets_frames_transcription_transcribe.py:118] Processing file...

c:\programdata\anaconda3\lib\site-packages\librosa\core\audio.py:161: UserWarning: PySoundFile failed. Trying audioread instead.
warnings.warn('PySoundFile failed. Trying audioread instead.')
Exception %s [Errno 13] Permission denied: 'C:\Users\micha\AppData\Local\Temp\tmprlo72l7m.wav'

Traceback (most recent call last):
File "c:\programdata\anaconda3\lib\runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "c:\programdata\anaconda3\lib\runpy.py", line 87, in _run_code
exec(code, run_globals)
File "C:\ProgramData\Anaconda3\Scripts\onsets_frames_transcription_transcribe.exe\__main__.py", line 7, in <module>
File "c:\programdata\anaconda3\lib\site-packages\magenta\models\onsets_frames_transcription\onsets_frames_transcription_transcribe.py", line 155, in console_entry_point
tf.app.run(main)
File "c:\programdata\anaconda3\lib\site-packages\tensorflow\python\platform\app.py", line 40, in run
_run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
File "c:\programdata\anaconda3\lib\site-packages\absl\app.py", line 300, in run
_run_main(main, args)
File "c:\programdata\anaconda3\lib\site-packages\absl\app.py", line 251, in _run_main
sys.exit(main(argv))
File "c:\programdata\anaconda3\lib\site-packages\magenta\models\onsets_frames_transcription\onsets_frames_transcription_transcribe.py", line 150, in main
run(argv, config_map=configs.CONFIG_MAP, data_fn=data.provide_batch)
File "c:\programdata\anaconda3\lib\site-packages\magenta\models\onsets_frames_transcription\onsets_frames_transcription_transcribe.py", line 121, in run
create_example(filename, hparams.sample_rate,
File "c:\programdata\anaconda3\lib\site-packages\magenta\models\onsets_frames_transcription\onsets_frames_transcription_transcribe.py", line 73, in create_example
assert len(example_list) == 1
AssertionError

OOM crash on MIDI file where note timestamp invalid

crash.mid is attached.

josh@vek-x:~/tmp$ uname -a
Linux vek-x 5.4.0-59-generic #65-Ubuntu SMP Thu Dec 10 12:01:51 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
josh@vek-x:~/tmp$ python3
Python 3.8.5 (default, Jul 28 2020, 12:59:40) 
[GCC 9.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> ^D
josh@vek-x:~/tmp$ midicsv dm-mid/crash/crash.mid 
0, 0, Header, 0, 1, 96
1, 0, Start_track
1, 0, Title_t, "\000"
1, 0, Time_signature, 4, 2, 36, 8
1, 0, Time_signature, 4, 2, 36, 8
1, 2147483647, Note_on_c, 0, 58, 100
1, 2147483647, Note_on_c, 0, 53, 100
1, 2147483647, Note_off_c, 0, 58, 64
1, 2147483647, Note_off_c, 0, 53, 64
1, 2147483647, End_track
0, 0, End_of_file
josh@vek-x:~/tmp$ pip3 show note-seq
Name: note-seq
Version: 0.0.2
Summary: Use machine learning to create art and music
Home-page: https://magenta.tensorflow.org/
Author: Google Inc.
Author-email: [email protected]
License: Apache 2
Location: /usr/local/lib/python3.8/dist-packages
Requires: librosa, attrs, bokeh, absl-py, numpy, scipy, pretty-midi, numba, protobuf, intervaltree, IPython, pydub, pandas
Required-by: magenta
josh@vek-x:~/tmp$ pip3 show magenta
Name: magenta
Version: 2.1.3
Summary: Use machine learning to create art and music
Home-page: https://magenta.tensorflow.org/
Author: Google Inc.
Author-email: [email protected]
License: Apache 2
Location: /usr/local/lib/python3.8/dist-packages
Requires: scikit-image, numba, tensorflow-probability, sk-video, matplotlib, mir-eval, dm-sonnet, absl-py, wheel, pygtrie, librosa, sox, dopamine-rl, pretty-midi, tensorflow, numpy, python-rtmidi, Pillow, scipy, tensor2tensor, tf-slim, six, mido, tensorflow-datasets, imageio, note-seq
Required-by: 
josh@vek-x:~/tmp$ convert_dir_to_note_sequences --input_dir=dm-mid/crash --output_file=magenta/dm.tfrecord 
2021-01-09 22:10:43.217451: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory
2021-01-09 22:10:43.217560: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
WARNING:tensorflow:From /usr/local/lib/python3.8/dist-packages/tensorflow/python/compat/v2_compat.py:96: disable_resource_variables (from tensorflow.python.ops.variable_scope) is deprecated and will be removed in a future version.
Instructions for updating:
non-resource variables are not supported in the long term
INFO:tensorflow:Converting files in 'dm-mid/crash/'.
I0109 22:10:46.468066 139996452947776 convert_dir_to_note_sequences.py:83] Converting files in 'dm-mid/crash/'.
INFO:tensorflow:0 files converted.
I0109 22:10:46.468371 139996452947776 convert_dir_to_note_sequences.py:88] 0 files converted.
Killed

Bazel build error when trying to run align_fine

I am trying to get the alignment tool working, but am getting the following error:

/data/master-thesis/src/001_dataset/alignment/BUILD.bazel:52:10: error loading package '@com_google_protobuf//': Label '@bazel_skylib//rules:common_settings.bzl' is invalid because 'rules' is not a package; perhaps you meant to put the colon here: '@bazel_skylib//:rules/common_settings.bzl'? and referenced by '//:align'

Could you please provide me some help?

Thank you in advance, you are doing a really great job with magenta and note-seq!

System Information:

  • align_fine code copied from Release v0.0.1
  • OS: Ubuntu 16.04
  • Python 3.7 using Anaconda w. magenta and note-seq package installed through pip
  • Bazel 3.5.0 installed using apt as described here

Full Output:

(ma) sebi@sebi-desktop:/data/master-thesis/src/001_dataset/alignment$ INPUT_DIR=../test/
(ma) sebi@sebi-desktop:/data/master-thesis/src/001_dataset/alignment$ OUTPUT_DIR=../test_fine_align/
(ma) sebi@sebi-desktop:/data/master-thesis/src/001_dataset/alignment$ bazel run :align_fine -- --input_dir "${INPUT_DIR}" --output_dir "${OUTPUT_DIR}"
Starting local Bazel server and connecting to it...
INFO: SHA256 (https://github.com/protocolbuffers/protobuf/archive/master.zip) = 0dd2b6f666dbe89f7e431655b7ed6b26a5e04198b72c985b725f552a0eb5f1e4
DEBUG: Rule 'com_google_protobuf' indicated that a canonical reproducible form can be obtained by modifying arguments sha256 = "0dd2b6f666dbe89f7e431655b7ed6b26a5e04198b72c985b725f552a0eb5f1e4"
DEBUG: Repository com_google_protobuf instantiated at:
  no stack (--record_rule_instantiation_callstack not enabled)
Repository rule http_archive defined at:
  /home/sebi/.cache/bazel/_bazel_sebi/20d58b7978468f7d4694d017731bad27/external/bazel_tools/tools/build_defs/repo/http.bzl:336:31: in <toplevel>
INFO: Repository eigen_repo instantiated at:
  no stack (--record_rule_instantiation_callstack not enabled)
Repository rule http_archive defined at:
  /home/sebi/.cache/bazel/_bazel_sebi/20d58b7978468f7d4694d017731bad27/external/bazel_tools/tools/build_defs/repo/http.bzl:336:31: in <toplevel>
ERROR: /data/master-thesis/src/001_dataset/alignment/BUILD.bazel:52:10: error loading package '@com_google_protobuf//': Label '@bazel_skylib//rules:common_settings.bzl' is invalid because 'rules' is not a package; perhaps you meant to put the colon here: '@bazel_skylib//:rules/common_settings.bzl'? and referenced by '//:align'
ERROR: Analysis of target '//:align_fine' failed; build aborted: Analysis failed
INFO: Elapsed time: 8.804s
INFO: 0 processes.
FAILED: Build did NOT complete successfully (19 packages loaded, 76 targets configured)
FAILED: Build did NOT complete successfully (19 packages loaded, 76 targets configured)

which bazel version should I use?

Hello, I tried to compile this code using Bazel as described in the readme document. I got the following errors:

ERROR: /home/wxk/.cache/bazel/_bazel_wxk/aead400c0d4e3cefd21e17e4d75c732c/external/rules_pkg/pkg/mappings.bzl:108:12: name 'json' is not defined
ERROR: /home/wxk/.cache/bazel/_bazel_wxk/aead400c0d4e3cefd21e17e4d75c732c/external/rules_pkg/pkg/mappings.bzl:224:22: name 'json' is not defined
ERROR: /home/wxk/.cache/bazel/_bazel_wxk/aead400c0d4e3cefd21e17e4d75c732c/external/rules_pkg/pkg/mappings.bzl:413:22: name 'json' is not defined
ERROR: /home/wxk/.cache/bazel/_bazel_wxk/aead400c0d4e3cefd21e17e4d75c732c/external/rules_pkg/pkg/mappings.bzl:473:22: name 'json' is not defined
INFO: Repository eigen_repo instantiated at:
  no stack (--record_rule_instantiation_callstack not enabled)
Repository rule http_archive defined at:
  /home/wxk/.cache/bazel/_bazel_wxk/aead400c0d4e3cefd21e17e4d75c732c/external/bazel_tools/tools/build_defs/repo/http.bzl:336:31: in <toplevel>
ERROR: /home/data/wxk/other_dataset/test_align_note_seq/note-seq/note_seq/alignment/BUILD.bazel:45:17: error loading package '@com_google_protobuf//': in /home/wxk/.cache/bazel/_bazel_wxk/aead400c0d4e3cefd21e17e4d75c732c/external/rules_pkg/mappings.bzl: Extension 'pkg/mappings.bzl' has errors and referenced by '//:alignment_py_pb2'
ERROR: Analysis of target '//:align_fine' failed; build aborted: Analysis failed
INFO: Elapsed time: 3.517s
INFO: 0 processes.
FAILED: Build did NOT complete successfully (19 packages loaded, 69 targets configured)
FAILED: Build did NOT complete successfully (19 packages loaded, 69 targets configured)

My system is CentOS 7. I tried Bazel versions 1.2.1, 2.2.0, 3.2.0, and 3.6.0, but all failed. So I wonder whether I am using the wrong version of Bazel, or whether there is some other error.
Thank you.

File PermissionError at Midi File output (Python3, Win7)

I'm getting the following error on a performance_rnn_generate run (on Magenta Python3 Win7):
Seems to occur when trying to output the generated MIDI file. I don't know why it tries to create a file in Temp before writing to the final destination.

File "d:\programme\anaconda3\envs\magenta\lib\site-packages\pretty_midi\pretty
_midi.py", line 1374, in write
mid.save(filename=filename)
File "d:\programme\anaconda3\envs\magenta\lib\site-packages\mido\midifiles\mid
ifiles.py", line 409, in save
with io.open(filename, 'wb') as file:
PermissionError: [Errno 13] Permission denied: 'C:\Users\Cv\AppData\Local
\Temp\tmpkh_zo2m3'

This happens even when I run it as admin.

Applying `to_tensors()` after `stretch_note_sequence()` results in a non-modified data point

Hi, using note_seq.sequences_lib.stretch_note_sequence() does indeed result in a stretched musical sequence when played with play_sequence(); however, after converting to tensors, the shape of the result is identical to the original sequence's. Is that behavior normal?

Perhaps not modifying the tempo (unlike the `for tempo in stretched_sequence.tempos:` loop inside `stretch_note_sequence()`) is the expected behavior?
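This would be expected if to_tensors() quantizes relative to tempo: stretch_note_sequence() scales note times by a factor and divides the qpm by the same factor, so the quantized step index of every note, and hence the tensor shape, is unchanged. A back-of-the-envelope check of that arithmetic (pure Python, not the note-seq implementation):

```python
# Quantized step index = time_sec * (qpm / 60) * steps_per_quarter.
# Stretching by f multiplies times by f and divides qpm by f, so the
# step index (and therefore the tensor shape) is invariant.

def step(time_sec, qpm, steps_per_quarter=4):
    return round(time_sec * qpm / 60.0 * steps_per_quarter)

f = 2.0                       # stretch factor (slow down 2x)
orig = step(1.5, qpm=120)     # note end at 1.5 s, 120 qpm
stretched = step(1.5 * f, qpm=120 / f)
print(orig, stretched)        # same step index before and after
```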

Thanks,

Array representation

How can I use note-seq to represent and convert a polyphonic MIDI file as an array / n-dimensional matrix?

Since a monophonic melody can be represented as a 1-D array of integers with quantization, I assume polyphonic ones can be represented as an n-D array?

My aim is to obtain a raw array of notes (integers) and train a custom NN outside of Magenta.

I spent some time on the source code and the unofficial documentation @wtong98 provided here, with no solution.
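For reference, the usual n-D representation for polyphonic material is a piano roll: a (steps × 128 pitches) binary or velocity matrix, where simultaneous notes simply occupy the same row. A minimal pure-Python sketch of the idea, starting from already-quantized notes (this is not the note-seq API):

```python
# Piano-roll sketch: polyphonic notes -> steps x 128 binary matrix.
# Notes are (pitch, start_step, end_step) with integer steps.

def to_piano_roll(notes, total_steps):
    """Return a total_steps x 128 matrix with 1 where a pitch sounds."""
    roll = [[0] * 128 for _ in range(total_steps)]
    for pitch, start, end in notes:
        for t in range(start, min(end, total_steps)):
            roll[t][pitch] = 1
    return roll

# A two-note chord followed by a single note.
notes = [(60, 0, 2), (64, 0, 2), (67, 2, 4)]
roll = to_piano_roll(notes, 4)
print(sum(roll[0]), sum(roll[2]))  # 2 active pitches, then 1
```

With a real NoteSequence you would first quantize it (note_seq provides quantization helpers) to get integer step indices, then fill a matrix like this, typically as a NumPy array.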

Importing note-seq changes PrettyMIDI behavior for corrupt MIDI files.

I recently came across this bug while processing LMD with my own scripts and note-seq chord inference.
Usually, when processing a corrupt MIDI file, pretty_midi throws an exception. However, if note-seq is imported, no exception is thrown anymore.

Reproducible code:

from pretty_midi import PrettyMIDI
# import note_seq  # Uncomment this line to prevent exception from being thrown


pm = PrettyMIDI('d6174dfc21a1449dc423c58065f14173.mid')  # Exception should be thrown about too many ticks

This was on Python 3.8 with a clean virtualenv environment and the latest versions of pretty_midi and note-seq.
Naturally this kind of behavior causes a lot of problems downstream if you are relying on exceptions to be thrown for skipping corrupt MIDI files.

Let me know if it's reproducible, maybe I'm missing something on my side.
GitHub doesn't let me attach MIDI files, but you can find the corrupt file in the Lakh MIDI dataset in the d folder.

MIDI End Of Track meta message always in Channel 1

When you import the following generated MIDI file in a DAW (I work with REAPER), the DAW asks whether to expand the 2 MIDI tracks to new session tracks. However, I am programming a single track with all notes in channel 10. AFAIU, this happens because the MIDI end of track meta message is automatically inserted in channel 1 by note_seq. Is there a way to manually insert the end of track meta message or manipulate it programmatically?

I have searched for NoteSequence creation in magenta and note-seq, but didn't manage to find anything related. I also had no luck reading note_seq/protobuf/music_pb2.py.
Using note-seq 0.0.3.

import note_seq
# Create a simple drum loop.
ns = note_seq.NoteSequence(ticks_per_quarter=note_seq.STANDARD_PPQ)
ns.tempos.add(qpm=120)
ns.notes.add( # Step 1: Kick
    instrument=9, program=0, is_drum=True, pitch=36, velocity=127, start_time=0, end_time=0.125)
ns.notes.add( # Step 2: Snare
    instrument=9, program=0, is_drum=True, pitch=38, velocity=127, start_time=0.5, end_time=0.625)
ns.notes.add( # Step 3: Kick
    instrument=9, program=0, is_drum=True, pitch=36, velocity=127, start_time=1, end_time=1.125)
ns.notes.add( # Step 4: Snare
    instrument=9, program=0, is_drum=True, pitch=38, velocity=127, start_time=1.5, end_time=1.625)
# Export MIDI file.
note_seq.sequence_proto_to_midi_file(ns, "simple.mid")

[Screenshot (2021-05-13) of the REAPER import dialog omitted.]

Chords and key inference - Step quantization

Hello,

When you infer chords and keys and then quantize the note sequence, the chords are quantized, but the key signatures are not.

Would it be possible to implement that?

Best,
Tristan

midi_io.py raises a permission error for the Temp directory on Windows

I got an error like this:

Traceback (most recent call last):
  File "scripts/models/generate_midi.py", line 69, in <module>
    tf.app.run(main)
  File "C:\Python36\lib\site-packages\tensorflow\python\platform\app.py", line 40, in run
    _run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
  File "C:\Python36\lib\site-packages\absl\app.py", line 300, in run
    _run_main(main, args)
  File "C:\Python36\lib\site-packages\absl\app.py", line 251, in _run_main
    sys.exit(main(argv))
  File "scripts/models/generate_midi.py", line 59, in main
    mg.run_with_flags(generator)
  File "C:\Python36\lib\site-packages\magenta\models\melody_rnn\melody_rnn_generate.py", line 214, in run_with_flags
    note_seq.sequence_proto_to_midi_file(generated_sequence, midi_path)
  File "C:\Python36\lib\site-packages\note_seq\midi_io.py", line 372, in sequence_proto_to_midi_file
    drop_events_n_seconds_after_last_note)
  File "C:\Python36\lib\site-packages\note_seq\midi_io.py", line 217, in note_sequence_to_midi_file
    copyfile(temp_file.name, output_file)
  File "C:\Python36\lib\shutil.py", line 120, in copyfile
    with open(src, 'rb') as fsrc:
PermissionError: [Errno 13] Permission denied: 'C:\\Users\\%USERNAME%\\AppData\\Local\\Temp\\tmpdgf5fwq_'

I tried changing

https://github.com/magenta/note-seq/blob/master/note_seq/midi_io.py#L211

from:

with tempfile.NamedTemporaryFile() as temp_file:

to:

with tempfile.NamedTemporaryFile(delete=False) as temp_file:

and this works as intended; I no longer get an error.
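The underlying issue (my understanding, not confirmed by the maintainers) is that on Windows a NamedTemporaryFile cannot be reopened by name while the original handle is still open, so copying from temp_file.name inside the with block fails. A minimal sketch of the delete=False pattern, with a placeholder payload standing in for the real MIDI bytes midi_io writes:

```python
import os
import tempfile

# On Windows, a NamedTemporaryFile opened with delete=True cannot be opened a
# second time by name while the original handle is open. Writing with
# delete=False and closing before the copy avoids the PermissionError.
with tempfile.NamedTemporaryFile(delete=False) as temp_file:
    temp_file.write(b"MThd")  # placeholder bytes; midi_io writes real MIDI data
    temp_path = temp_file.name

# The handle is closed here, so the file can be reopened/copied by name.
with open(temp_path, "rb") as f:
    data = f.read()

os.remove(temp_path)  # with delete=False, the caller must clean up
```

Note that delete=False shifts cleanup responsibility to the caller, which is why the fix needs an explicit os.remove() (or try/finally) in midi_io.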

music_pb2 reference bug in sequences_lib.py

In sequences_lib.py, there's a bug which I think is related to aliasing of the music_pb2/protobuf module.

The following (line 815) fails when the time_change is a TimeSignature in the latest version of note_seq shipped in the magenta repo:

if isinstance(time_change, music_pb2.NoteSequence.TimeSignature)

music_pb2 is currently imported this way:

from note_seq.protobuf import music_pb2

Even though this is the case, the only way I was able to get this to work locally was by changing line 815 to the following:

if isinstance(time_change, note_seq.protobuf.music_pb2.NoteSequence.TimeSignature)
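A plausible root cause (my assumption, not verified against the magenta package layout) is that two separately loaded copies of music_pb2 define distinct class objects, so isinstance() fails even though the class names match. A self-contained demonstration with a hypothetical stand-in module:

```python
import importlib.util
import os
import tempfile

# Write a tiny stand-in for music_pb2 defining a TimeSignature class.
src = "class TimeSignature:\n    pass\n"
path = os.path.join(tempfile.mkdtemp(), "music_pb2_demo.py")
with open(path, "w") as f:
    f.write(src)

def load(name):
    """Load the same file as a fresh module object under a new name."""
    spec = importlib.util.spec_from_file_location(name, path)
    mod = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(mod)
    return mod

mod_a = load("music_pb2_copy_a")  # e.g. note_seq.protobuf.music_pb2
mod_b = load("music_pb2_copy_b")  # e.g. a second copy found on sys.path

ts = mod_a.TimeSignature()
print(isinstance(ts, mod_a.TimeSignature))  # True
print(isinstance(ts, mod_b.TimeSignature))  # False: same name, different class
```

If this is what is happening, the fix would be ensuring a single canonical import path for music_pb2 rather than patching line 815.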

Why `protobuf >= 4.21.2`?

Recently the dependency on protobuf was bumped to >=4.21.2, apparently without a related code change (at least not in the same commit). I'm wondering why this was necessary.

From what I can see this leads to massive problems with other libraries in the ecosystem. In particular:

  • tensorflow 2.9.1 itself has a dependency protobuf<3.20,>=3.9.2
  • tensorboard 2.9.1 has a dependency protobuf<3.20,>=3.9.2
  • tensorflowjs 3.19.0 has a dependency protobuf==3.20.0
  • tflite-support 0.4.1 has a dependency protobuf<4

Due to note-seq's requirement of protobuf>=4.21.2, it basically becomes unusable in the TensorFlow ecosystem. This doesn't seem to make sense, in particular if nothing obvious has changed in the code. Shouldn't the protobuf version constraint be much more liberal to avoid this dependency hell?

NoteSequence not picklable

(Originally magenta/magenta#1733.)

When trying to pickle a NoteSequence (which happens e.g. when using multiprocessing), the following error comes up:

PicklingError: Can't pickle <class 'music_pb2.NoteSequence'>: import of module 'music_pb2' failed

It seems related to this (old) protobuf issue.

However, it seems that some protobuffers (e.g. tf.GraphDef) can be pickled, so it might be just a matter of using a more recent version of protoc or passing the right flag to it? Or maybe it's a namespace naming issue?
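A common workaround for unpicklable protobuf messages (a sketch under my assumptions, not tested against note_seq itself) is to register a reducer that round-trips the message through SerializeToString()/ParseFromString(). FakeProto below is a hypothetical stand-in exposing that interface; the real fix would register note_seq.NoteSequence instead:

```python
import copyreg
import pickle

class FakeProto:
    """Stand-in for a protobuf message exposing the serialization interface."""
    def __init__(self, data=b""):
        self.data = data
    def SerializeToString(self):
        return self.data
    def ParseFromString(self, blob):
        self.data = blob

def _reconstruct(blob):
    # Rebuild the message from its serialized bytes on unpickling.
    msg = FakeProto()
    msg.ParseFromString(blob)
    return msg

def _reduce(msg):
    # Tell pickle to ship the serialized bytes instead of the class itself.
    return _reconstruct, (msg.SerializeToString(),)

copyreg.pickle(FakeProto, _reduce)

ns = FakeProto(b"\x08\x01")
clone = pickle.loads(pickle.dumps(ns))  # works despite no default pickling
```

Because multiprocessing uses pickle under the hood, registering such a reducer once at startup should make NoteSequence objects safe to pass between worker processes.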
