Comments (9)
You'll need to set the example_secs
appropriately in the TFRecordProvider
for this to work.
For example, you can use a flag like --gin_param='TFRecordProvider.example_secs = 8'
or add that setting to your gin config file.
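The equivalent binding in a gin config file is just the line itself (only TFRecordProvider.example_secs is quoted in this thread; leave the rest of your config as it is):

```
# Override the training example length (in seconds) for the dataset provider.
TFRecordProvider.example_secs = 8
```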
from ddsp.
Thanks Adarob, do you mean that using a fixed, identical length for train and test files is a must for this experiment?
I believe it is possible to use a different length at inference time. @jesseengel to confirm. As for training, a fixed length is required by the data pipeline, but could probably be relaxed if necessary.
That's exactly my problem: I use a fixed length (e.g. 4 secs) during training. No problem re-synthesizing train chunks of 4 seconds. Then I have a bunch of test files (from the same instrument) at various lengths. I would like to feed them through the autoencoder's forward pass (without running the training loop), so that I can check the analysis-synthesis quality of the trained network by comparing audio files by listening to them (original versus resynthesized). Thanks
This is possible at the moment, although not in the most elegant way.
For a single case we do this in the timbre_transfer demo, by just resetting the values with gin:
import gin

# Ensure dimensions and sampling rates are equal
time_steps_train = gin.query_parameter('DefaultPreprocessor.time_steps')
n_samples_train = gin.query_parameter('Additive.n_samples')
hop_size = int(n_samples_train / time_steps_train)

# Derive the new lengths from the (arbitrary-length) input audio.
time_steps = int(audio.shape[1] / hop_size)
n_samples = time_steps * hop_size

gin_params = [
    'Additive.n_samples = {}'.format(n_samples),
    'FilteredNoise.n_samples = {}'.format(n_samples),
    'DefaultPreprocessor.time_steps = {}'.format(time_steps),
]

with gin.unlock_config():
    gin.parse_config(gin_params)
However, I would recommend creating a new dataset out of your test examples (cut into overlapping fixed lengths) and then reassembling them after the fact to listen to the whole sample together. This is due to a known issue with long samples (10+ seconds) where float32 errors accumulate in the sinusoidal oscillators.
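As a rough sketch of that cut-and-reassemble workflow (these helper names are made up for illustration, not part of ddsp; plain NumPy, slicing into fixed overlapping windows and averaging the overlaps back together):

```python
import numpy as np

def slice_windows(audio, window_size, hop_size):
    """Cut a 1-D audio array into fixed-length, overlapping windows.

    The last window is zero-padded so every slice has the same length,
    matching the fixed-length requirement of the training pipeline.
    """
    n = len(audio)
    starts = range(0, max(n - window_size, 0) + hop_size, hop_size)
    windows = []
    for s in starts:
        w = audio[s:s + window_size]
        if len(w) < window_size:
            w = np.pad(w, (0, window_size - len(w)))
        windows.append(w)
    return np.stack(windows)

def overlap_add(windows, hop_size, length):
    """Reassemble resynthesized windows, averaging the overlap regions."""
    window_size = windows.shape[1]
    total = hop_size * (len(windows) - 1) + window_size
    out = np.zeros(total)
    norm = np.zeros(total)
    for i, w in enumerate(windows):
        s = i * hop_size
        out[s:s + window_size] += w
        norm[s:s + window_size] += 1.0
    # Divide by the number of windows covering each sample, then trim padding.
    return (out / np.maximum(norm, 1.0))[:length]
```

In practice you would run each window through the trained model before reassembly; a linear crossfade instead of a plain average can further hide seams at the window boundaries.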
Thanks Jesse. I thought using a long enough fixed length for both train and test could be a simpler solution (rather than rejoining segments, one could crop a longer file).
So I was trying to have everything at 8 seconds length. I observed there is another parameter, window_secs, which defaults to 4 in prepare_tfrecord_lib.py (line 110), and it seems a user-set example_secs value (other than 4) would not be reflected there. Could there be a missing link between window_secs and example_secs?
Has anyone trained the auto_encoder with segment lengths other than 4 seconds?
With 2 seconds it seems to be working fine. My training and test data were already sliced to 2 seconds, though.
@barisbozkurt, the example_secs in prepare_tfrecord_lib.py should be set by the flag, so as long as you make the dataset with 8-second-long examples (--example_secs=8)
and then change your gin parameters / config file too, you should be good to go.
https://github.com/magenta/ddsp/blob/master/ddsp/training/data_preparation/prepare_tfrecord.py#L79
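The gin parameters have to agree with the new example length, and the arithmetic is simple to check (a sketch; the 16 kHz sample rate and 250 Hz frame rate below are the usual solo-instrument defaults, but verify them against your own config):

```python
def gin_dims(example_secs, sample_rate=16000, frame_rate=250):
    """Compute the gin parameters implied by a given example length.

    Returns (n_samples, time_steps, hop_size), corresponding to
    Additive.n_samples, DefaultPreprocessor.time_steps, and the
    samples-per-frame hop derived from them.
    """
    n_samples = int(example_secs * sample_rate)
    time_steps = int(example_secs * frame_rate)
    hop_size = n_samples // time_steps
    return n_samples, time_steps, hop_size

# 8-second examples at the assumed defaults:
print(gin_dims(8))  # (128000, 2000, 64)
```

Note that the hop size stays the same (64 samples) for any example length, which is why only the n_samples and time_steps bindings need to change.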
Thanks for the comments, Adarob, Jesse and Voodoohop. I have not completely solved my problem, but I guess that's something to be done on my side.
Related Issues (20)
- Possible to use VST model programmatically? HOT 1
- OnlineF0PowerPreprocessor cannot function with compute_power = False.
- No module crepe
- AttributeError: module 'hmmlearn.hmm' has no attribute 'CategoricalHMM'
- AttributeError: module 'collections' has no attribute 'Iterable'
- python environment Mac M1 HOT 1
- train_autoencoder.ipynb error I got HOT 1
- ImportError: cannot import name 'dtensor_api' from 'keras.dtensor' HOT 5
- vst notebook
- error when training !
- pip is repeatedly installing various versions of same packages HOT 9
- Question About Midi Autoencoder
- Failed building wheel for llvmlite, Could not build wheels for numba, llvmlite, which is required to install pyproject.toml-based projects HOT 1
- timbre_transfer.ipynb is broken on Colab
- train_autoencoder.ipynb is broken on Colab
- Installation Guide HOT 1
- pitch_detection.ipynb is broken in Colab
- VST3 file format no detected not working with fl studio