
fg-transformer-tts's Introduction

LST-TTS

Official implementation of the paper Fine-grained style control in transformer-based text-to-speech synthesis, submitted to ICASSP 2022. Audio samples and a demo for our system can be accessed here

  • Mar. 5 2022: Fixed an inference bug where the causal mask was not being passed; sample quality should be slightly better. (The demos have not been updated with this fix.)
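For context, a minimal sketch of what a causal (no-peek-ahead) mask looks like in PyTorch and how it is typically passed to a transformer decoder during autoregressive inference. The mask convention follows torch.nn.Transformer; the decoder call below is illustrative, not this repo's exact API.

import torch

def causal_mask(length: int) -> torch.Tensor:
    # True above the diagonal marks future positions the decoder must not attend to,
    # matching the boolean attn_mask convention of torch.nn.MultiheadAttention.
    return torch.triu(torch.ones(length, length, dtype=torch.bool), diagonal=1)

# Illustrative use at inference: rebuild the mask for the current prefix length
# so step t can only attend to steps <= t.
# mel_out = decoder(tgt=prefix, memory=text_encoding, tgt_mask=causal_mask(prefix.size(0)))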

Setting up submodules

git submodule update --init --recursive

Get the WaveGlow vocoder checkpoint from here (this is from NVIDIA's official WaveGlow repo).

Setup environment

See docker/Dockerfile for the packages that need to be installed.

Dataset preprocessing

python preprocess_LJSpeech.py --datadir LJSpeechDir --outputdir OutputDir

Get the leading and trailing silence marks from this repo, and put vctk-silences.0.92.txt in your VCTK dataset directory.

python preprocess_VCTK.py --datadir VCTKDir --outputdir Output_Train_Dir
python preprocess_VCTK.py --datadir VCTKDir --outputdir Output_Test_Dir --make_test_set
  • --make_test_set: specify this flag to process the speakers in the test set; otherwise only the training speakers are processed.

Training

LJSpeech

python train_TTS.py --precision 16 \
                    --datadir FeatureDir \
                    --vocoder_ckpt_path WaveGlowCKPT_PATH \
                    --sampledir SampleDir \
                    --batch_size 128 \
                    --check_val_every_n_epoch 50 \
                    --use_guided_attn \
                    --training_step 250000 \
                    --n_guided_steps 250000 \
                    --saving_path Output_CKPT_DIR \
                    --datatype LJSpeech \
                    [--distributed]
  • --distributed: enable DDP multi-GPU training
  • --batch_size: batch size per GPU; scale it down when training on multiple GPUs if you want to keep the same effective batch size
  • --check_val_every_n_epoch: sample and validate every n epochs
  • --datadir: output directory of the preprocessing scripts

VCTK

python train_TTS.py --precision 16 \
                    --datadir FeatureDir \
                    --vocoder_ckpt_path WaveGlowCKPT_PATH \
                    --sampledir SampleDir \
                    --batch_size 64 \
                    --check_val_every_n_epoch 50 \
                    --use_guided_attn \
                    --training_step 150000 \
                    --n_guided_steps 150000 \
                    --etts_checkpoint LJSpeech_Model_CKPT \
                    --saving_path Output_CKPT_DIR \
                    --datatype VCTK \
                    [--distributed]
  • --etts_checkpoint: path to the pretrained model checkpoint (trained on LJ Speech)

Synthesis

We provide synthesis examples in synthesis.py; you can adapt this script to your own use case. Example invocation of synthesis.py:

python synthesis.py --etts_checkpoint VCTK_Model_CKPT \
                    --sampledir SampleDir \
                    --datatype VCTK \
                    --vocoder_ckpt_path WaveGlowCKPT_PATH
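For reference, this is roughly how the WaveGlow checkpoint is used to turn a mel spectrogram into a waveform, following NVIDIA's official WaveGlow repo; synthesis.py already wraps this step via --vocoder_ckpt_path, so the snippet below is only an illustration and the variable names are assumptions.

import torch

# Load the checkpoint released with NVIDIA's WaveGlow repo (the model object is stored under 'model').
waveglow = torch.load('WaveGlowCKPT_PATH', map_location='cuda')['model']
waveglow = waveglow.remove_weightnorm(waveglow)
waveglow.cuda().eval()

with torch.no_grad():
    # mel: (1, n_mels, T) mel spectrogram produced by the TTS model
    audio = waveglow.infer(mel, sigma=0.666)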

Pretrained checkpoints

We provide pretrained checkpoints for LJ Speech and VCTK. The checkpoint files are somewhat large since they contain all of the training and optimizer states.
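If disk space matters, such a checkpoint can usually be slimmed down by keeping only the model weights. The snippet below assumes a PyTorch Lightning-style checkpoint layout (the training flags suggest PyTorch Lightning is used); the key names, and whether a slimmed file still loads in synthesis.py, are assumptions worth verifying.

import torch

ckpt = torch.load('VCTK_Model_CKPT', map_location='cpu')
# Keep only the weights (and hyperparameters, if present); drop optimizer/scheduler states.
slim = {k: ckpt[k] for k in ('state_dict', 'hyper_parameters') if k in ckpt}
torch.save(slim, 'VCTK_Model_CKPT.slim')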

fg-transformer-tts's People

Contributors

b04901014


fg-transformer-tts's Issues

About Style embedding

Hi @b04901014 , thanks for your great implementation!

I have a question about the code and the paper. In the Style embedding section of the paper:

Similarly, the output features from wav2vec 2.0 are also processed by a LSTM for fine-grained style embeddings. However, instead of taking the mean across the time dimension, we adopt average pooling with stride 4 and kernel size 8 to smooth out the representation. Based on this representation, each time steps will be fed as a query to a multi-head attention with another trainable codebook as key and value to produce a sequence of style embeddings

I think the LSTM output should be average-pooled before going into the MHA, but in the code the order is reversed:

# LSTM prenet over the wav2vec 2.0 features
emotion_local, _ = self.emotion_local_prenet(emotion_local)
# trainable codebook used as keys/values for the multi-head attention
keys = torch.tanh(self.local_style_embedding.weight).unsqueeze(0).expand(emotion_local.size(0), -1, -1)
# multi-head attention is applied first ...
emotion_local = self.emotion_local_attn(emotion_local, keys, keys)[0]
# ... and only then the average pooling with stride 4 and kernel size 8
stride, ksize = 4, 8
min_crops = 15
emotion_local = F.avg_pool1d(emotion_local.transpose(1, 2), kernel_size=ksize, stride=stride).permute(2, 0, 1)
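For comparison, a minimal sketch of the ordering described in the paper (pool first, then attend), reusing the names from the snippet above. This is only an illustration of the question, not a proposed patch, and the final permute for downstream consumers is omitted.

# Paper's described order: smooth the LSTM outputs with average pooling
# (stride 4, kernel size 8), then feed each pooled step as a query to the MHA.
emotion_local, _ = self.emotion_local_prenet(emotion_local)
emotion_local = F.avg_pool1d(emotion_local.transpose(1, 2), kernel_size=8, stride=4).transpose(1, 2)
keys = torch.tanh(self.local_style_embedding.weight).unsqueeze(0).expand(emotion_local.size(0), -1, -1)
emotion_local = self.emotion_local_attn(emotion_local, keys, keys)[0]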

What should we do to adapt the model to another language?

Hi,
We are trying the single-speaker setup. We trained the model on LJSpeech, and the local style reference audio does indeed effectively affect the prosody of the synthesized speech. However, when we use BZNSYP, a Mandarin dataset, the resulting model is unable to transfer the speaking style from the reference audio to the synthesized one. Also, model.synthesize_with_sample(), which uses random data as the LST, just produces chaotic sounds in the speaker's voice. I am not sure whether this is because the model's LST contains speech-content leakage. How should we adjust the model parameters to use it for another language?
By the way, we are using wav2vec2-LARGE instead of wav2vec2-BASE, with emo-dim = 1024.

About the frame rate of preprocessing

I found that the Transformer TTS model is trained and validated on audio at fr=22050, while the GST and LST take input audio at fr=16000, as required by wav2vec 2.0. Why? Would it be better to use the same frame rate for both? I see no explanation in the paper.
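For anyone wondering how the two rates coexist: resampling the reference audio to 16 kHz for the wav2vec 2.0 branch is typically all that is needed, and the preprocessing scripts presumably already handle this. The snippet below is only an illustration (the file name and the use of torchaudio are assumptions).

import torchaudio

wav, sr = torchaudio.load('reference.wav')                 # e.g. 22050 Hz audio used on the TTS side
wav_16k = torchaudio.functional.resample(wav, sr, 16000)   # wav2vec 2.0 expects 16 kHz input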

RuntimeError occurs when training

Hi, really nice work.

I tried to use "wav2vec2-large-xlsr-53" as the encoder and re-ran the preprocessing.

But this error appeared during training:
RuntimeError: mat1 and mat2 shapes cannot be multiplied

I want to know whether I can change the wav2vec 2.0 encoder.
Thank you.
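One likely cause, offered as an assumption rather than a confirmed diagnosis: the large/XLSR wav2vec 2.0 models output 1024-dimensional features instead of the base model's 768, so any layer sized for the base features will hit a shape mismatch. The hidden size can be checked with the Hugging Face transformers API:

from transformers import Wav2Vec2Model

model = Wav2Vec2Model.from_pretrained('facebook/wav2vec2-large-xlsr-53')
print(model.config.hidden_size)  # 1024 for large/XLSR variants, 768 for the base models

Whichever hyperparameter sets the style-encoder input dimension in this repo (another issue mentions using emo-dim = 1024 with wav2vec2-LARGE) would need to match this value.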

Some questions

Hi author, nice implementation! I wonder whether we need a pre-trained model for self.emo_model = MinimalClassifier()?
Also, I get audio samples whose speaker identity is inconsistent with the reference audio.

What resources are needed to run this project?

Thanks for the good work.
I am trying to reproduce the results on the LJSpeech dataset. I have two GPU cards with at most 10 GB of GPU memory free on each. Training runs to epoch 7 and then crashes with an out-of-memory error. I tried cutting the batch size in half to 64, but that made no difference. I did not change the model hyperparameters, to avoid degrading the model. What should I do to get it running?
Or is there a list of resource requirements for running the training process?
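As a general workaround (not something train_TTS.py is confirmed to expose), gradient accumulation in PyTorch Lightning keeps the effective batch size while lowering peak memory; since the README's training flags suggest PyTorch Lightning, a generic sketch:

import pytorch_lightning as pl

# Accumulate gradients over 4 smaller batches: with a per-GPU batch size of 32 this keeps
# an effective batch size of 128 while roughly quartering activation memory per step.
trainer = pl.Trainer(precision=16, accumulate_grad_batches=4)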
