
xphonebert's Introduction

XPhoneBERT : A Pre-trained Multilingual Model for Phoneme Representations for Text-to-Speech

XPhoneBERT is the first pre-trained multilingual model for phoneme representations for text-to-speech (TTS). XPhoneBERT has the same model architecture as BERT-base and is trained with the RoBERTa pre-training approach on 330M phoneme-level sentences from nearly 100 languages and locales. Experimental results show that employing XPhoneBERT as an input phoneme encoder significantly boosts the performance of a strong neural TTS model in terms of naturalness and prosody, and also helps produce fairly high-quality speech with limited training data.

The general architecture and experimental results of XPhoneBERT can be found in our INTERSPEECH 2023 paper:

@inproceedings{xphonebert,
title     = {{XPhoneBERT: A Pre-trained Multilingual Model for Phoneme Representations for Text-to-Speech}},
author    = {Linh The Nguyen and Thinh Pham and Dat Quoc Nguyen},
booktitle = {Proceedings of the 24th Annual Conference of the International Speech Communication Association (INTERSPEECH)},
year      = {2023}
}

Please CITE our paper when XPhoneBERT is used to help produce published results or is incorporated into other software.

Using XPhoneBERT with transformers

Installation

  • Install transformers with pip: pip install transformers, or install transformers from source.

  • Install text2phonemesequence: pip install text2phonemesequence
    Our text2phonemesequence package converts text sequences into phoneme-level sequences; it was employed to construct our multilingual phoneme-level pre-training data. We built text2phonemesequence by incorporating the CharsiuG2P and segments toolkits, which perform text-to-phoneme conversion and phoneme segmentation, respectively.

  • Notes

    • Initializing text2phonemesequence for each language requires its corresponding ISO 639-3 code. The ISO 639-3 codes of supported languages are listed HERE.

    • text2phonemesequence takes a word-segmented sequence as input. Users might also perform text normalization on the word-segmented sequence before feeding it into text2phonemesequence. When creating our pre-training data, we performed word and sentence segmentation on all text documents in each language using the spaCy toolkit, except for Vietnamese, where we employed the VnCoreNLP toolkit. We also used the text normalization component from the NVIDIA NeMo toolkit for English, German, Spanish and Chinese, and the Vinorm text normalization package for Vietnamese.
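As an illustration of the word-segmentation requirement above, here is a minimal stdlib sketch that approximates the space-delimited, punctuation-separated tokenization a toolkit such as spaCy would produce for English. It is an approximation for illustration only, not a replacement for the actual toolkits:

```python
import re

def word_segment(sentence: str) -> str:
    # Split off punctuation as separate, space-delimited tokens,
    # roughly mimicking the output of a real tokenizer such as spaCy.
    return " ".join(re.findall(r"\w+|[^\w\s]", sentence))

print(word_segment("That is, it is a testing text."))
# That is , it is a testing text .
```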

Pre-trained model

| Model                 | #params | Arch. | Max length | Pre-training data                                                  |
|-----------------------|---------|-------|------------|--------------------------------------------------------------------|
| vinai/xphonebert-base | 88M     | base  | 512        | 330M phoneme-level sentences from nearly 100 languages and locales |

Example usage

import torch
from transformers import AutoModel, AutoTokenizer
from text2phonemesequence import Text2PhonemeSequence

# Load XPhoneBERT model and its tokenizer
xphonebert = AutoModel.from_pretrained("vinai/xphonebert-base")
tokenizer = AutoTokenizer.from_pretrained("vinai/xphonebert-base")

# Load Text2PhonemeSequence
# text2phone_model = Text2PhonemeSequence(language='eng-us', is_cuda=True)
text2phone_model = Text2PhonemeSequence(language='jpn', is_cuda=True)

# Input sequence that is already WORD-SEGMENTED (and text-normalized if applicable)
# sentence = "That is , it is a testing text ."  
sentence = "これ は 、 テスト テキスト です ."

input_phonemes = text2phone_model.infer_sentence(sentence)

input_ids = tokenizer(input_phonemes, return_tensors="pt")

with torch.no_grad():
    features = xphonebert(**input_ids)
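The features object returned above is a standard transformers model output whose last_hidden_state holds one vector per phoneme token. One common way to obtain a sentence-level representation (our assumption; this README does not prescribe a pooling strategy) is a masked mean over non-padding positions, using the tokenizer's attention_mask. The masked mean itself is sketched below in pure Python with toy numbers:

```python
# Masked mean pooling over per-phoneme hidden states.
# hidden: [seq_len][dim] embeddings; mask: 1 for real phoneme tokens,
# 0 for padding (as in the tokenizer's attention_mask).
def masked_mean(hidden, mask):
    total = [0.0] * len(hidden[0])
    count = sum(mask)
    for vec, m in zip(hidden, mask):
        if m:
            total = [t + v for t, v in zip(total, vec)]
    return [t / count for t in total]

hidden = [[1.0, 2.0], [3.0, 4.0], [0.0, 0.0]]
mask = [1, 1, 0]
print(masked_mean(hidden, mask))  # [2.0, 3.0]
```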

License

MIT License

Copyright (c) 2023 VinAI Research

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

xphonebert's People

Contributors

datquocnguyen, thelinhbkhn2014


xphonebert's Issues

About BERT

If I want to train a phoneme BERT for prosody that can support both Chinese pinyin phonemes and English phonemes (phonetic symbols), what should I do?

Doesn't it support phoneme segmentation for Japanese?

Hi, thanks for the great work and for sharing your research.

I'm really excited to apply your work to my TTS model.
In your paper, the proposed model supports phoneme segmentation to distinguish phonemes belonging to different word tokens.

But when I run the code for Japanese, the output phonemes don't have segmentation markers:

from transformers import AutoModel, AutoTokenizer
from text2phonemesequence import Text2PhonemeSequence

# Load XPhoneBERT model and its tokenizer
xphonebert = AutoModel.from_pretrained("vinai/xphonebert-base")
tokenizer = AutoTokenizer.from_pretrained("vinai/xphonebert-base")

# "これは、テストのだめのテキストです" means ->  This is texts for testing.
text2phone_model = Text2PhonemeSequence(language='jpn')
text2phone_model.infer_sentence("これは、テストのだめのテキストです")
# Output: 'k o ɾ e h a t e s ɯ t o n o d a m e n o t e k i s ɯ t o d e s ɯ'

Is this result caused by a wrong execution, or does the model not support phoneme segmentation?
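A hedged note on the expected format (inferred from the README example, not an authors' answer): the README passes a sentence that is already word-segmented with spaces, and the boundary marker ▁ then separates the phoneme strings of consecutive words. Assuming the per-word phoneme strings are available, they are joined like this:

```python
# Sketch of the phoneme-segmented output format: "▁" marks word
# boundaries between per-word phoneme strings. The per-word strings
# here are hard-coded; producing them is text2phonemesequence's job.
def join_with_boundaries(word_phonemes):
    return " ▁ ".join(word_phonemes)

print(join_with_boundaries(["k o ɾ e", "h a"]))
# k o ɾ e ▁ h a
```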

punctuation missing

Hi, I am trying XPhoneBERT for a Vietnamese TTS system, and I find that it simply skips punctuation characters when converting the input sequence to a phoneme sequence with the Text2PhonemeSequence library.
For example:

import torch
from transformers import AutoModel, AutoTokenizer
from text2phonemesequence import Text2PhonemeSequence

# Load XPhoneBERT model and its tokenizer
xphonebert = AutoModel.from_pretrained("vinai/xphonebert-base")
tokenizer = AutoTokenizer.from_pretrained("vinai/xphonebert-base")

# Load Text2PhonemeSequence
text2phone_model = Text2PhonemeSequence(language="vie-n", is_cuda=False)

# Input sequence that is already word-segmented (and text-normalized if applicable)
sentence1 = "dù sao tiền cũng đã trả rồi, chờ xem phản ứng từ thị trường thế nào đã rồi nói tiếp."
sentence2 = "dù sao tiền cũng đã trả rồi. chờ xem phản ứng từ thị trường thế nào đã rồi nói tiếp."
input_phonemes1 = text2phone_model.infer_sentence(sentence1)
input_phonemes2 = text2phone_model.infer_sentence(sentence2)

The phoneme sequences of the two inputs are the same:

z u ˧˨ ▁ s a w ˧˧ ▁ t i ə n ˧˨ ▁ k u ŋ͡m ˧ˀ˥ ▁ d a ˧ˀ˥ ▁ c a ˧˩˨ ▁ z o j ˧˨ ▁ c ɤ ˧˨ ▁ s ɛ m ˧˧ ▁ f a n ˧˩˨ ▁ ɯ ŋ ˨˦ ▁ t ɯ ˧˨ ▁  i ˨ˀ˩ ʔc ɯ ə ŋ ˧˨ ▁  e ˨˦ ▁ n a w ˧˨ ▁ d a ˧ˀ˥ ▁ z o j ˧˨ ▁ n ɔ j ˨˦ ▁ t i ə p ˦˥

This makes it hard for the model to learn breaks between sentence parts.
Please check it out!
Thank you!

Finetune this model on a new language

Hello, thank you for providing the multi-language model. I am interested in fine-tuning the model on a new language based on your framework. What steps should I take?

AttributeError: 'DistributedDataParallel' object has no attribute 'enc_p'

I'm using torch==1.12.1 and hit the error in train.py at this code block:

for epoch in range(epoch_str, hps.train.epochs + 1):
        if epoch < int(hps.train.epochs / 4):
            for child in net_g.enc_p.bert.children():
                for param in child.parameters():
                    param.requires_grad = False
        else:
            for child in net_g.enc_p.bert.children():
                for param in child.parameters():
                    param.requires_grad = True

Please help me fix this. Can you also provide the Python version, along with an updated requirements.txt, that should work? I am currently using Python 3.8.10.
Many thanks!
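A likely cause (an assumption, not a confirmed fix from the authors): DistributedDataParallel wraps the original model, so attributes such as enc_p live on net_g.module rather than on net_g itself. A minimal sketch of an unwrap helper that works for both wrapped and unwrapped models:

```python
# When a model is wrapped (e.g. by torch.nn.parallel.DistributedDataParallel),
# its original attributes are reachable via the wrapper's .module attribute.
def unwrap(model):
    return model.module if hasattr(model, "module") else model

# Usage sketch against the training loop above:
# for child in unwrap(net_g).enc_p.bert.children():
#     for param in child.parameters():
#         param.requires_grad = False
```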

Multispeaker

Hey, I was curious whether you have tried any methods for making multi-speaker VITS models with your encoder. Normal VITS supports multi-speaker training via an extra embedding layer that encodes the speaker ID and provides it to various downstream parts (all the parts that take g):

  if self.n_speakers > 0:
    g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1]
  else:
    g = None

  z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g)
  z_p = self.flow(z, y_mask, g=g)

  with torch.no_grad():
    # negative cross-entropy
    s_p_sq_r = torch.exp(-2 * logs_p) # [b, d, t]
    neg_cent1 = torch.sum(-0.5 * math.log(2 * math.pi) - logs_p, [1], keepdim=True) # [b, 1, t_s]
    neg_cent2 = torch.matmul(-0.5 * (z_p ** 2).transpose(1, 2), s_p_sq_r) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s]
    neg_cent3 = torch.matmul(z_p.transpose(1, 2), (m_p * s_p_sq_r)) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s]
    neg_cent4 = torch.sum(-0.5 * (m_p ** 2) * s_p_sq_r, [1], keepdim=True) # [b, 1, t_s]
    neg_cent = neg_cent1 + neg_cent2 + neg_cent3 + neg_cent4

    attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1)
    attn = monotonic_align.maximum_path(neg_cent, attn_mask.squeeze(1)).unsqueeze(1).detach()

  w = attn.sum(2)
  if self.use_sdp:
    l_length = self.dp(x, x_mask, w, g=g)
    l_length = l_length / torch.sum(x_mask)
  else:
    logw_ = torch.log(w + 1e-6) * x_mask
    logw = self.dp(x, x_mask, g=g)
    l_length = torch.sum((logw - logw_)**2, [1,2]) / torch.sum(x_mask) # for averaging 

  # expand prior
  m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2)
  logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2)

  z_slice, ids_slice = commons.rand_slice_segments(z, y_lengths, self.segment_size)
  o = self.dec(z_slice, g=g)
  return o, l_length, attn, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q)

Would using your XPhoneBERT encoder have much of an effect on this?

Gibberish output after 70k steps

Hello,
I tried training VITS with XPhoneBERT as in this repo using my own dataset. I processed the dataset (which is already normalized) as instructed in the README, using spaCy to segment words, and preprocessed it with preprocess.py.
After 70k steps, it only produces gibberish output like the following:

1.mp4

What am I missing?
Thank you!

Any comments on the size required for the dataset? (Also, can you share the pretrained models?)

Let's say I want to finetune/train base VITS (LJSpeech/LibriTTS) with XPhoneBERT; what should the size of the dataset be? Can a dataset with ~2000 clips (1 sec - 25 sec each) work? Also, I would love to keep the sample rate at ~44100 Hz. Any comments on that?

Also, here is a TTS-generated sample (the result of finetuning tortoise-tts):

https://drive.google.com/file/d/1OfZnURiaAEW0O_B8RVozt1M4Uadl-rEV/view?usp=sharing

Any guide to help me achieve this level of TTS quality would be appreciated.

Is it possible for you to share the pretrained VITS models (trained with XPhoneBERT)?

Text Normalization Process

First of all, thanks for putting this up! Maybe I missed it somewhere, but can you explain what the text normalization process should be for making new datasets? I know that there is word segmentation and text normalization. For languages like English, I'm assuming that word segmentation just relies on spaces, but for text normalization I'm a bit lost. Here is my best guess; does it look correct?

import re
from num2words import num2words
from nltk.tokenize import word_tokenize

def normalize_text(text): # improvised: I have no idea what the real "normalization" is, but this seems to match what's in the sample dataset
    # Convert to lowercase
    text = text.lower()

    # Convert numbers to words
    text = re.sub(r'\b\d+\b', lambda x: num2words(int(x.group())), text)

    # Replace opening and closing quotes
    text = re.sub(r"(\s)\"(\w)", lambda m: m.group(1) + "``" + m.group(2), text)
    text = re.sub(r"(\w)\"(\s)", lambda m: m.group(1) + "''" + m.group(2), text)

    # Tokenize text
    tokens = word_tokenize(text)

    # Join tokens back together with spaces in between
    normalized_text = ' '.join(tokens)

    return normalized_text

Dimension out of range error, have tried it with various versions of torch

-- Process 0 terminated with the following error:
Traceback (most recent call last):
File "/home/jumpcloud/miniconda3/envs/vits/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 69, in _wrap
fn(i, *args)
File "/home/jumpcloud/libraries/XPhoneBERT/VITS_with_XPhoneBERT/train.py", line 130, in run
train_and_evaluate(rank, epoch, hps, [net_g, net_d], [optim_g, optim_d], [scheduler_g, scheduler_d], scaler,
File "/home/jumpcloud/libraries/XPhoneBERT/VITS_with_XPhoneBERT/train.py", line 152, in train_and_evaluate
for batch_idx, (x, attention_mask, x_lengths, spec, spec_lengths, y, y_lengths) in enumerate(train_loader):
File "/home/jumpcloud/miniconda3/envs/vits/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 628, in __next__
data = self._next_data()
File "/home/jumpcloud/miniconda3/envs/vits/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1333, in _next_data
return self._process_data(data)
File "/home/jumpcloud/miniconda3/envs/vits/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1359, in _process_data
data.reraise()
File "/home/jumpcloud/miniconda3/envs/vits/lib/python3.8/site-packages/torch/_utils.py", line 543, in reraise
raise exception
IndexError: Caught IndexError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "/home/jumpcloud/miniconda3/envs/vits/lib/python3.8/site-packages/torch/utils/data/_utils/worker.py", line 302, in _worker_loop
data = fetcher.fetch(index)
File "/home/jumpcloud/miniconda3/envs/vits/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 61, in fetch
return self.collate_fn(data)
File "/home/jumpcloud/libraries/XPhoneBERT/VITS_with_XPhoneBERT/data_utils.py", line 115, in __call__
torch.LongTensor([x[2].size(1) for x in batch]),
File "/home/jumpcloud/libraries/XPhoneBERT/VITS_with_XPhoneBERT/data_utils.py", line 115, in <listcomp>
torch.LongTensor([x[2].size(1) for x in batch]),
IndexError: Dimension out of range (expected to be in range of [-1, 0], but got 1)
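A guess at the cause (not confirmed): the failing line calls x[2].size(1), i.e. it assumes each batch item's spectrogram is 2-D with shape [n_mels, T]; a 1-D spectrogram makes dimension 1 out of range. Modeling shapes as plain tuples, the invariant looks like this:

```python
# The collate function assumes each batch item's spectrogram is 2-D,
# so that shape[1] is its time dimension. A 1-D item reproduces the
# "Dimension out of range" failure; check what preprocessing produced.
def spec_length(shape):
    if len(shape) < 2:
        raise IndexError(
            f"Dimension out of range: spectrogram shape {shape} is not 2-D"
        )
    return shape[1]

print(spec_length((80, 245)))  # 245
```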
