
vits's Introduction

Update 2022-11-08

The model and the API are no longer provided. You can use the dataset I compiled instead: click here.

VITS Genshin Impact Voice Synthesis V2

This repo contains the modifications I made to the source code to train the Genshin Impact VITS model, along with the new config files.

Please note that all of the APIs below have been shut down.

Thanks to Stardust (星尘) and the National Supercomputing Center in Guangzhou for the compute support, to the VITS authors Jaehyeon Kim, Jungil Kong, and Juhee Son, and to the ContentVec author Kaizhi Qian. The copyright of all audio files used to train this model belongs to miHoYo Technology (Shanghai) Co., Ltd.

Supported speakers: ['派蒙', '凯亚', '安柏', '丽莎', '琴', '香菱', '枫原万叶', '迪卢克', '温迪', '可莉', '早柚', '托马', '芭芭拉', '优菈', '云堇', '钟离', '魈', '凝光', '雷电将军', '北斗', '甘雨', '七七', '刻晴', '神里绫华', '戴因斯雷布', '雷泽', '神里绫人', '罗莎莉亚', '阿贝多', '八重神子', '宵宫', '荒泷一斗', '九条裟罗', '夜兰', '珊瑚宫心海', '五郎', '散兵', '女士', '达达利亚', '莫娜', '班尼特', '申鹤', '行秋', '烟绯', '久岐忍', '辛焱', '砂糖', '胡桃', '重云', '菲谢尔', '诺艾尔', '迪奥娜', '鹿野院平藏']

Query string parameters:

Parameter | Type | Description
text | string | Text to synthesize; common punctuation is supported. English may not be generated correctly, and digits should be converted to the corresponding Chinese characters before synthesis.
speaker | string | Speaker name; must be one of the names listed above.
noise | float | The noise_factor used during generation; controls the degree of variation in emotion and similar qualities. Default: 0.667.
format | string | Output audio format; must be mp3 or wav. Default: mp3.

Example: http://233366.proxy.nscc-gz.cn:8888/?text=你好&speaker=枫原万叶
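
For reference only, since the endpoint above is no longer running: a minimal Python sketch of how such a request could be issued. The requests dependency and the output filename are assumptions, not part of the original instructions.

import requests  # assumed third-party dependency

# Hypothetical call against the (now offline) V2 endpoint.
params = {
    "text": "你好",          # text to synthesize
    "speaker": "枫原万叶",   # must be one of the supported speaker names above
    "noise": 0.667,          # noise_factor: degree of expressive variation
    "format": "wav",         # "mp3" or "wav"
}
resp = requests.get("http://233366.proxy.nscc-gz.cn:8888/", params=params, timeout=60)
resp.raise_for_status()
with open("output.wav", "wb") as f:  # output path is arbitrary
    f.write(resp.content)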

VITS Genshin Impact Voice Synthesis V1

In addition, you can try the public API at http://233366.proxy.nscc-gz.cn:8888/. This API may be used for fan creations and similar purposes, but any commercial use is prohibited. Note that repeated generations will not produce identical results; you can generate several times and pick the best one. A web UI for synthesis is also available at http://150.158.164.18:9069/. Thanks to Stardust (星尘) and the National Supercomputing Center in Guangzhou for the compute support, and to the VITS authors Jaehyeon Kim, Jungil Kong, and Juhee Son. The copyright of all audio files used to train this model belongs to miHoYo Technology (Shanghai) Co., Ltd.

Query string parameters:

Parameter | Type | Description
text | string | Text to synthesize; common punctuation is supported. English may not be generated correctly, and digits should be converted to the corresponding Chinese characters before synthesis.
speaker | string | Speaker name; must be one of the names listed above.
noise | float | The noise_factor used during generation; controls the degree of variation in emotion and similar qualities. Default: 0.667.
noisew | float | The noise_factor_w used during generation; controls the degree of variation in phoneme durations. Default: 0.8.
length | float | The length_factor used during generation; controls the overall speaking rate. Default: 1.2.
format | string | Output audio format; must be mp3 or wav. Default: mp3.

Example: http://233366.proxy.nscc-gz.cn:8888/?text=你好&speaker=派蒙
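
The V1 endpoint (also offline) additionally accepted noisew and length. A sketch of assembling the full query URL with only the Python standard library; the parameter values shown are simply the documented defaults:

from urllib.parse import urlencode

base = "http://233366.proxy.nscc-gz.cn:8888/"
query = urlencode({
    "text": "你好",
    "speaker": "派蒙",
    "noise": 0.667,   # noise_factor
    "noisew": 0.8,    # noise_factor_w: phoneme-duration variation
    "length": 1.2,    # length_factor: overall speaking rate
    "format": "mp3",
})
print(base + "?" + query)  # the endpoint is no longer reachable, so only the URL is printed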

VITS: Conditional Variational Autoencoder with Adversarial Learning for End-to-End Text-to-Speech

Jaehyeon Kim, Jungil Kong, and Juhee Son

In our recent paper, we propose VITS: Conditional Variational Autoencoder with Adversarial Learning for End-to-End Text-to-Speech.

Several recent end-to-end text-to-speech (TTS) models enabling single-stage training and parallel sampling have been proposed, but their sample quality does not match that of two-stage TTS systems. In this work, we present a parallel end-to-end TTS method that generates more natural sounding audio than current two-stage models. Our method adopts variational inference augmented with normalizing flows and an adversarial training process, which improves the expressive power of generative modeling. We also propose a stochastic duration predictor to synthesize speech with diverse rhythms from input text. With the uncertainty modeling over latent variables and the stochastic duration predictor, our method expresses the natural one-to-many relationship in which a text input can be spoken in multiple ways with different pitches and rhythms. A subjective human evaluation (mean opinion score, or MOS) on the LJ Speech, a single speaker dataset, shows that our method outperforms the best publicly available TTS systems and achieves a MOS comparable to ground truth.

Visit our demo for audio samples.

We also provide the pretrained models.

Update note: Thanks to Rishikesh (ऋषिकेश), our interactive TTS demo is now available on Colab Notebook.

Figure: VITS at training (left) and VITS at inference (right).

Pre-requisites

  1. Python >= 3.6
  2. Clone this repository
  3. Install python requirements. Please refer to requirements.txt
    1. You may need to install espeak first: apt-get install espeak
  4. Download datasets
    1. Download and extract the LJ Speech dataset, then rename or create a link to the dataset folder: ln -s /path/to/LJSpeech-1.1/wavs DUMMY1
    2. For multi-speaker setting, download and extract the VCTK dataset, and downsample wav files to 22050 Hz. Then rename or create a link to the dataset folder: ln -s /path/to/VCTK-Corpus/downsampled_wavs DUMMY2
  5. Build Monotonic Alignment Search and run preprocessing if you use your own datasets.
# Cython-version Monotonic Alignment Search
cd monotonic_align
python setup.py build_ext --inplace

# Preprocessing (g2p) for your own datasets. Preprocessed phonemes for LJ Speech and VCTK have been already provided.
# python preprocess.py --text_index 1 --filelists filelists/ljs_audio_text_train_filelist.txt filelists/ljs_audio_text_val_filelist.txt filelists/ljs_audio_text_test_filelist.txt 
# python preprocess.py --text_index 2 --filelists filelists/vctk_audio_sid_text_train_filelist.txt filelists/vctk_audio_sid_text_val_filelist.txt filelists/vctk_audio_sid_text_test_filelist.txt
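
Judging from the --text_index values above, the single-speaker filelists appear to use one "wav_path|text" line per utterance and the multi-speaker ones "wav_path|speaker_id|text"; this is an inference from the commands, not stated in the original. A small sanity-check sketch under that assumption:

# Assumed line formats (inferred from --text_index 1 vs. 2 above):
#   single-speaker: DUMMY1/LJ001-0001.wav|<transcript>
#   multi-speaker:  DUMMY2/p225_001.wav|0|<transcript>
def check_filelist(path, expected_fields):
    with open(path, encoding="utf-8") as f:
        for lineno, line in enumerate(f, 1):
            fields = line.rstrip("\n").split("|")
            if len(fields) != expected_fields:
                print(f"{path}:{lineno}: expected {expected_fields} fields, got {len(fields)}")

check_filelist("filelists/ljs_audio_text_train_filelist.txt", 2)        # path|text
check_filelist("filelists/vctk_audio_sid_text_train_filelist.txt", 3)   # path|sid|text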

Training Example

# LJ Speech
python train.py -c configs/ljs_base.json -m ljs_base

# VCTK
python train_ms.py -c configs/vctk_base.json -m vctk_base

Inference Example

See inference.ipynb
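
For a quick orientation, here is a condensed Python sketch of the single-speaker inference flow along the lines of inference.ipynb; the checkpoint path and the example sentence are placeholders, details may differ slightly from the notebook, and a CUDA device is assumed:

import torch
import commons
import utils
from models import SynthesizerTrn
from text import text_to_sequence
from text.symbols import symbols

hps = utils.get_hparams_from_file("configs/ljs_base.json")

def get_text(text, hps):
    # Convert text to a sequence of symbol ids, optionally interspersed with blanks.
    text_norm = text_to_sequence(text, hps.data.text_cleaners)
    if hps.data.add_blank:
        text_norm = commons.intersperse(text_norm, 0)
    return torch.LongTensor(text_norm)

net_g = SynthesizerTrn(
    len(symbols),
    hps.data.filter_length // 2 + 1,
    hps.train.segment_size // hps.data.hop_length,
    **hps.model).cuda()
net_g.eval()
utils.load_checkpoint("pretrained_ljs.pth", net_g, None)  # placeholder checkpoint path

stn_tst = get_text("VITS is Awesome!", hps)
with torch.no_grad():
    x_tst = stn_tst.cuda().unsqueeze(0)
    x_tst_lengths = torch.LongTensor([stn_tst.size(0)]).cuda()
    # noise_scale / noise_scale_w / length_scale correspond roughly to the
    # noise / noisew / length query parameters described earlier.
    audio = net_g.infer(x_tst, x_tst_lengths,
                        noise_scale=0.667, noise_scale_w=0.8,
                        length_scale=1.0)[0][0, 0].data.cpu().float().numpy()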

vits's People

Contributors

jaywalnut310, jik876, juheeuu, stardust-minus, w4123


vits's Issues

Plan for the next version

Genshin Impact:

  • Aether/Lumine (Traveler) voices
  • Update the model to cover all characters in the new game version

Honkai Impact 3rd:

  • Extract the voice files
  • Extract the voice transcript files
  • Match audio to transcripts
  • Training

(Does anyone have spare compute to lend?)
Suggestions can be posted below.

A question about training

Hello, I use chinese_cleaners2 to process the Chinese text during training, and the following exception is printed while training:

Traceback (most recent call last):
  File "/home/featurize/data/vits-main/train_ms.py", line 119, in run
    train_and_evaluate(rank, epoch, hps, [net_g, net_d], [optim_g, optim_d], [scheduler_g, scheduler_d], scaler, [train_loader, eval_loader], logger, [writer, writer_eval])
  File "/home/featurize/data/vits-main/train_ms.py", line 147, in train_and_evaluate
    (z, z_p, m_p, logs_p, m_q, logs_q) = net_g(x, x_lengths, spec, spec_lengths, speakers)
  File "/home/featurize/data/vits-main/models.py", line 467, in forward
    z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g)
  File "/home/featurize/data/vits-main/models.py", line 237, in forward
    x = self.enc(x, x_mask, g=g)
  File "/environment/miniconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/featurize/data/vits-main/modules.py", line 166, in forward
    n_channels_tensor)
RuntimeError: The following operation failed in the TorchScript interpreter.
Traceback of TorchScript (most recent call last):
  File "/home/featurize/data/vits-main/commons.py", line 103, in fused_add_tanh_sigmoid_multiply
def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels):
  n_channels_int = n_channels[0]
  in_act = input_a + input_b
           ~~~~~~~~~~~~~~~~~ <--- HERE

After printing the shapes of input_a and input_b, I found that input_a has shape [64, 384, 500] and input_b has shape [64, 384, 1]. Tensors of these two shapes can be added via broadcasting, so the addition itself should not be a problem. In addition, the exception sometimes instead points to a convolution layer in the posterior encoder.
Have you run into this situation, and how did you solve it? I thought this kind of problem would be related to multi-GPU training, but I am using only one GPU, i.e. only one training process is created, so in principle there should be no data races. Looking forward to your reply.
