
Comments (29)

v-nhandt21 commented on September 2, 2024

I'll put together a summary when I have some free time; I've since moved on to researching Voice Cloning with a different model and no longer use this one.

from visv2tts.

kingkong135 commented on September 2, 2024

You can use the model here: vivos_ViSV2TTS. I trained it to 150k steps and it sounds decent.


v-nhandt21 commented on September 2, 2024

Could you share it for reference?

Try this one, I just updated it; test with VIVOS first, then extend to your own dataset.
I have updated the pipeline for training: https://github.com/v-nhandt21/ViSV2TTS/blob/master/README.md
Try to test the pipeline first with VIVOS, then configure it to run with your data.

May I ask whether the vi2IPA_split preprocessing step can be applied to TTS algorithms such as VITS and VITS2?

All of them work. vi2IPA converts raw text into IPA graphemes; you can also try the ARPAbet form:

https://github.com/v-nhandt21/ViMFA/blob/main/phoneme_dict/viARPAbet.txt
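The IPA/ARPAbet conversion described above boils down to a per-word dictionary lookup. A minimal sketch, assuming a viARPAbet.txt-style file maps each word to a space-separated phoneme string (the actual file format is an assumption, not verified against the repo):

```python
# Minimal grapheme-to-phoneme lookup sketch.
# The sample entries below are stand-ins for lines loaded from a
# pronunciation file such as viARPAbet.txt (format assumed).

def load_phoneme_dict(lines):
    """Parse 'word PH0 PH1 ...' lines into a word -> phoneme-list dict."""
    table = {}
    for line in lines:
        parts = line.strip().split()
        if len(parts) >= 2:
            table[parts[0].lower()] = parts[1:]
    return table

def text_to_phonemes(text, table, unk="<unk>"):
    """Map each word to its phonemes, falling back to <unk> for OOV words."""
    out = []
    for word in text.lower().split():
        out.extend(table.get(word, [unk]))
    return out

sample = ["xin S IN1", "chào CH AO2"]  # made-up phoneme strings for illustration
table = load_phoneme_dict(sample)
print(text_to_phonemes("xin chào", table))  # ['S', 'IN1', 'CH', 'AO2']
```

A real front end would also handle punctuation and tone marks; this only shows the lookup step.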


UncleBob2 commented on September 2, 2024

You can use the model here: vivos_ViSV2TTS. I trained it to 150k steps and it sounds decent.

Thank you, it is working for me on Ubuntu.


UncleBob2 commented on September 2, 2024

You can use the model here: vivos_ViSV2TTS. I trained it to 150k steps and it sounds decent.

Have you tried training on larger data? :))

The data from VIVOS is for source code and environment validation only; I don't think it would be enough for the model to perform cloning. The data I used ranges from 200-1000 hours of audio.

I think 200-1000 hours of audio is too much. In the past, I was able to clone a voice using an hour or less of it. BTW, I am currently testing this model and it is working quite well:

https://github.com/rhasspy/piper-phonemize


v-nhandt21 commented on September 2, 2024

You can use the model here: vivos_ViSV2TTS. I trained it to 150k steps and it sounds decent.

Have you tried training on larger data? :))
The data from VIVOS is for source code and environment validation only; I don't think it would be enough for the model to perform cloning. The data I used ranges from 200-1000 hours of audio.

Not yet; partly because my resources don't allow it, I usually test with datasets under 25 hours. For voice cloning, I think the less data required while keeping accuracy high, the better; some models need under 10 minutes, such as RVC or so-vits-svc (though of course their input is audio =))

Hmm, so-vits is voice conversion though, i.e. speech-to-speech :))


v-nhandt21 commented on September 2, 2024

You can use the model here: vivos_ViSV2TTS. I trained it to 150k steps and it sounds decent.

Have you tried training on larger data? :))
The data from VIVOS is for source code and environment validation only; I don't think it would be enough for the model to perform cloning. The data I used ranges from 200-1000 hours of audio.

Hello bro, how many training steps did it take for your dataset to converge and reach the quality in your demo file (vits/audio/sontung_clone2.wav)?

I got this audio at 1M iterations: https://github.com/v-nhandt21/ViSV2TTS/blob/master/vits/audio/sontung_clone.wav


dugduy commented on September 2, 2024

I'll put together a summary when I have some free time; I've since moved on to researching Voice Cloning with a different model and no longer use this one.

Which model are you using now? Could you share it for reference?


UncleBob2 commented on September 2, 2024

Could you share it for reference?


v-nhandt21 commented on September 2, 2024

Could you share it for reference?

Try this one, I just updated it; test with VIVOS first, then extend to your own dataset.

I have updated the pipeline for training: https://github.com/v-nhandt21/ViSV2TTS/blob/master/README.md

Try to test the pipeline first with VIVOS, then configure it to run with your data.


kingkong135 commented on September 2, 2024

Could you share it for reference?

Try this one, I just updated it; test with VIVOS first, then extend to your own dataset.

I have updated the pipeline for training: https://github.com/v-nhandt21/ViSV2TTS/blob/master/README.md

Try to test the pipeline first with VIVOS, then configure it to run with your data.

May I ask whether the vi2IPA_split preprocessing step can be applied to TTS algorithms such as VITS and VITS2?


UncleBob2 commented on September 2, 2024

Thank you.
It seems that I have to convert the UTF-16 to UTF-8.

For training the model, where is the train_ms.py file?
python train_ms.py -c configs/vivos.json -m vivos


UncleBob2 commented on September 2, 2024

There is a mistake here:

cat vivos/test/prompts.txt > DATA/val.txt
cat vivos/test/prompts.txt > DATA/train.txt
cat vivos/train/prompts.txt >> DATA/train.txt

Shouldn't it be test to val and train to train? Why are we putting the test set into the training set?

cat vivos/test/prompts.txt > DATA/val.txt
cat vivos/train/prompts.txt >> DATA/train.txt


v-nhandt21 commented on September 2, 2024

There is a mistake here:

cat vivos/test/prompts.txt > DATA/val.txt
cat vivos/test/prompts.txt > DATA/train.txt
cat vivos/train/prompts.txt >> DATA/train.txt

Shouldn't it be test to val and train to train? Why are we putting the test set into the training set?

cat vivos/test/prompts.txt > DATA/val.txt
cat vivos/train/prompts.txt >> DATA/train.txt

No, I did it intentionally; I merge them:

  • val.txt = test set
  • train.txt = test set + train set

That way the model trains on more data, because the held-out test set is not too important in speech synthesis.

P/S: VIVOS is for checking the source code only; we really need more data for this.
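The intentional merge described above can be sketched in Python (a sketch only, not the repo's actual preprocessing code; the demo uses throwaway files standing in for vivos/{test,train}/prompts.txt):

```python
import tempfile
from pathlib import Path

def build_splits(test_prompts: Path, train_prompts: Path, out_dir: Path) -> None:
    """val.txt = test set; train.txt = test set + train set (more training data)."""
    out_dir.mkdir(parents=True, exist_ok=True)
    test_text = test_prompts.read_text(encoding="utf-8")
    train_text = train_prompts.read_text(encoding="utf-8")
    (out_dir / "val.txt").write_text(test_text, encoding="utf-8")
    (out_dir / "train.txt").write_text(test_text + train_text, encoding="utf-8")

# Tiny demo with throwaway files in a temp directory.
root = Path(tempfile.mkdtemp())
(root / "test_prompts.txt").write_text("t1\n", encoding="utf-8")
(root / "train_prompts.txt").write_text("a\nb\n", encoding="utf-8")
build_splits(root / "test_prompts.txt", root / "train_prompts.txt", root / "DATA")
print((root / "DATA" / "train.txt").read_text(encoding="utf-8"))  # t1, a, b on separate lines
```

This matches the cat redirects above: the test prompts are written to val.txt and prepended to train.txt.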


UncleBob2 commented on September 2, 2024

Thanks a lot. I am fighting with this all the way, i.e. running on Windows 10 instead of Linux:

(viclone) C:\Users\aiwinsor\Documents\dev\ViSV2TTS>python app.py
Traceback (most recent call last):
  File "app.py", line 84, in <module>
    object = VoiceClone("vits/logs/vivos/G_7700000.pth")
  File "app.py", line 58, in __init__
    _ = utils.load_checkpoint(checkpoint_path, self.net_g, None)
  File "C:\Users\aiwinsor\Documents\dev\ViSV2TTS\vits\utils.py", line 19, in load_checkpoint
    assert os.path.isfile(checkpoint_path)
AssertionError

(viclone) C:\Users\aiwinsor\Documents\dev\ViSV2TTS>


v-nhandt21 commented on September 2, 2024

File "app.py", line 84, in <module>
object = VoiceClone("vits/logs/vivos/G_7700000.pth")

You can try using an absolute path like "C:\Users\aiwinsor\Documents\dev\ViSV2TTS\vits\logs\vivos\G_7700000.pth"
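The AssertionError means the checkpoint path does not resolve from the current working directory. A small debugging sketch (resolve_checkpoint is a hypothetical helper, not part of the repo) that fails with a more informative message than the bare assert:

```python
import os

def resolve_checkpoint(path):
    """Return the absolute path if the file exists, else raise a clear error."""
    abs_path = os.path.abspath(path)
    if not os.path.isfile(abs_path):
        raise FileNotFoundError(
            f"Checkpoint not found: {abs_path} (cwd={os.getcwd()})"
        )
    return abs_path

# Calling resolve_checkpoint("vits/logs/vivos/G_7700000.pth") from the wrong
# directory raises FileNotFoundError showing the full path it looked for,
# which makes the missing-logs-folder problem obvious.
```

Printing the resolved path this way quickly shows whether the script is being run from the repo root or from inside vits/.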


UncleBob2 commented on September 2, 2024

I gave up on running the code in Windows 10 and am running it on Ubuntu using:

working with VIVOS
wget http://ailab.hcmus.edu.vn/assets/vivos.tar.gz
tar xzf vivos.tar.gz

I was able to set up the whole environment without any issues.

python Step1_data_processing.py OK
python Step2_extract_feature.py OK

But I am getting an error here:

python train_ms.py -c configs/vivos.json -m vivos

Below are my errors:

/home/aiwinsor/miniconda3/envs/viclone/lib/python3.8/site-packages/torch/functional.py:606: UserWarning: stft will soon require the return_complex parameter be given for real inputs, and will further require that return_complex=True in a future PyTorch release. (Triggered internally at ../aten/src/ATen/native/SpectralOps.cpp:800.)
  return _VF.stft(input, n_fft, hop_length, win_length, window,  # type: ignore[attr-defined]
[the warning above is repeated once per worker process]
Traceback (most recent call last):
  File "train_ms.py", line 294, in <module>
    main()
  File "train_ms.py", line 50, in main
    mp.spawn(run, nprocs=n_gpus, args=(n_gpus, hps,))
  File "/home/aiwinsor/miniconda3/envs/viclone/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 240, in spawn
    return start_processes(fn, args, nprocs, join, daemon, start_method='spawn')
  File "/home/aiwinsor/miniconda3/envs/viclone/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 198, in start_processes
    while not context.join():
  File "/home/aiwinsor/miniconda3/envs/viclone/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 160, in join
    raise ProcessRaisedException(msg, error_index, failed_process.pid)
torch.multiprocessing.spawn.ProcessRaisedException:

-- Process 0 terminated with the following error:
Traceback (most recent call last):
  File "/home/aiwinsor/miniconda3/envs/viclone/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 69, in _wrap
    fn(i, *args)
  File "/home/aiwinsor/vits/train_ms.py", line 118, in run
    train_and_evaluate(rank, epoch, hps, [net_g, net_d], [optim_g, optim_d], [scheduler_g, scheduler_d], scaler, [train_loader, eval_loader], logger, [writer, writer_eval])
  File "/home/aiwinsor/vits/train_ms.py", line 148, in train_and_evaluate
    mel = spec_to_mel_torch(
  File "/home/aiwinsor/vits/mel_processing.py", line 78, in spec_to_mel_torch
    mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax)
TypeError: mel() takes 0 positional arguments but 5 were given


v-nhandt21 commented on September 2, 2024

I gave up on running the code in Windows 10 and am running it on Ubuntu...
[quoted log trimmed; it ends with:]
TypeError: mel() takes 0 positional arguments but 5 were given

I think this error may be caused by the library version: https://librosa.org/doc/main/generated/librosa.filters.mel.html

My librosa version is 0.8.0; could you try:

conda install librosa=0.8.0

or

python -m pip install librosa==0.8.0
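The TypeError arises because newer librosa releases make librosa.filters.mel keyword-only, so the five positional arguments in mel_processing.py are rejected. A self-contained toy illustration of the mechanism (the mel below is a stand-in with a keyword-only signature, not the real librosa function):

```python
# Toy reproduction of the keyword-only API change behind the error.
def mel(*, sr, n_fft, n_mels, fmin, fmax):
    """Keyword-only signature, mimicking newer librosa.filters.mel."""
    return (sr, n_fft, n_mels, fmin, fmax)

# Old positional call style fails:
# TypeError: mel() takes 0 positional arguments but 5 were given
try:
    mel(22050, 1024, 80, 0, None)
except TypeError as e:
    print(e)

# Keyword call style works (and is also accepted by older librosa):
print(mel(sr=22050, n_fft=1024, n_mels=80, fmin=0, fmax=None))
```

So the two fixes are equivalent in effect: downgrade librosa so the positional call still works, or switch the call site to keyword arguments.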


UncleBob2 commented on September 2, 2024

Thank you very much. I have only just started training now. May I ask whether you have looked at https://github.com/Plachtaa/VITS-fast-fine-tuning?


ppthanhtn commented on September 2, 2024

File "app.py", line 84, in <module>
object = VoiceClone("vits/logs/vivos/G_7700000.pth")

You can try using an absolute path like "C:\Users\aiwinsor\Documents\dev\ViSV2TTS\vits\logs\vivos\G_7700000.pth"

I don't see any logs folder inside the vits folder?


UncleBob2 commented on September 2, 2024

File "app.py", line 84, in <module>
object = VoiceClone("vits/logs/vivos/G_7700000.pth")
You can try using an absolute path like "C:\Users\aiwinsor\Documents\dev\ViSV2TTS\vits\logs\vivos\G_7700000.pth"

I don't see any logs folder inside the vits folder?

The checkpoint is a big file; that may be why he did not upload it.


ppthanhtn commented on September 2, 2024


ppthanhtn commented on September 2, 2024

@kingkong135 it seems this source code no longer works; could you share your working version with me?

Thank you!


kingkong135 commented on September 2, 2024

@kingkong135 it seems this source code no longer works; could you share your working version with me?

Thank you!

It still runs fine for me; if anything, you just edit the following two calls in mel_processing.py, depending on the library versions you use:

    spec = torch.stft(y, n_fft=n_fft, hop_length=hop_size, win_length=win_size,
                      window=hann_window[wnsize_dtype_device], center=center,
                      pad_mode='reflect', normalized=False, onesided=True)

    mel = librosa_mel_fn(sr=sampling_rate, n_fft=n_fft, n_mels=num_mels, fmin=fmin, fmax=fmax)


UncleBob2 commented on September 2, 2024

@kingkong135 it seems this source code no longer works; could you share your working version with me?

Thank you!

Follow the instructions here:

conda create -y -n viclone python=3.8
conda activate viclone
conda install cudatoolkit=11.3.1 cudnn=8.2.1

python -m pip install torch==1.12.0+cu116 torchvision==0.13.0+cu116 torchaudio==0.12.0 --extra-index-url https://download.pytorch.org/whl/cu116
cd vits
python -m pip install -r requirements.txt

Make sure that you downgrade to librosa==0.8.0.

You will also need to downgrade gradio and httpx.


UncleBob2 commented on September 2, 2024

Has anyone used this website: https://ttsmaker.com/? It can change Voice Speed, Pitch Adjustment, and so on. I would like to build software like that.


v-nhandt21 commented on September 2, 2024

You can use the model here: vivos_ViSV2TTS. I trained it to 150k steps and it sounds decent.

Have you tried training on larger data? :))

The data from VIVOS is for source code and environment validation only; I don't think it would be enough for the model to perform cloning. The data I used ranges from 200-1000 hours of audio.


kingkong135 commented on September 2, 2024

You can use the model here: vivos_ViSV2TTS. I trained it to 150k steps and it sounds decent.

Have you tried training on larger data? :))

The data from VIVOS is for source code and environment validation only; I don't think it would be enough for the model to perform cloning. The data I used ranges from 200-1000 hours of audio.

Not yet; partly because my resources don't allow it, I usually test with datasets under 25 hours. For voice cloning, I think the less data required while keeping accuracy high, the better; some models need under 10 minutes, such as RVC or so-vits-svc (though of course their input is audio =))


thanhlong1997 commented on September 2, 2024

You can use the model here: vivos_ViSV2TTS. I trained it to 150k steps and it sounds decent.

Have you tried training on larger data? :))

The data from VIVOS is for source code and environment validation only; I don't think it would be enough for the model to perform cloning. The data I used ranges from 200-1000 hours of audio.

Hello bro, how many training steps did it take for your dataset to converge and reach the quality in your demo file (vits/audio/sontung_clone2.wav)?

