Comments (12)
v2 96 with the integrated Fast Maximum Likelihood Sampling Scheme. I have not tested it much; maybe CFM's advantage is speed.
from grad-svc.
So you basically mean quality=v2 96 and speed=v3 CFM?
from grad-svc.
Yes, CFM can use fewer steps.
from grad-svc.
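For context, a minimal sketch of why a CFM decoder can get by with so few steps: inference just integrates a learned ODE from noise to the target mel, so a handful of explicit Euler steps is often enough. The `vector_field` and `cond` names below are placeholders, not Grad-SVC's actual API.

```python
import torch

@torch.no_grad()
def cfm_euler_sample(vector_field, cond, shape, n_steps=4, device="cpu"):
    """Integrate dx/dt = v(x, t, cond) from t=0 (noise) to t=1 (mel) with Euler steps.

    vector_field: a trained flow-matching network predicting the velocity;
    cond: conditioning features (e.g. content embedding + speaker embedding).
    """
    x = torch.randn(shape, device=device)      # start from Gaussian noise
    dt = 1.0 / n_steps
    for i in range(n_steps):
        t = torch.full((shape[0],), i * dt, device=device)
        x = x + dt * vector_field(x, t, cond)  # one explicit Euler step
    return x
```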
So Grad-SVC is better than so-vits-svc 5.0 and the best version of Grad-SVC is v2 96, right? What is the difference between Grad-SVC v2 and v3?
from grad-svc.
Different parameters have different effects; for example, a bigger model gets better results. Grad-SVC and so-vits-svc 5.0 are both just small-model demos for SVC, so their true abilities have not been tested.
Grad-SVC v2 uses Fast Maximum Likelihood Sampling Scheme from https://github.com/huawei-noah/Speech-Backbones/tree/main/DiffVC
Grad-SVC v3 uses CFM (conditional flow matching) from https://github.com/shivammehta25/Matcha-TTS
from grad-svc.
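To make the v2/v3 difference concrete, here is a rough sketch of the baseline fixed-step reverse diffusion used by Grad-TTS-style decoders, reconstructed from that family of samplers rather than Grad-SVC's exact code; the names and the linear noise schedule are assumptions. The v2 Fast Maximum Likelihood scheme adds correction terms on top of updates like this so fewer steps suffice, while v3 replaces the whole thing with the CFM ODE shown above.

```python
import torch

@torch.no_grad()
def reverse_diffusion(score_fn, mu, n_steps=50, beta_min=0.05, beta_max=20.0):
    """Fixed-step probability-flow sampler in the style of Grad-TTS decoders.

    score_fn(x, mu, t): trained score estimator; mu: encoder (prior) mel.
    """
    x = mu + torch.randn_like(mu)                    # sample from the prior N(mu, I)
    h = 1.0 / n_steps
    for i in range(n_steps):
        t = 1.0 - (i + 0.5) * h                      # integrate backwards from t=1 to t=0
        beta_t = beta_min + t * (beta_max - beta_min)  # assumed linear noise schedule
        dx = 0.5 * (mu - x - score_fn(x, mu, t)) * beta_t * h
        x = x - dx                                   # one deterministic update
    return x
```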
Do you have any plans to release or develop the best SVC model?
from grad-svc.
See https://www.zhangxueyao.com/data/MultipleContentsSVC/index.html; someone else may do that.
from grad-svc.
May I know how gvc.pretrained.pth is built?
from grad-svc.
It was trained on the open-source data from https://github.com/Multi-Singer/Multi-Singer.github.io
from grad-svc.
I'm having a real hard time matching your loss graph results.
I've tried fine-tuning a single voice with about 1400 3-10 second samples. I've also tried fine-tuning 5 such voices at once. In both cases the results keep improving, but it took thousands of epochs and millions of iterations to get decent output, and the losses are still not down to where they end up in the published charts. Bumping up the learning rate didn't appear to help much.
I've also tried training from scratch on the dataset above. At about 900,000 iterations, it's nowhere near what I see in your loss charts. The encoder mel-spectrograms are still more noise than mel-spectrogram, and the other losses have also hardly come down.
Any thoughts?
from grad-svc.
Just to answer my own comment: it was normal not to see loss curves like the ones in the repo. The 3 main losses are not typical. Diffusion loss will generally drop quickly at first, but can then spend a very long time in the same range as every time step gets trained. The other two losses are adversarial, and so somewhat similar, though prior and mel will slowly drop over time.
I played around a lot, but in the end I replaced the attention in the encoder with FlashAttention 2, swapped out the continuous diffusion model for diffusers with a much larger UNet and their schedulers, trained on 4k voices (about 500k+ clips), and also moved it to latent space with a VAE. It trains many, many times faster and can do inference in about 1-3 steps. Great project to play with, thanks!
from grad-svc.
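For anyone curious what that kind of rework looks like, here is a minimal hedged sketch, not the commenter's actual code: few-step sampling with an off-the-shelf diffusers scheduler, a cross-attention UNet conditioned on content/speaker features, and a VAE that maps mels to and from latent space. All module sizes and names below are made up for illustration.

```python
import torch
from diffusers import AutoencoderKL, DPMSolverMultistepScheduler, UNet2DConditionModel

# Toy stand-ins for the modified pipeline: a latent UNet conditioned on
# content/speaker features and a VAE over mel-spectrograms (sizes are arbitrary).
unet = UNet2DConditionModel(
    sample_size=32, in_channels=4, out_channels=4, cross_attention_dim=256,
    block_out_channels=(64, 128),
    down_block_types=("CrossAttnDownBlock2D", "DownBlock2D"),
    up_block_types=("UpBlock2D", "CrossAttnUpBlock2D"),
)
vae = AutoencoderKL(in_channels=1, out_channels=1, latent_channels=4)
scheduler = DPMSolverMultistepScheduler(num_train_timesteps=1000)

@torch.no_grad()
def few_step_inference(cond, latent_shape=(1, 4, 32, 32), n_steps=3):
    """Sample mel latents in a few scheduler steps, then decode with the VAE."""
    scheduler.set_timesteps(n_steps)
    latents = torch.randn(latent_shape) * scheduler.init_noise_sigma
    for t in scheduler.timesteps:
        noise_pred = unet(latents, t, encoder_hidden_states=cond).sample
        latents = scheduler.step(noise_pred, t, latents).prev_sample
    return vae.decode(latents).sample   # back from latent space to a mel-like tensor

# cond stands in for projected content (e.g. HuBERT) + speaker features.
mel = few_step_inference(cond=torch.randn(1, 100, 256))
```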
Does the version you modified have better quality? If so, could you please share the source code?
from grad-svc.
Related Issues (17)
- A better alternative to Grad-TTS HOT 1
- Path error encountered during inference
- why skip_diff_train before fast_epochs HOT 7
- Something wrong with the decoder
- How much training data is needed? HOT 1
- Question about the electronic/metallic sound artifact HOT 1
- num_worker Issue
- How was the Speaker Encoder trained?
- What is the advantage for Grad-SVC, compare to So-VITS-SVC? HOT 3
- Does SVS work in english lyrics? HOT 2
- Regarding the error "Fail to allocate bitmap" during the training process HOT 4
- Error during training HOT 10
- Runtime error HOT 3
- training error HOT 5
- Setting base.yaml HOT 1
- Adjusting Hubert model