Comments (15)
Got another error when using mixed precision:
Traceback (most recent call last):
File "train_first.py", line 444, in <module>
main()
File "/root/miniconda3/lib/python3.8/site-packages/click/core.py", line 1130, in __call__
return self.main(*args, **kwargs)
File "/root/miniconda3/lib/python3.8/site-packages/click/core.py", line 1055, in main
rv = self.invoke(ctx)
File "/root/miniconda3/lib/python3.8/site-packages/click/core.py", line 1404, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/root/miniconda3/lib/python3.8/site-packages/click/core.py", line 760, in invoke
return __callback(*args, **kwargs)
File "train_first.py", line 305, in main
optimizer.step('pitch_extractor')
File "/root/autodl-tmp/StyleTTS2/optimizers.py", line 32, in step
_ = [self._step(key, scaler) for key in keys]
File "/root/autodl-tmp/StyleTTS2/optimizers.py", line 32, in <listcomp>
_ = [self._step(key, scaler) for key in keys]
File "/root/autodl-tmp/StyleTTS2/optimizers.py", line 39, in _step
self.optimizers[key].step()
File "/root/miniconda3/lib/python3.8/site-packages/accelerate/optimizer.py", line 133, in step
self.scaler.step(self.optimizer, closure)
File "/root/miniconda3/lib/python3.8/site-packages/torch/cuda/amp/grad_scaler.py", line 336, in step
assert len(optimizer_state["found_inf_per_device"]) > 0, "No inf checks were recorded for this optimizer."
AssertionError: No inf checks were recorded for this optimizer.
What seems to be the problem? Thanks!
I have traced this to train_first.py, line 305 in 3e30081 (optimizer.step('pitch_extractor')).
It does make me wonder, @yl4579, if this is expected. Were the parameters of the pitch_extractor expected to get updated?
from styletts2.
I used 4 A100s with a batch size of 32 for the paper, and the checkpoint I shared was trained with 4 L40s with a batch size of 16. You can either decrease the batch size to 4 (as you only have one GPU), or you can decrease the max_len, which is now equivalent to 5 seconds. You definitely don't need clips that long for training.
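A quick sanity check of that figure, as a sketch: max_len is counted in mel frames, so its length in seconds depends on the sample rate and hop size. The specific numbers below are assumptions (they match the public StyleTTS2 configs as far as I know), so verify them against your own Configs/config.yml.

# Back-of-the-envelope check of the "5 seconds" figure above.
sample_rate = 24_000   # Hz, assumed
hop_length = 300       # samples per mel frame, assumed -> 80 frames per second
max_len = 400          # frames, the default that corresponds to "5 seconds"

seconds = max_len * hop_length / sample_rate
print(f"max_len = {max_len} frames ~ {seconds:.1f} s of audio")  # ~ 5.0 s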
from styletts2.
Also, for the first stage, you can try mixed precision; it doesn't seem to decrease the reconstruction quality in my experience, and it results in much faster training and half the memory use. All you need is
accelerate launch --mixed_precision=fp16 train_first.py --config_path ./Configs/config.yml
This is the stage that takes the most time anyway, so using mixed precision is always good practice.
from styletts2.
Also, for the first stage, you can try mixed precision; it doesn't seem to decrease the reconstruction quality in my experience, and it results in much faster training and half the memory use. All you need is
accelerate launch --mixed_precision=fp16 train_first.py --config_path ./Configs/config.yml
This is the stage that takes the most time anyway, so using mixed precision is always good practice.
Thanks! I'll give it a try!
from styletts2.
Got another error when using mixed precision:
Traceback (most recent call last):
File "train_first.py", line 444, in <module>
main()
File "/root/miniconda3/lib/python3.8/site-packages/click/core.py", line 1130, in __call__
return self.main(*args, **kwargs)
File "/root/miniconda3/lib/python3.8/site-packages/click/core.py", line 1055, in main
rv = self.invoke(ctx)
File "/root/miniconda3/lib/python3.8/site-packages/click/core.py", line 1404, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/root/miniconda3/lib/python3.8/site-packages/click/core.py", line 760, in invoke
return __callback(*args, **kwargs)
File "train_first.py", line 305, in main
optimizer.step('pitch_extractor')
File "/root/autodl-tmp/StyleTTS2/optimizers.py", line 32, in step
_ = [self._step(key, scaler) for key in keys]
File "/root/autodl-tmp/StyleTTS2/optimizers.py", line 32, in <listcomp>
_ = [self._step(key, scaler) for key in keys]
File "/root/autodl-tmp/StyleTTS2/optimizers.py", line 39, in _step
self.optimizers[key].step()
File "/root/miniconda3/lib/python3.8/site-packages/accelerate/optimizer.py", line 133, in step
self.scaler.step(self.optimizer, closure)
File "/root/miniconda3/lib/python3.8/site-packages/torch/cuda/amp/grad_scaler.py", line 336, in step
assert len(optimizer_state["found_inf_per_device"]) > 0, "No inf checks were recorded for this optimizer."
AssertionError: No inf checks were recorded for this optimizer.
What seems to be the problem? Thanks!
from styletts2.
@godspirit00 It says the loss is NaN because you are using mixed precision. I think you may have to change the loss here: https://github.com/yl4579/StyleTTS2/blob/main/losses.py#L22 to return F.l1_loss(y_mag, x_mag).
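A minimal sketch of what that change looks like, assuming the linked line sits inside a spectral convergence loss whose forward pass divides a Frobenius norm by the target's norm; the class name below is illustrative, so check losses.py for the exact one.

import torch
import torch.nn.functional as F


class SpectralConvergenceLoss(torch.nn.Module):
    """Loss between predicted and ground-truth STFT magnitudes."""

    def forward(self, x_mag, y_mag):
        # Presumed original form: a ratio of Frobenius norms (spectral convergence).
        # Under fp16 the division can overflow or return NaN/Inf when the
        # denominator underflows.
        # return torch.norm(y_mag - x_mag, p="fro") / torch.norm(y_mag, p="fro")

        # Suggested replacement for mixed precision: a plain L1 loss, which stays
        # finite whenever the inputs are finite.
        return F.l1_loss(y_mag, x_mag)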
from styletts2.
The error persists after changing the line.
Traceback (most recent call last):
File "train_first.py", line 444, in <module>
main()
File "/root/miniconda3/lib/python3.8/site-packages/click/core.py", line 1130, in __call__
return self.main(*args, **kwargs)
File "/root/miniconda3/lib/python3.8/site-packages/click/core.py", line 1055, in main
rv = self.invoke(ctx)
File "/root/miniconda3/lib/python3.8/site-packages/click/core.py", line 1404, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/root/miniconda3/lib/python3.8/site-packages/click/core.py", line 760, in invoke
return __callback(*args, **kwargs)
File "train_first.py", line 305, in main
optimizer.step('pitch_extractor')
File "/root/autodl-tmp/StyleTTS2/optimizers.py", line 32, in step
_ = [self._step(key, scaler) for key in keys]
File "/root/autodl-tmp/StyleTTS2/optimizers.py", line 32, in <listcomp>
_ = [self._step(key, scaler) for key in keys]
File "/root/autodl-tmp/StyleTTS2/optimizers.py", line 39, in _step
self.optimizers[key].step()
File "/root/miniconda3/lib/python3.8/site-packages/accelerate/optimizer.py", line 133, in step
self.scaler.step(self.optimizer, closure)
File "/root/miniconda3/lib/python3.8/site-packages/torch/cuda/amp/grad_scaler.py", line 336, in step
assert len(optimizer_state["found_inf_per_device"]) > 0, "No inf checks were recorded for this optimizer."
AssertionError: No inf checks were recorded for this optimizer.
It was at Epoch 50.
from styletts2.
Could it be related to the discriminators, or is it a TMA issue? Can you set the TMA epoch to a higher number but set start_ds in train_first.py to True, and see if it is a problem with the discriminators?
from styletts2.
Can you set the TMA epoch to a higher number but set start_ds in train_first.py to True
I tried searching for start_ds in train_first.py, but I can't find it.
from styletts2.
Sorry, I meant that you just modify the code to train the discriminator but not the aligner. But if it still doesn't work, you probably have to do it without mixed precision. It is highly sensitive to batch size, unfortunately, so it probably only works for large enough batches (like 16 or 32).
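To make that concrete, a purely hypothetical sketch of the kind of gating being suggested; the names start_ds and TMA_epoch follow this discussion rather than the actual train_first.py code, so treat it as an illustration of the idea, not a patch.

# Hypothetical illustration only: decouple discriminator training from the
# TMA (aligner) losses so each can be switched on independently.
TMA_epoch = 50  # epoch at which joint aligner (TMA) training would normally start

for epoch in range(100):
    start_ds = True                  # train the discriminators from the very start
    start_tma = epoch >= TMA_epoch   # keep the aligner losses gated on the epoch

    if start_ds:
        pass  # compute/backprop the adversarial and discriminator losses here
    if start_tma:
        pass  # compute the TMA (monotonic alignment) losses here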
from styletts2.
@stevenhillis I think you are correct. The pitch_extractor actually shouldn't be updated. Does removing this line fix the mixed precision problem?
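For what it's worth, the assertion itself is easy to reproduce outside the repo: torch.cuda.amp.GradScaler.step() raises exactly this message when the optimizer it is given has no gradients to unscale, which is what happens if a frozen module like the pitch extractor gets its own optimizer step under AMP. A self-contained sketch (requires a CUDA device; the "frozen" module here just stands in for the pitch extractor):

import torch

model = torch.nn.Linear(4, 4).cuda()
frozen = torch.nn.Linear(4, 4).cuda()              # stand-in for the pitch extractor
opt = torch.optim.Adam(model.parameters())
frozen_opt = torch.optim.Adam(frozen.parameters())
scaler = torch.cuda.amp.GradScaler()

with torch.cuda.amp.autocast():
    loss = model(torch.randn(2, 4, device="cuda")).sum()   # 'frozen' is never used

scaler.scale(loss).backward()
scaler.step(opt)          # fine: this optimizer's grads were unscaled and inf-checked
scaler.step(frozen_opt)   # raises AssertionError: "No inf checks were recorded ..."
scaler.update()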
from styletts2.
Good deal. Sure does!
from styletts2.
I have traced this to train_first.py, line 305 in 3e30081 (optimizer.step('pitch_extractor')).
I had the same 'inf' crash problem at epoch 50 with mixed precision and a batch size of 4. I can confirm that removing it allows the training to continue past epoch 50.
from styletts2.
I used 4 A100s with a batch size of 32 for the paper, and the checkpoint I shared was trained with 4 L40s with a batch size of 16. You can either decrease the batch size to 4 (as you only have one GPU), or you can decrease the max_len, which is now equivalent to 5 seconds. You definitely don't need clips that long for training.
@yl4579 Thanks for your great work.
I used 4 V100s with a batch size of 8 and a max_len of 200 with my dataset. When I set the batch size to 16, I ran out of memory.
So should I reduce max_len? How do I balance max_len and batch size, and how should I adjust these parameters?
Thanks a lot.
from styletts2.
@Moonmore See #81 for a detailed discussion.
from styletts2.