GPU error #16
Comments
In TTS/utils/synthesizer.py, line 376 (vocoder_device = "cpu"), change it to vocoder_device = "cuda".
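A minimal sketch of what this edit could look like, assuming use_cuda is already available in that scope; the surrounding code near line 376 in your checkout may differ:

```python
# TTS/utils/synthesizer.py (around line 376) -- illustrative only.
# Original line reported in this issue:
# vocoder_device = "cpu"

# Suggested change: keep the vocoder on the same device as the TTS model,
# e.g. derive the device from the use_cuda flag instead of hardcoding "cpu".
vocoder_device = "cuda" if use_cuda else "cpu"
```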
But if I'm using use_cuda=True, why do I need to do this?
I am also having the same problem while running the sample.py file. Changing the vocoder_device from "cpu" to "cuda" did not help either. Any pointers or guidance, please?
I am also trying to run it on GPU; on CPU there is no problem. If you find any idea or solution, please comment here.
When I run the sample.py file on Google Colab with a T4 GPU, it loads the model onto the GPU correctly, but when I do inference using inference_from_text it shows the error below:
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument index in method wrapper_CUDA__index_select)
I have tried many ways to bring both the model and the input text tensor onto the same device, but it keeps giving me the same error. It worked fine the first few times I used it. Please help @GokulNC
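For reference, a self-contained PyTorch sketch that reproduces and fixes this class of error; the nn.Embedding layer here only stands in for the model's text encoder (the actual model and call names in sample.py differ), since index_select errors typically come from an embedding lookup whose index tensor is still on the CPU:

```python
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Model weights live on the GPU (as in the Colab T4 setup above).
embedding = nn.Embedding(num_embeddings=100, embedding_dim=16).to(device)

# Token ids created with torch.tensor(...) default to the CPU.
token_ids = torch.tensor([[1, 5, 9, 3]])

# Calling embedding(token_ids) at this point would raise:
# RuntimeError: Expected all tensors to be on the same device ...
# because index_select needs the index tensor on the same device as the weights.

# Fix: move the input to the model's device before inference.
token_ids = token_ids.to(device)
with torch.no_grad():
    out = embedding(token_ids)
print(out.device)  # cuda:0 when a GPU is available
```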