
GPU error #16

Open
rohitdahiya1 opened this issue Oct 20, 2023 · 4 comments
Comments

rohitdahiya1 commented Oct 20, 2023

When I run the sample.py file on Google Colab with a T4 GPU, the model loads onto the GPU correctly, but when I run inference using inference_from_text it shows the error below:
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument index in method wrapper_CUDA__index_select)

I have tried many ways to bring both the model and the input text tensor onto the same device, but it keeps giving me the same error. It worked fine the first few times I used it. Please help @GokulNC
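The error class above (a device mismatch in index_select, which is what an embedding lookup calls under the hood) can be reproduced and fixed in isolation. A minimal sketch, not the TTS code itself: the fix pattern is to move both the module and its input indices to the same device before the call.

```python
import torch

# An embedding lookup is an index_select; if the weight and the index tensor
# live on different devices, PyTorch raises the "two devices" RuntimeError.
emb = torch.nn.Embedding(10, 4)
idx = torch.tensor([1, 2, 3])

# Fix: pick one device and move BOTH the module and the inputs to it.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
emb = emb.to(device)
idx = idx.to(device)

out = emb(idx)
assert out.device.type == device.type
```

In the TTS pipeline the same principle applies: every tensor the model touches (text ids, speaker embeddings, vocoder inputs) must be on the device the model was moved to.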

@Instincts03

In TTS/utils/synthesizer.py, line 376, change vocoder_device = "cpu" to vocoder_device = "cuda".

@vrindamathur1428

> In TTS/utils/synthesizer.py, line 376, change vocoder_device = "cpu" to vocoder_device = "cuda".

But if I'm using use_cuda=True, then why do I need to do this?
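A likely answer, going by the thread: the use_cuda flag moves the TTS model, but the vocoder device at that line is hard-coded to "cpu", so the flag never reaches it. Rather than hard-coding "cuda" (which would then break CPU-only runs), a more robust patch derives the vocoder device from the flag. A sketch of the patched line, with names assumed from the thread rather than verified against the repo:

```python
import torch

# What Synthesizer(..., use_cuda=True) would set (assumed name from the thread).
use_cuda = True

# Instead of a hard-coded vocoder_device = "cpu" (or "cuda"), derive it from
# the flag and fall back to CPU when no GPU is actually visible.
vocoder_device = "cuda" if (use_cuda and torch.cuda.is_available()) else "cpu"
```

This keeps a single source of truth for the device, so the model and vocoder can never disagree.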

@punyabrota

I am also having the same problem while running the sample.py file. Changing vocoder_device from "cpu" to "cuda" did not help either. Any pointers or guidance, please?
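When the one-line patch doesn't help, the next step is to find which tensor is still on the wrong device. A small debugging helper (a generic sketch, not part of the TTS API) that maps every parameter and buffer of a module to its device, so you can compare the TTS model, the vocoder, and your input tensors:

```python
import torch

def report_devices(module: torch.nn.Module) -> dict:
    """Map each parameter/buffer name to its device string. Any mismatch
    across this dict, or against your input tensors, is what triggers the
    'Expected all tensors to be on the same device' RuntimeError."""
    devices = {}
    for name, p in module.named_parameters():
        devices[name] = str(p.device)
    for name, b in module.named_buffers():
        devices[name] = str(b.device)
    return devices

# Usage sketch on a toy module; run it on synthesizer.tts_model, the vocoder,
# and print your input tensor's .device alongside.
net = torch.nn.Linear(4, 2)
print(report_devices(net))  # e.g. {'weight': 'cpu', 'bias': 'cpu'}
```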

@jerrinhaloocom

I am also trying to run it on GPU; on CPU there is no problem. If you find any ideas or solutions, please comment here.
