RuntimeError: CUDA out of memory. Tried to allocate 60.00 MiB (GPU 0; 3.95 GiB total capacity; 708.35 MiB already allocated; 111.00 MiB free; 742.00 MiB reserved in total by PyTorch) #8

Open
OmarHedeya95 opened this issue Mar 13, 2021 · 2 comments

Comments

@OmarHedeya95

Hello,
I always get the following error when running the demo code multiple times, even though I am using a very tiny image. I think maybe the cache is not being emptied or something? I am not really sure and would appreciate your help. Thank you.

Error Message:

Traceback (most recent call last):
  File "demo.py", line 127, in <module>
    lhpy = loss_hpy(HPy,HPy_target)
  File "/home/omar/anaconda3/envs/dlr/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/omar/anaconda3/envs/dlr/lib/python3.7/site-packages/torch/nn/modules/loss.py", line 88, in forward
    return F.l1_loss(input, target, reduction=self.reduction)
  File "/home/omar/anaconda3/envs/dlr/lib/python3.7/site-packages/torch/nn/functional.py", line 2191, in l1_loss
    ret = torch._C._nn.l1_loss(expanded_input, expanded_target, _Reduction.get_enum(reduction))
RuntimeError: CUDA out of memory. Tried to allocate 60.00 MiB (GPU 0; 3.95 GiB total capacity; 708.35 MiB already allocated; 111.00 MiB free; 742.00 MiB reserved in total by PyTorch)
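
For repeated runs in the same process, one thing worth trying is explicitly releasing GPU memory between runs. The sketch below is a general PyTorch pattern, not part of this repository's demo; the tensor `x` is only a placeholder for whatever the demo keeps alive (the model, intermediate feature maps, loss tensors).

```python
import gc
import torch

# Minimal sketch (not from the repo): free cached GPU memory between runs.
if torch.cuda.is_available():
    x = torch.randn(2048, 2048, device="cuda")  # placeholder workload, ~16 MiB
    print(f"{torch.cuda.memory_allocated() / 2**20:.0f} MiB allocated")

    del x                     # drop the last Python reference to the tensor
    gc.collect()              # collect any reference cycles still holding tensors
    torch.cuda.empty_cache()  # return cached blocks to the CUDA driver

    print(f"{torch.cuda.memory_allocated() / 2**20:.0f} MiB allocated after cleanup")
    print(f"{torch.cuda.memory_reserved() / 2**20:.0f} MiB still reserved by PyTorch")
```

If the OOM persists across separate invocations of the script, the leftover memory is more likely held by another process; `nvidia-smi` shows which processes are holding GPU memory.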

@DmitrySavchuk

Same issue

@hansenmaster

I had the same problem and resolved it by reducing the resolution of the target image. The example image is smaller than 500x500 pixels.
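
If downscaling helps, below is a minimal sketch of one way to cap the input resolution before running the demo. It assumes OpenCV for image I/O (the demo may use a different loader), and "input.jpg" / "input_small.jpg" are placeholder paths, not files from this repository.

```python
import cv2

# Sketch: cap the longer side at 500 px so the feature maps fit on a ~4 GiB GPU.
img = cv2.imread("input.jpg")
assert img is not None, "could not read input.jpg"

max_side = 500
scale = max_side / max(img.shape[:2])
if scale < 1.0:
    img = cv2.resize(img, None, fx=scale, fy=scale, interpolation=cv2.INTER_AREA)

cv2.imwrite("input_small.jpg", img)
print("saved input_small.jpg at", img.shape[1], "x", img.shape[0])
```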
