
RuntimeError: CUDA out of memory. Tried to allocate 60.00 MiB (GPU 0; 3.95 GiB total capacity; 708.35 MiB already allocated; 111.00 MiB free; 742.00 MiB reserved in total by PyTorch) #8

Open
@OmarHedeya95

Hello,
I consistently get the error below when I run the demo code several times in a row, even with a very small image. My guess is that GPU memory from previous runs is not being released (maybe the cache is not emptied?), but I am not really sure and would appreciate your help. Thank you.

Error Message:

Traceback (most recent call last):
File "demo.py", line 127, in
lhpy = loss_hpy(HPy,HPy_target)
File "/home/omar/anaconda3/envs/dlr/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in call
result = self.forward(*input, **kwargs)
File "/home/omar/anaconda3/envs/dlr/lib/python3.7/site-packages/torch/nn/modules/loss.py", line 88, in forward
return F.l1_loss(input, target, reduction=self.reduction)
File "/home/omar/anaconda3/envs/dlr/lib/python3.7/site-packages/torch/nn/functional.py", line 2191, in l1_loss
ret = torch._C._nn.l1_loss(expanded_input, expanded_target, _Reduction.get_enum(reduction))
RuntimeError: CUDA out of memory. Tried to allocate 60.00 MiB (GPU 0; 3.95 GiB total capacity; 708.35 MiB already allocated; 111.00 MiB free; 742.00 MiB reserved in total by PyTorch)
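
For reference, this is roughly what I mean by "emptying the cache" between runs. It is only a sketch (the tensor here is a placeholder for whatever demo.py actually keeps on the GPU), and I am not sure it is the right fix:

```python
import gc
import torch

# Placeholder for whatever demo.py keeps alive on the GPU between runs.
x = torch.zeros(1024, 1024, device="cuda")
print("allocated before:", torch.cuda.memory_allocated() // 2**20, "MiB")

del x                      # drop the Python reference that keeps the tensor alive
gc.collect()               # collect anything still held only through reference cycles
torch.cuda.empty_cache()   # return PyTorch's cached blocks to the driver

print("allocated after:", torch.cuda.memory_allocated() // 2**20, "MiB")
# older torch versions call this memory_cached() instead of memory_reserved()
print("reserved after:", torch.cuda.memory_reserved() // 2**20, "MiB")
```

Is this the kind of cleanup the demo is supposed to do between runs, or should the memory be freed automatically?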
