I am getting this error:
RuntimeError: CUDA out of memory. Tried to allocate 648.00 MiB (GPU 0; 3.82 GiB total capacity; 1.57 GiB already allocated; 665.06 MiB free; 1.98 GiB reserved in total by PyTorch)
My hardware is a GeForce GTX 1650 with 4 GB of dedicated memory. I tried this on Google Colab too.
I have already tried the following (sketched in the snippet below):
- Reducing the batch size from 128 to 2
- Clearing the GPU cache
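This is roughly what those two changes look like in my training loop. It is only a sketch: `model`, `criterion`, `optimizer`, and `train_dataset` stand in for my actual setup.

```python
import torch
from torch.utils.data import DataLoader

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# 1. Batch size reduced from 128 down to 2
train_loader = DataLoader(train_dataset, batch_size=2, shuffle=True)

model = model.to(device)
model.train()

for inputs, targets in train_loader:
    inputs, targets = inputs.to(device), targets.to(device)

    optimizer.zero_grad()
    outputs = model(inputs)
    loss = criterion(outputs, targets)
    loss.backward()
    optimizer.step()

    # 2. Drop references to this step's tensors and release cached blocks
    del outputs, loss
    torch.cuda.empty_cache()
```

Even with these changes I still hit the same out-of-memory error.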
I think this needs to be fixed in the code itself; some variables seem to be taking up far more memory than necessary. Please help me.
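In case it helps narrow this down, this is the kind of check I would add right before the failing call to see what is actually holding the memory (my own snippet, not code from this repo):

```python
import torch

# Print allocator statistics just before the allocation that fails
print(f"{torch.cuda.memory_allocated() / 1024**2:.1f} MiB currently allocated")
print(f"{torch.cuda.memory_reserved() / 1024**2:.1f} MiB reserved by PyTorch's caching allocator")
print(torch.cuda.memory_summary())
```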
Thanks in advance