Train on GPU and infer on CPU
We trained the model on a GPU with the argument `-device cuda`. Then we copied the trained model from the GPU machine to a CPU-only machine, because we want to run inference on the CPU.
When we load the GPU-trained model there, we get the following error:

```
RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU.
```

How can we fix this? Or is there a way to train the model on the GPU and run inference on the CPU?
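
The error message itself points at the fix: pass `map_location` to `torch.load` so that tensors saved on a CUDA device are remapped to CPU storage at load time. A minimal sketch, assuming the checkpoint holds a `state_dict`; the path `model.pt` and the `nn.Linear` model are placeholders standing in for the real ones:

```python
import torch
import torch.nn as nn

# Hypothetical toy architecture standing in for whatever was actually trained;
# it only needs to match the shapes stored in the checkpoint.
model = nn.Linear(10, 2)

# map_location remaps every CUDA storage in the checkpoint onto the CPU,
# which is exactly what the RuntimeError asks for on a CPU-only machine.
state_dict = torch.load("model.pt", map_location=torch.device("cpu"))
model.load_state_dict(state_dict)
model.eval()  # switch to inference mode (disables dropout, freezes batch-norm stats)
```

If the checkpoint instead stores a whole pickled model object, the same `map_location=torch.device('cpu')` argument applies to that `torch.load` call. Another option is to move the model to CPU before saving on the GPU machine, e.g. `torch.save(model.cpu().state_dict(), path)`, so the checkpoint is device-agnostic to begin with.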