I am training a ResNet50 on ImageNet-1k using this script. One epoch takes around 2 hours, so training for the full 90 epochs takes far too long. I even tried distributing the training across 4 GPUs, but got the same results.
PyTorch version: 2.20
Operating System and version: Ubuntu 20.04
Make sure you've got the latest version of CUDA and cuDNN installed along with the latest NVIDIA GPU drivers.
Install CUDA toolkit and make sure to match the CUDA version with the one supported by your PyTorch installation.
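To verify the match between your driver, CUDA toolkit, and PyTorch build, a quick sketch like the following prints the versions PyTorch itself was compiled against:

```python
import torch

# Versions PyTorch was built against (these must be compatible with
# your installed driver; the driver version itself comes from nvidia-smi).
print("PyTorch:", torch.__version__)
print("CUDA (build):", torch.version.cuda)
print("cuDNN:", torch.backends.cudnn.version())

# Whether a GPU is actually visible to this process.
print("GPU available:", torch.cuda.is_available())
```

If `torch.cuda.is_available()` prints `False`, the slowdown is likely the model running on CPU rather than anything in the training script.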
You can try Anaconda or Miniconda to manage your Python environment, as they help avoid conflicts with system packages.
Install PyTorch with GPU support, using the build that matches your CUDA installation.
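As an example (the `cu121` wheel index here is an assumption; pick the URL matching your CUDA version from the PyTorch "Get Started" selector):

```shell
# Installs CUDA-12.1 builds of torch and torchvision.
pip install torch torchvision --index-url https://download.pytorch.org/whl/cu121
```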
If you're using multiple GPUs, make sure NVIDIA NCCL (NVIDIA Collective Communications Library) is usable, as it provides optimized GPU-to-GPU communication.
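On Linux, NCCL ships bundled with the CUDA builds of PyTorch, so usually no separate install is needed; a quick check:

```python
import torch.distributed as dist

# True if the NCCL backend bundled with this PyTorch build is usable.
print("NCCL available:", dist.is_nccl_available())
# Gloo is the CPU fallback backend, useful for debugging.
print("Gloo available:", dist.is_gloo_available())
```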
Set the rendezvous environment variables (`MASTER_ADDR`, `MASTER_PORT`, plus per-process `RANK` and `WORLD_SIZE`) so the processes can find each other and initialize the process group.
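A minimal single-process sketch of that setup (it uses the `gloo` backend with `world_size=1` so it also runs on a CPU-only machine; real multi-GPU training would use `nccl`, and a launcher like `torchrun` sets the rank variables for you):

```python
import os
import torch.distributed as dist

# Rendezvous address for all processes in the job.
os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29500")

# Single-process demo; in a real run each process gets its own rank.
dist.init_process_group(backend="gloo", rank=0, world_size=1)
print("world size:", dist.get_world_size())  # prints: world size: 1
dist.destroy_process_group()
```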
Execute your training script with a multi-process launcher such as `torchrun` so that one process is spawned per GPU.
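For example (the script name `train.py` is a placeholder for your own script):

```shell
# Launch 4 processes on a single node, one per GPU; torchrun sets
# RANK, LOCAL_RANK, WORLD_SIZE, MASTER_ADDR and MASTER_PORT for each.
torchrun --standalone --nproc_per_node=4 train.py
```

Note that the script itself must use `DistributedDataParallel` and a `DistributedSampler`; launching 4 processes around a single-GPU script will not speed anything up, which may explain why 4 GPUs gave the same epoch time.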