Conversation

@vinhngx vinhngx commented Jul 29, 2019

This PR uses APEX (https://github.com/NVIDIA/apex) to provide automatic mixed precision (AMP) training.

Automatic mixed precision training uses FP32 and FP16 precision where appropriate for each operation. FP16 operations can leverage the Tensor Cores on NVIDIA GPUs (Volta, Turing, or newer architectures) for significantly improved throughput.

Automatic mixed precision training can be enabled by passing the corresponding flag to the training script:

python train.py --apex=True
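
For reference, a minimal sketch of how APEX AMP is typically wired into a PyTorch training loop. The names build_model and loader here are hypothetical placeholders, and opt_level="O1" is an assumed default; the actual opt level and integration in this PR may differ:

import torch
from apex import amp

# Placeholder setup: build_model and loader are hypothetical
# stand-ins for this repo's actual model and data pipeline.
model = build_model().cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
criterion = torch.nn.CrossEntropyLoss()

# Wrap the model and optimizer once before training. opt_level="O1"
# patches common ops to run in FP16 while keeping numerically
# sensitive ops (e.g. softmax, loss) in FP32.
model, optimizer = amp.initialize(model, optimizer, opt_level="O1")

for inputs, targets in loader:
    optimizer.zero_grad()
    loss = criterion(model(inputs.cuda()), targets.cuda())
    # Scale the loss so small FP16 gradients do not underflow.
    with amp.scale_loss(loss, optimizer) as scaled_loss:
        scaled_loss.backward()
    optimizer.step()

With the O1 opt level, no changes to the model code itself are needed, so the same script can still run in pure FP32 when the flag is not set.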
