diff --git a/intermediate_source/process_group_cpp_extension_tutorial.rst b/intermediate_source/process_group_cpp_extension_tutorial.rst
index 47379bf881..3c72a9e319 100644
--- a/intermediate_source/process_group_cpp_extension_tutorial.rst
+++ b/intermediate_source/process_group_cpp_extension_tutorial.rst
@@ -25,9 +25,8 @@ Basics
 
 PyTorch collective communications power several widely adopted distributed
 training features, including
-`DistributedDataParallel `__,
-`ZeroRedundancyOptimizer `__,
-`FullyShardedDataParallel `__.
+`DistributedDataParallel `__ and
+`ZeroRedundancyOptimizer `__.
 In order to make the same collective communication API work with
 different communication backends, the distributed package abstracts collective
 communication operations into a
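
The prose touched by this hunk describes the backend-agnostic collective API. For orientation, a minimal sketch of what that looks like from Python, assuming a two-process launch via ``torchrun`` and the default ``gloo`` backend (the script name and launch command are illustrative):

.. code-block:: python

    import torch
    import torch.distributed as dist

    def main(backend: str = "gloo") -> None:
        # The collective calls below are identical no matter which backend
        # (gloo, nccl, mpi, or a custom C++ ProcessGroup extension) is used.
        dist.init_process_group(backend=backend)
        rank = dist.get_rank()
        tensor = torch.ones(2) * rank
        # all_reduce dispatches to the ProcessGroup chosen at init time.
        dist.all_reduce(tensor, op=dist.ReduceOp.SUM)
        print(f"rank {rank}: {tensor}")
        dist.destroy_process_group()

    if __name__ == "__main__":
        # Illustrative launch: torchrun --nproc_per_node=2 allreduce_demo.py
        main()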