diff --git a/docs/en/tutorial/run.md b/docs/en/tutorial/run.md
index c6f33bc2..ccb9ba5b 100644
--- a/docs/en/tutorial/run.md
+++ b/docs/en/tutorial/run.md
@@ -15,9 +15,6 @@ Select the job type as `PyTorch` and paste the command into the `Execution Comma
 
-For RLHF, DPO, OnlineDPO, GRPO training task, you need set the advanced setting as `customPortList=30000-30050,createSvcForAllWorkers=true`.
-
-
 ## Non-PAI-DLC environment
 
 If you want to submit distributed training in a non-PAI-DLC environment,
diff --git a/docs/en/tutorial/tutorial_llama2.md b/docs/en/tutorial/tutorial_llama2.md
index 3432d58e..766735f9 100644
--- a/docs/en/tutorial/tutorial_llama2.md
+++ b/docs/en/tutorial/tutorial_llama2.md
@@ -221,7 +221,6 @@ In our training script, the resource requirements (assuming the resources are A1
 
 For the environment variables and configurations required for distributed execution, please refer to [Distributed Execution](run.md).
 
-Note that for RLHF tasks, if you are running on PAI DLC, you need to fill in the advanced configuration `customPortList=30000-30050,createSvcForAllWorkers=true`.
 
 ### Evaluation
diff --git a/docs/zh/tutorial/run.md b/docs/zh/tutorial/run.md
index 05c3da87..776bd917 100644
--- a/docs/zh/tutorial/run.md
+++ b/docs/zh/tutorial/run.md
@@ -13,8 +13,6 @@
 
 ![image.png](../../images/dlc_2.jpg)
 
-**对于 RLHF/DPO/OnlineDPO/GRPO 训练任务，您需要填写高级配置`customPortList=30000-30050,createSvcForAllWorkers=true`。**
-
 ## 其他环境分布式执行
 
diff --git a/docs/zh/tutorial/tutorial_llama2.md b/docs/zh/tutorial/tutorial_llama2.md
index 0aec7e68..bfb77c0e 100644
--- a/docs/zh/tutorial/tutorial_llama2.md
+++ b/docs/zh/tutorial/tutorial_llama2.md
@@ -212,7 +212,6 @@ bash scripts/train_grpo_math_llama.sh
 
 3. llama2-70B RLHF: 4*8 GPU
 
 分布式执行所需的环境变量和配置参考 [分布式执行](run.md)。
-**注意对于 RLHF 任务，如果在 PAI DLC 上运行，您需要填写高级配置`customPortList=30000-30050,createSvcForAllWorkers=true`。**
 
 ### 效果评估