Question about fine-tuning a large model on multiple GPUs #433
Please check the code here.
I used accelerate to run the demo on two 3090s, but it still fails with an error:
File "/home/ubuntu/data/syh/C4MMD-main/C4MMDmain/CoT_module.py", line 210, in
Process finished with exit code 1
Can you provide more details about your training script? For example, you could run finetune_lora.sh and set GPUS_PER_NODE=2.
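As a minimal sketch of that change, assuming finetune_lora.sh follows the usual single-node torchrun launcher pattern: only GPUS_PER_NODE comes from the suggestion above, while the other variable names, the port, and the finetune.py entry point are illustrative assumptions.

```bash
# Illustrative launcher sketch — only GPUS_PER_NODE is from the suggestion;
# the remaining values assume a single node with two 3090s.
GPUS_PER_NODE=2        # use both GPUs on the node
NNODES=1
NODE_RANK=0
MASTER_ADDR=localhost
MASTER_PORT=6001

torchrun --nproc_per_node $GPUS_PER_NODE \
         --nnodes $NNODES \
         --node_rank $NODE_RANK \
         --master_addr $MASTER_ADDR \
         --master_port $MASTER_PORT \
         finetune.py   # plus the LoRA arguments already set in the script
```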
If I want to fine-tune the older "internlm-xcomposer-7b" on two 3090s, how should I modify the code? I found that the latest multi-GPU code does not work with the older model.