Loss calculation always 0 #207
Hello,

I'm trying to fine-tune the 34B model, but during fine-tuning the loss is always 0. I was able to fine-tune the 7B and 13B models, but not 34B.

Let me know if I'm overlooking something, or please give me suggestions.

Thanks.

Comments
Hi @sanipanwala, we don't provide support for fine-tuning in this repository. Which tools are you using for this? Are you sure they support the 34B model well? Does the exact same setting work for 7B and 13B? In any case, a loss of 0 at the start of training is a good indication that something's going wrong.
@jgehring I mean I'm using the "codellama/CodeLlama-34b-hf" model and running a normal Python script, and yes, the same configuration works with 7B and 13B. Thanks.
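One way to narrow this down is to run a single forward pass with labels before any training and confirm the loss is nonzero. The thread doesn't show the actual training script, so the following is only a sketch under assumptions: it assumes the Hugging Face transformers stack, and the sample text and dtype choice are illustrative.

```python
# Not from the thread: a minimal loss sanity check, assuming the Hugging Face
# transformers stack. The model name matches the comment above; everything
# else is an illustrative assumption.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "codellama/CodeLlama-34b-hf"
tokenizer = AutoTokenizer.from_pretrained(model_name)

# fp16 can produce degenerate (0 or NaN) losses with large checkpoints;
# bf16 or fp32 is a safer default where the hardware supports it.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

batch = tokenizer("def add(a, b):\n    return a + b", return_tensors="pt").to(model.device)
# For causal-LM fine-tuning, labels are typically the input ids; if every
# label ends up masked to -100, the loss degenerates.
batch["labels"] = batch["input_ids"].clone()

with torch.no_grad():
    out = model(**batch)

# A healthy loss on text the model wasn't trained to memorize should be
# clearly nonzero (often in the 1-3 nats range).
print(out.loss.item(), out.logits.dtype)
```

If the loss is already 0 here, the problem is in the model loading or label construction rather than in the trainer; if it only drops to 0 once training starts, dtype and optimizer settings are the next suspects.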
@sanipanwala I ran into the same problem when trying to PEFT fine-tune CodeLlama-7B (using LlamaForSequenceClassification): the loss is always 0 during fine-tuning. Have you found a solution? Thanks!
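For the sequence-classification variant described in this comment, two common pitfalls are the missing pad token on LLaMA-family tokenizers and non-integer labels. Below is a minimal sketch; the checkpoint name and LoRA hyperparameters are assumptions, not taken from this thread. An untrained setup like this should already yield a clearly nonzero loss:

```python
# Illustrative sketch, not the commenter's script: checks that
# LlamaForSequenceClassification plus PEFT/LoRA produces a nonzero loss.
import torch
from peft import LoraConfig, get_peft_model
from transformers import AutoTokenizer, LlamaForSequenceClassification

model_name = "codellama/CodeLlama-7b-hf"  # assumed checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # LLaMA tokenizers ship without a pad token

model = LlamaForSequenceClassification.from_pretrained(
    model_name, num_labels=2, torch_dtype=torch.bfloat16, device_map="auto"
)
model.config.pad_token_id = tokenizer.pad_token_id  # required for padded batches

# Hypothetical LoRA settings; task_type SEQ_CLS keeps the classification
# head trainable alongside the adapters.
peft_config = LoraConfig(task_type="SEQ_CLS", r=8, lora_alpha=16, lora_dropout=0.05)
model = get_peft_model(model, peft_config)

batch = tokenizer(["print('hi')", "import os"], padding=True, return_tensors="pt").to(model.device)
batch["labels"] = torch.tensor([0, 1], device=model.device)  # integer class ids

out = model(**batch)
# An untrained 2-class head should sit near ln(2) ~= 0.69, not at 0.
print(out.loss.item())
```

If the loss is exactly 0 even for this untrained setup, the dtype of the base model (fp16 vs bf16) is the usual next suspect.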
Hi @sssszh, no, I haven't found any solution yet. Thanks.