Description
Hi,
I've been attempting to reproduce an experiment that fine-tunes the Llama-2-7b-hf model on a random 5% of the training data, using open-instruct's finetune_with_accelerate.sh. I adhered to the hyperparameters outlined in your paper:
learning_rate = 2e-5
total_batch_size = 128
warmup_ratio = 0.03
lr_scheduler_type = linear
weight_decay = 0.0
num_train_epochs = 4
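For completeness, this is roughly the launch command I'm using, adapted from finetune_with_accelerate.sh in the repo. The GPU count, all paths, the subsampled train file, and max_seq_length are specific to my setup (max_seq_length in particular is a guess at what you used); the remaining flags mirror the hyperparameters above:

```bash
# Adapted from open-instruct's finetune_with_accelerate.sh.
# NUM_GPUS, paths, and the 5% train file are specific to my setup.
NUM_GPUS=4
BATCH_SIZE_PER_GPU=2
TOTAL_BATCH_SIZE=128
# Keep the effective batch size at 128 regardless of GPU count.
GRADIENT_ACC_STEPS=$(($TOTAL_BATCH_SIZE / $NUM_GPUS / $BATCH_SIZE_PER_GPU))

accelerate launch \
    --mixed_precision bf16 \
    --num_machines 1 \
    --num_processes $NUM_GPUS \
    --use_deepspeed \
    --deepspeed_config_file ds_configs/stage3_no_offloading_accelerate.conf \
    open_instruct/finetune.py \
    --model_name_or_path meta-llama/Llama-2-7b-hf \
    --tokenizer_name meta-llama/Llama-2-7b-hf \
    --use_slow_tokenizer \
    --train_file data/train_random_5pct.jsonl \
    --max_seq_length 2048 \
    --per_device_train_batch_size $BATCH_SIZE_PER_GPU \
    --gradient_accumulation_steps $GRADIENT_ACC_STEPS \
    --learning_rate 2e-5 \
    --lr_scheduler_type linear \
    --warmup_ratio 0.03 \
    --weight_decay 0. \
    --num_train_epochs 4 \
    --output_dir output/llama2_7b_random5pct/ \
    --logging_steps 1
```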
Despite following these settings, my model's performance on the MMLU benchmark is significantly worse than yours, as shown in the screenshot. Is this discrepancy expected? The gap seems larger than what one might reasonably expect from random variation between runs.
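In case the evaluation setup matters, this is roughly how I'm scoring MMLU, following my reading of the repo's eval scripts; the shot count, batch size, and paths are my own choices, and I may have missed a flag you used:

```bash
# MMLU 5-shot eval via open-instruct's eval module (paths/batch size are mine).
python -m eval.mmlu.run_eval \
    --ntrain 5 \
    --data_dir data/eval/mmlu \
    --save_dir results/mmlu/llama2_7b_random5pct \
    --model_name_or_path output/llama2_7b_random5pct/ \
    --tokenizer_name_or_path output/llama2_7b_random5pct/ \
    --eval_batch_size 8
```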
Could you please confirm whether my hyperparameters fully match those used in your setup? Any further details about your SFT configuration (e.g., max sequence length, precision, or number of GPUs) would also be greatly appreciated.
Thank you for your assistance.