Prompt Tuning Crash with Llama-3.2 in torch.embedding #2161
Comments
Thanks for reporting this. With the information you've given, I could not reproduce the error. This is what I tried:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PromptTuningConfig, get_peft_model, TaskType, PromptTuningInit

model_id = "meta-llama/Llama-3.2-1B"
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(model_id)

prompt_tuning_init_text = "Think carefully"
peft_config = PromptTuningConfig(
    task_type=TaskType.CAUSAL_LM,
    prompt_tuning_init=PromptTuningInit.TEXT,
    num_virtual_tokens=len(tokenizer(prompt_tuning_init_text)["input_ids"]),
    prompt_tuning_init_text=prompt_tuning_init_text,
    tokenizer_name_or_path=model_id,
)
model = get_peft_model(model, peft_config)

sentence = "The quick brown fox jumps over the lazy dog."
sample_input = tokenizer(sentence, return_tensors="pt")
output = model(**sample_input)
```

Could you please provide a minimal reproducer for the error?
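The snippet above stops at a single forward pass, while the report is about training. A minimal, hedged extension with one training step might look like the following sketch; the choice of labels and optimizer is an illustrative assumption, not taken from the thread:

```python
# Hypothetical continuation of the script above: one training step on the same sample.
# Using the input ids as labels and AdamW is an illustrative assumption, not from the report.
labels = sample_input["input_ids"].clone()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)

model.train()
output = model(**sample_input, labels=labels)  # PEFT prepends the virtual prompt tokens internally
output.loss.backward()
optimizer.step()
optimizer.zero_grad()
```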
Can someone please edit the issue template so that I don't get pinged anymore? I am not affiliated with this project. Thanks
Saya, I'm very sorry about that. Your name is not on the issue template:
We have "sayakpaul" but I honestly don't know why people are constantly pinging you. I think it must have something to do with how GitHub outcompletes, so people type |
@BenjaminBossan I see. Thanks for the fast reply and sorry for not researching thoroughly. I will just ignore this repo as well in my notification settings, and I should be fine.
System Info
peft==0.13.2
accelerate==1.0.1
torch==2.4.0
peft_config
Stack trace
Who can help?
@saya
Information
Tasks
An officially supported task in the examples folder
Reproduction
Any kind of causal LM task tuning shows this issue
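Since the report does not include a runnable script or the full stack trace, it is worth noting that a crash inside `torch.embedding` during causal LM tuning is most often an index-out-of-range error. A hedged, hypothetical sanity check (reusing `model` and `sample_input` from the snippet above; the names are assumptions, not from the report) could look like this:

```python
# Hypothetical check: a torch.embedding crash commonly means some token ids fall
# outside the embedding matrix. Names below are illustrative assumptions.
embedding_size = model.get_base_model().get_input_embeddings().num_embeddings
max_id = int(sample_input["input_ids"].max())
print(f"embedding size: {embedding_size}, max input id: {max_id}")
assert max_id < embedding_size, "input ids exceed the embedding size"
```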
Expected behavior
Expected the training to happen