The output text generated on a ROCm GPU differs from the CPU output; it should have been the same.
Code
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

def to_gpu(x):
    return x  # .to("cuda:0")  # Uncomment to test

tokenizer = AutoTokenizer.from_pretrained("microsoft/GODEL-v1_1-large-seq2seq")
model = to_gpu(AutoModelForSeq2SeqLM.from_pretrained("microsoft/GODEL-v1_1-large-seq2seq"))

def generate(instruction, knowledge, dialog):
    if knowledge != '':
        knowledge = '[KNOWLEDGE] ' + knowledge
    dialog = ' EOS '.join(dialog)
    query = f"{instruction} [CONTEXT] {dialog}{knowledge}"
    input_ids = to_gpu(tokenizer(f"{query}", return_tensors="pt")).input_ids
    outputs = model.generate(input_ids, max_length=128, min_length=8, top_p=0.9, do_sample=True)
    output = tokenizer.decode(outputs[0], skip_special_tokens=True)
    return output

instruction = f'Instruction: given a dialog context, you need to response empathically.'
# Leave the knowledge empty
knowledge = ''
dialog = [
    'Does money buy happiness?',
    'It is a question. Money buys you a lot of things, but not enough to buy happiness.',
    'What is the best way to buy happiness ?'
]
# dialog = ["Hey my name is Thomas! How are you?"]  # Uncomment to test
response = generate(instruction, knowledge, dialog)
print(response)
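Note that with `do_sample=True` the generated text is stochastic even on a single device, so differing strings alone do not isolate a CPU-vs-GPU numerical difference; a fairer comparison would seed the RNG (e.g. `torch.manual_seed(0)`) before each `generate` call, or use greedy decoding with `do_sample=False`. A minimal stdlib sketch of why seeding matters for repeatability (the `sample_ids` helper is hypothetical, standing in for token sampling):

```python
import random

def sample_ids(weights, n, seed):
    # Fixed seed: repeated runs on the same machine draw the same ids.
    rng = random.Random(seed)
    return [rng.choices(range(len(weights)), weights=weights)[0] for _ in range(n)]

same = sample_ids([0.6, 0.3, 0.1], 8, seed=0) == sample_ids([0.6, 0.3, 0.1], 8, seed=0)
print(same)  # True: seeded sampling is repeatable on one device
```

Even with identical seeds, CPU and GPU backends may draw from different RNG streams or accumulate floating-point results in a different order, so greedy decoding is the cleanest way to check whether the underlying logits actually diverge.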
requirements.txt
Output