Hi, this is really interesting and excellent work!
I have a question: when I ran the command 'sh run_scripts/sevila/inference/nextqa_infer.sh' on the NExT-QA dataset for zero-shot testing, I found that the T5 model generates answers like ['Option A'] (using self.t5_tokenizer.batch_decode(outputs_qa.sequences, skip_special_tokens=True)). Thus pred_logits_qa = outputs_qa.scores[1] makes sense.
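A minimal sketch of why scores[1] lines up with the option letter: HuggingFace generate() with output_scores=True returns one logits entry per decoding step, so if the decoded answer is "Option A", step 0 produced "Option" and step 1 produced the letter. The vocabulary and token ids below are made up for illustration; they are not SeViLA's actual ids.

```python
# Toy vocabulary (ids are hypothetical, for illustration only)
vocab = {"Option": 0, "A": 1, "B": 2, "C": 3, "D": 4}

# Pretend generate() scores: one logits list per generated token
scores = (
    [9.0, 0.1, 0.1, 0.1, 0.1],   # step 0: "Option" is most likely
    [0.1, 5.0, 1.0, 2.0, 0.5],   # step 1: the option letter; "A" wins
)

# Restrict the argmax to the candidate option tokens, as one would
# when reading pred_logits_qa = outputs_qa.scores[1]
option_ids = [vocab[letter] for letter in "ABCD"]
step1_logits = scores[1]
pred_id = max(option_ids, key=lambda i: step1_logits[i])

id_to_token = {v: k for k, v in vocab.items()}
print(id_to_token[pred_id])  # -> A
```

This is only a schematic of the indexing logic, not the model's real tokenization: if the tokenizer split "Option" or the letter into multiple subword tokens, the correct score index would shift accordingly.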

I am curious why the T5 model can follow the instructions so well and output the option directly. When I use the same prompts with large language models such as LLaVA, they output a piece of free-form text instead of an option letter like A/B/C/D. Is it because the pre-trained model was also trained on multiple-choice questions?
Thanks!