I have read issue 11, but still have some questions.
During testing, you use the `forward` function:

```python
result = model.forward(**data, return_dict=True)
```
But as far as I know, the `forward` function cannot perform next-token generation, so it would struggle to output the answer and the `[DET]` token. Why don't you use the `generate` function? Is it possible that the `input_ids` in your test already contain the answer?
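To illustrate the concern: a single `forward` call only scores the tokens already present in the input, whereas producing an answer token by token requires calling the model repeatedly, which is what `generate` wraps. A minimal sketch of that loop, using a toy stand-in model rather than the repo's actual model:

```python
def toy_forward(input_ids):
    # Stand-in for model.forward: one pass yields only the NEXT token's
    # prediction (here, simply last token + 1), not a full answer.
    return input_ids[-1] + 1

def toy_generate(input_ids, max_new_tokens):
    # generate() repeatedly calls forward, appending the predicted
    # token each step -- this loop is what a single forward lacks.
    ids = list(input_ids)
    for _ in range(max_new_tokens):
        ids.append(toy_forward(ids))
    return ids

print(toy_generate([1, 2, 3], 4))  # -> [1, 2, 3, 4, 5, 6, 7]
```

So unless the test inputs already contain the answer tokens, one `forward` pass cannot emit them.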
When will you release the training and testing datasets and the dataloader?

```python
dataset = eval_dataloader.dataset
```