Inference on out-of-domain datasets yields poor results? #11

Open
ruo-feng opened this issue Jul 29, 2024 · 1 comment

@ruo-feng

I downloaded TableLlama and tested it on the provided dataset. However, the model's performance on out-of-domain datasets is subpar. For example:

  1. On the FEVEROUS dataset, TableLlama consistently outputs 'refuted' and 'entailed', even though the instructions clearly state that the output should be 'not enough info', 'supports', or 'refutes'.

  2. On the ToTTo dataset, where the input is in HTML format, TableLlama generates outputs in a strange format, such as
    2018\u201319 JGP Final Junior <col_header> Level </col_header> 1 126.26 <col_header> FS </col_header> 1 190.63 <col_header> Total </col_header>
    which does not match the format of the ground truth:
    At the 2018\u201319 JGP in junior-level, in the Final event, Mishina and Galliamov had a combined total of 190.63 points and a free program of 126.26 points.

Is there any preprocessing required before evaluating the model on these out-of-domain datasets?

@zhangtianshu
Collaborator

For FEVEROUS, we map 'refuted' to 'refutes' and 'entailed' to 'supports' for evaluation. But you are correct that TableLlama currently can't handle the 'not enough info' cases.
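In case it helps, here is a minimal sketch of that mapping as an evaluation helper. The function names are illustrative and not from the TableLlama repo; the only parts taken from the reply are the two label mappings.

    # Minimal sketch (not code from the TableLlama repo): map the model's
    # TabFact-style outputs onto FEVEROUS labels before computing accuracy.
    # Gold "not enough info" examples will always be scored as wrong, since
    # the model never produces that label.
    LABEL_MAP = {"refuted": "refutes", "entailed": "supports"}

    def normalize(pred: str) -> str:
        """Strip the eos token and trailing punctuation, then apply the label map."""
        pred = pred.replace("</s>", "").strip().strip(".").lower()
        return LABEL_MAP.get(pred, pred)

    def label_accuracy(predictions, gold):
        """Fraction of examples whose mapped prediction equals the gold label."""
        hits = sum(normalize(p) == g.strip().lower() for p, g in zip(predictions, gold))
        return hits / len(gold)

    # e.g. label_accuracy(["refuted</s>", "entailed"], ["refutes", "not enough info"]) == 0.5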

For ToTTo, TableLlama's prediction should look like the following if you use the prompt correctly:

    "instruction": "This is a highlighted cells description task. The goal of this task is to generate the language description given table cells.",
    "input_seg": "<page_title> Aleksandr Galliamov </page_title> <section_title> With Mishina </section_title> <table> <cell> 2018\u201319 JGP Final <col_header> Event </col_header> </cell> <cell> Junior <col_header> Level </col_header> </cell> <cell> 1 126.26 <col_header> FS </col_header> </cell> <cell> 1 190.63 <col_header> Total </col_header> </cell> </table>",
    "question": "Please generate one natural language description to describe the given highlighted table cells.",
    "output": "At the 2018\u201319 JGP in junior-level, in the Final event, Mishina and Galliamov had a combined total of 190.63 points and free program of 126.26 points.",
    "predict": "Aleksandr Galliamov won the Junior Grand Prix Final with a total score of 1 190.63.</s>"
  }
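For reference, a minimal sketch of how such a record can be assembled into a single prompt before generation. The Alpaca-style section headers below are an assumption for illustration only; the exact template wording should be taken from TableLlama's own inference script.

    import json

    # Sketch only: these section headers follow a generic Alpaca-style layout
    # and may not match TableLlama's actual prompt template exactly.
    def build_prompt(example: dict) -> str:
        return (
            "Below is an instruction that describes a task, paired with an input that "
            "provides further context. Write a response that appropriately completes "
            "the request.\n\n"
            f"### Instruction:\n{example['instruction']}\n\n"
            f"### Input:\n{example['input_seg']}\n\n"
            f"### Question:\n{example['question']}\n\n"
            "### Response:"
        )

    # Usage: load one ToTTo record like the one above, build the prompt, generate,
    # then strip the trailing "</s>" from the decoded output before comparison.
    # example = json.loads(record_str)
    # prompt = build_prompt(example)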
