
Commit 1d727b3

Author flyhero99 committed "website update"
1 parent 276f70c

20 files changed: +14856 −5 lines

Large diffs are not rendered by default for the .history/ files below:

.history/index_20231121005511.html  +776
.history/index_20231121005706.html  +776
.history/index_20231121005718.html  +776
.history/index_20231121005814.html  +776
.history/index_20231121005910.html  +776
.history/index_20231121005921.html  +776
.history/index_20231121005958.html  +776
.history/index_20231121010014.html  +778
.history/index_20231121010037.html  +778
.history/index_20231121010045.html  +778
.history/index_20231121010050.html  +778
.history/index_20231121010100.html  +779
.history/index_20231121010137.html  +779
.history/index_20231121010347.html  +784
.history/index_20231121010427.html  +786
.history/index_20231121010445.html  +786
.history/index_20231121010618.html  +796
.history/index_20231121010818.html  +796
.history/index_20231121010824.html  +788

index.html

+18 −5
@@ -557,7 +557,7 @@ <h2 class="title is-3">Data Statistics</h2>
 <h2 class="title is-3">In-domain Evaluation</h2>
 
 <div class="content has-text-justified">
-We first evaluate <b>TableLlama</b> on 8 in-domain test sets. Due to the special semi-structured nature of tables, for most table-based tasks, existing work achieves SOTA results by using pretraining on large-scale tables and/or special model architecture design tailored for tables. Surprisingly, <b>with a unified format and no extra special design, <b>TableLlama</b> can achieve comparable or even better performance on almost all the tasks</b>. The table below shows the results:
+We first evaluate <b>TableLlama</b> on 8 in-domain test sets. Due to the semi-structured nature of tables, for most table-based tasks, existing work achieves SOTA results by pretraining on large-scale tables and/or designing special model architectures tailored to tables. Surprisingly, <b>with a unified format and no extra special design, <b>TableLlama</b> can achieve comparable or even better performance on almost all the tasks</b>. The table below shows the results:<br><br>
 <div id="myTable_wrapper" class="dataTables_wrapper no-footer">
 <table id="myTable" class="dataTable no-footer" role="grid">
 <thead>
@@ -623,11 +623,14 @@ <h2 class="title is-3">In-domain Evaluation</h2>
 </tbody>
 </table>
 <!-- Additional HTML for pagination and search, if needed -->
-Specifically, we observed these following takeaways:
+</div>
+<div>
+<br>
+Specifically, we observed the following takeaways:
 <ol>
 <li>By simply fine-tuning a large language model on TableInstruct, TableLlama can achieve comparable or even better performance on almost all the tasks <b>without any table pretraining or special table model architecture design</b>;</li>
-<li><b>TableLlama displays advantanges in table QA tasks</b>: <b>TableLlama</b> can surpass the SOTA by <b>5.61 points</b> for highlighted cell based table QA task (i.e., FeTaQA) and <b>17.71 points</b> for hierarchical table QA (i.e., HiTab), which is full of numerical reasoning on tables. As LLMs have shown superior in interacting with humans and answering questions, this indicates that <b>the existing underlying strong language understanding ability of LLMs may be beneficial for such table QA tasks, despite with semi-structured tables</b>.</li>
-<li><b>For the entity linking task</b>, which requires the model to link the mention in a table cell to the correct referent entity in Wikidata, <b>TableLlama</b> <b>also presents superior performance with 8 points gain over the SOTA performance</b>. Since the candidates are composed of their referent entity name and description, we hypothesize LLMs have certain abilities to understand the description which help identify the correct entities.</li>
+<li><b>TableLlama displays advantages in table QA tasks</b>: <b>TableLlama</b> can surpass the SOTA by <b>5.61 points</b> on the highlighted-cell-based table QA task (i.e., FeTaQA) and by <b>17.71 points</b> on hierarchical table QA (i.e., HiTab), which requires extensive numerical reasoning over tables. As LLMs have shown superior ability in interacting with humans and answering questions, this indicates that <b>the strong underlying language understanding ability of LLMs may benefit such table QA tasks, even with semi-structured tables</b>;</li>
+<li><b>For the entity linking task</b>, which requires the model to link the mention in a table cell to the correct referent entity in Wikidata, <b>TableLlama</b> <b>also presents superior performance, with an 8-point gain over the SOTA</b>. Since the candidates are composed of their referent entity names and descriptions, we hypothesize that LLMs have a certain ability to understand the descriptions, which helps identify the correct entities;</li>
 <li>Row population is the only task where <b>TableLlama</b> has a large performance gap compared to the SOTA. We observed that, <b>in order to correctly populate the entities from the given large number of candidates, the model needs to fully understand the inherent relation between the queried entity and each given candidate, which is still challenging for the current model</b>. Detailed analysis and a case study can be found in our paper's <b>Section 4.1</b> and <b>Table 5 in Appendix A</b>.</li>
 </ol>
 </div>
@@ -646,7 +649,8 @@ <h2 class="title is-3">In-domain Evaluation</h2>
 <div class="column has-text-centered is-fifths-fifths">
 <h2 class="title is-3">Out-of-domain Evaluation</h2>
 <div class="content has-text-justified">
-To better understand how TableInstruct helps enhance model generalizability, we conduct an ablation study to show the transfer between individual datasets.
+To show the model's generalizability on unseen data and unseen tasks, we evaluate <b>TableLlama</b> on several out-of-domain datasets. <b>Overall, <b>TableLlama</b> shows remarkable generalizability on different out-of-domain tasks, outperforming the baselines by 6 to 48 absolute points</b>. The table below shows the results:
+<!-- To better understand how TableInstruct helps enhance model generalizability, we conduct an ablation study to show the transfer between individual datasets. -->
 <div id="myTable_wrapper" class="dataTables_wrapper no-footer">
 <table id="myTable" class="dataTable no-footer" role="grid">
 <thead>
@@ -713,6 +717,15 @@ <h2 class="title is-3">Out-of-domain Evaluation</h2>
 </table>
 <!-- Additional HTML for pagination and search, if needed -->
 </div>
+<div>
+<br>
+Specifically, we observed the following takeaways:
+<ol>
+<li><b>By learning from the table-based training tasks, the model has acquired essential underlying table understanding ability, which can be transferred to other table-based tasks/datasets and improve performance on them;</b></li>
+<li>FEVEROUS exhibits the largest gain over the other 5 datasets. This is likely because the fact verification task is an in-domain training task, although the dataset itself is unseen during training. <b>Compared with cross-task generalization, it may be easier to generalize to different datasets belonging to the same task</b>;</li>
+<li>Although there is a gap between <b>TableLlama</b>'s results and SOTA performance, <b>those SOTAs were achieved under full-dataset training while TableLlama is zero-shot</b>. Nevertheless, we hope our work can inspire future work to further improve zero-shot performance.</li>
+</ol>
+</div>
 </div>
 </div>
 </div>
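The `id="myTable"` / `dataTables_wrapper` / `dataTable` markup touched by this diff follows jQuery DataTables conventions. As a minimal sketch of how such a table is typically wired up (the script/stylesheet URLs and option values below are illustrative assumptions, not part of this commit):

```html
<!-- Illustrative sketch only: assumes jQuery and DataTables are loaded
     from a CDN; the chosen options are placeholders, not from this commit. -->
<link rel="stylesheet"
      href="https://cdn.datatables.net/1.13.6/css/jquery.dataTables.min.css">
<script src="https://code.jquery.com/jquery-3.7.1.min.js"></script>
<script src="https://cdn.datatables.net/1.13.6/js/jquery.dataTables.min.js"></script>
<script>
  $(document).ready(function () {
    // DataTables generates the "dataTables_wrapper" div and the
    // search/pagination controls referenced by the comments in the diff.
    $('#myTable').DataTable({
      paging: false,    // the page embeds the full results table
      searching: true,  // adds the search box inside the wrapper
      info: false       // hides the "Showing X of Y entries" footer
    });
  });
</script>
```

One caveat: the diff gives both the in-domain and the out-of-domain table the same `id="myTable"`, but HTML `id`s must be unique per page, and `$('#myTable')` matches only the first element, so the second table would be left uninitialized unless the ids are made distinct.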
