diff --git a/.history/index_20231115215500.html b/.history/index_20231115215500.html
new file mode 100644
index 0000000..22b8ca2
--- /dev/null
+++ b/.history/index_20231115215500.html
@@ -0,0 +1,608 @@
+
+
+
+ Semi-structured tables are ubiquitous. There have been a variety of tasks that aim to automatically interpret, augment, and query tables. Current methods often require pretraining on tables or special model architecture design, are restricted to specific table types, or make simplifying assumptions about tables and tasks. This paper takes the first step towards developing open-source large language models (LLMs) as generalists for a diversity of table-based tasks. Towards that end, we construct TableInstruct, a new dataset with a variety of realistic tables and tasks, for instruction tuning and evaluating LLMs. We further develop the first open-source generalist model for tables, TableLlama, by fine-tuning Llama 2 (7B) with LongLoRA to address the long context challenge. We experiment under both in-domain and out-of-domain settings. On 7 out of 8 in-domain tasks, TableLlama achieves comparable or better performance than the SOTA for each task, despite the latter often having task-specific designs. On 6 out-of-domain datasets, it achieves 6-48 absolute point gains compared with the base model, showing that training on TableInstruct enhances the model's generalizability. We will open-source our dataset and trained model to boost future work on developing open generalist models for tables.
+
+@misc{zhang2023tablellama,
+ title={TableLlama: Towards Open Large Generalist Models for Tables},
+ author={Tianshu Zhang and Xiang Yue and Yifei Li and Huan Sun},
+ year={2023},
+ eprint={2311.09206},
+ archivePrefix={arXiv},
+ primaryClass={cs.CL}
+}
+
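Since the abstract describes TableLlama as Llama 2 (7B) instruction-tuned on TableInstruct, here is a minimal sketch of how one might load and query a released checkpoint with Hugging Face transformers. The hub ID "osunlp/TableLlama" and the prompt layout are assumptions for illustration, not details stated on this page; consult the released repository for the exact usage.

# Hedged sketch: querying an instruction-tuned table model with transformers.
# The model ID and prompt format below are assumptions, not confirmed by this page.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "osunlp/TableLlama"  # assumed Hugging Face hub ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Instruction-style prompt: a task description followed by a serialized table.
prompt = (
    "### Instruction:\nAnswer the question based on the given table.\n\n"
    "### Input:\n[TAB] col: | country | capital | [SEP] row 1: | France | Paris |\n"
    "Question: What is the capital of France?\n\n"
    "### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))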
+