OntoTune

In this work, we propose OntoTune, an ontology-driven self-training framework that aligns LLMs with an ontology through in-context learning, enabling the model to generate responses guided by the ontology.

🔔 News

📄 arXiv | 🤗 Hugging Face

  • 2025-01 OntoTune is accepted by WWW 2025!
  • 2025-02 Our paper is released on arXiv!
  • 2025-06 Our model is released on Hugging Face!

🚀 How to start

git clone https://github.com/zjukg/OntoTune.git

The fine-tuning code is built on the open-source repo LLaMA-Factory.

Dependencies

cd LLaMA-Factory
pip install -e ".[torch,metrics]"

Data Preparation

  1. The supervised instruction-tuning data generated by LLaMA3 8B for the LLM itself is available at the link.
  2. Put the downloaded OntoTune_sft.json file under the LLaMA-Factory/data/ directory (see the example entry sketched after this list).
  3. Evaluation datasets for hypernym discovery and medical question answering are in LLaMA-Factory/data/evaluation_HD and LLaMA-Factory/data/evaluation_QA, respectively.
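
For orientation, here is a hypothetical example of what a single entry in OntoTune_sft.json might look like, assuming LLaMA-Factory's standard alpaca-style schema (instruction / input / output). The field contents below are illustrative assumptions, not the actual released data:

[
  {
    "instruction": "Given the ontology path 'disease -> cardiovascular disease -> hypertension', describe hypertension in a way consistent with this hierarchy.",
    "input": "",
    "output": "Hypertension is a cardiovascular disease characterized by persistently elevated arterial blood pressure ..."
  }
]

Note that LLaMA-Factory resolves dataset names through LLaMA-Factory/data/dataset_info.json, so the file must be registered there under the name referenced by the training config.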

Finetune LLaMA3

You need to add the model_name_or_path parameter to the YAML file (a sketch of such a config follows the command below).

cd LLaMA-Factory
llamafactory-cli train script/OntoTune_sft.yaml
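
As a rough sketch, a minimal LLaMA-Factory SFT config of this shape might look like the following. This is not the repo's actual script/OntoTune_sft.yaml; the checkpoint path and all hyperparameters are placeholder assumptions:

### model: replace with your local or Hugging Face LLaMA3 checkpoint (placeholder)
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct

### method
stage: sft
do_train: true
finetuning_type: lora

### dataset: must match the name registered in data/dataset_info.json
dataset: OntoTune_sft
template: llama3
cutoff_len: 2048

### output
output_dir: saves/llama3-8b/ontotune_sft
logging_steps: 10
save_steps: 500

### train (illustrative hyperparameters)
per_device_train_batch_size: 2
gradient_accumulation_steps: 8
learning_rate: 1.0e-4
num_train_epochs: 3.0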

🤝 Cite:

Please consider citing this paper if you find our work useful.


@inproceedings{DBLP:conf/www/LiuGWZBSC025,
  author       = {Zhiqiang Liu and
                  Chengtao Gan and
                  Junjie Wang and
                  Yichi Zhang and
                  Zhongpu Bo and
                  Mengshu Sun and
                  Huajun Chen and
                  Wen Zhang},
  editor       = {Guodong Long and
                  Michael Blumenstein and
                  Yi Chang and
                  Liane Lewin{-}Eytan and
                  Zi Helen Huang and
                  Elad Yom{-}Tov},
  title        = {OntoTune: Ontology-Driven Self-training for Aligning Large Language
                  Models},
  booktitle    = {Proceedings of the {ACM} on Web Conference 2025, {WWW} 2025, Sydney,
                  NSW, Australia, 28 April 2025 - 2 May 2025},
  pages        = {119--133},
  publisher    = {{ACM}},
  year         = {2025},
  url          = {https://doi.org/10.1145/3696410.3714816},
  doi          = {10.1145/3696410.3714816},
  timestamp    = {Wed, 23 Apr 2025 16:35:50 +0200},
  biburl       = {https://dblp.org/rec/conf/www/LiuGWZBSC025.bib},
  bibsource    = {dblp computer science bibliography, https://dblp.org}
}
