Are LLM-based Evaluators Confusing NLG Quality Criteria?

This is the official repository for our ACL 2024 paper, Are LLM-based Evaluators Confusing NLG Quality Criteria?

We release the following data and code used in our work (a minimal loading sketch follows the list):

  • Aspect criteria (including their different descriptions): aspect_criteria.json
  • Prompts generated for LLM-based evaluation: eval_prompt.py
  • Prompts (including the examples and instructions) and rule-based code for constructing perturbations: Perturbations/
  • Data for the experiments (including the refined references, perturbed texts, and other information): data_all.json
  • Experimental results for three LLMs (including the average rating for each test item): Eval_results/
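As a quick orientation, the sketch below shows one way to load the released JSON files in Python. The file names come from this repository, but their internal key structure is not assumed here; the snippet only inspects the top level of each file.

import json

# Load the aspect criteria (the evaluated quality criteria and their
# alternative descriptions); the exact key layout is not assumed here.
with open("aspect_criteria.json", encoding="utf-8") as f:
    aspect_criteria = json.load(f)

# Load the experimental data (refined references, perturbed texts, etc.).
with open("data_all.json", encoding="utf-8") as f:
    data_all = json.load(f)

# Inspect the top-level structure of each file.
for name, obj in [("aspect_criteria", aspect_criteria), ("data_all", data_all)]:
    if isinstance(obj, dict):
        print(name, "keys:", list(obj.keys())[:10])
    else:
        print(name, "entries:", len(obj))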

Citation

@inproceedings{hu2024llm,
  title={Are LLM-based Evaluators Confusing NLG Quality Criteria?},
  author={Hu, Xinyu and Gao, Mingqi and Hu, Sen and Zhang, Yang and Chen, Yicheng and Xu, Teng and Wan, Xiaojun},
  booktitle={Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)},
  pages={9530--9570},
  year={2024}
}
