This repository contains the code for the paper "From Detection to Explanation: Effective Learning Strategies for LLMs in Online Abusive Language Research", published at COLING 2025.

ChiaraDiBonaventura/gllama-alarm

From Detection to Explanation: Effective Learning Strategies for LLMs in Online Abusive Language Research

Paper: https://aclanthology.org/2025.coling-main.141/

TL;DR: In this paper, we (1) demonstrate that LLMs' implicit knowledge, without an accurate learning strategy, does not effectively capture multi-class abusive language; (2) propose a simple, open-source knowledge-guided learning strategy to evaluate the impact of adding external knowledge to LLMs; (3) building on this strategy, release GLlama Alarm, a knowledge-guided version of Llama-2 instruction fine-tuned for multi-class abusive language detection and explanation generation; and (4) conduct an expert survey to evaluate LLMs in abusive language research, from detection to explanation.

Authors: Chiara Di Bonaventura, Lucia Siciliani, Pierpaolo Basile, Albert Meroño-Peñuela, Barbara McGillivray

Contact: [email protected]

📂 Repo info

This repository contains the code used to instruction-fine-tune Llama-2.
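As a rough illustration of what an instruction-tuning example for knowledge-guided, multi-class abusive language detection with explanation generation might look like, the sketch below formats one training sample. The template, field names, and label set are hypothetical assumptions for illustration only, not the repository's actual format:

```python
# Hypothetical sketch of formatting one instruction-tuning sample.
# The label set and prompt template below are illustrative assumptions,
# not the format used in this repository.

ABUSE_CLASSES = ["none", "offensive", "hate speech", "profane"]  # hypothetical labels

def build_instruction_example(text, label, explanation, knowledge=None):
    """Format one (text, label, explanation) triple as an instruction pair.

    `knowledge` is an optional snippet from an external, reliable source,
    mirroring the knowledge-guided idea: background facts are prepended so
    the model can condition its prediction and explanation on them.
    """
    knowledge_block = f"Background knowledge: {knowledge}\n" if knowledge else ""
    instruction = (
        "Classify the following text into one of: "
        + ", ".join(ABUSE_CLASSES)
        + ", and explain your decision.\n"
        + knowledge_block
        + f"Text: {text}"
    )
    response = f"Label: {label}\nExplanation: {explanation}"
    return {"instruction": instruction, "output": response}

example = build_instruction_example(
    text="You people never belong here.",
    label="hate speech",
    explanation="The message targets a group and asserts its exclusion.",
    knowledge="Statements denying a group's belonging often signal "
              "identity-based attacks.",
)
print(example["instruction"])
print(example["output"])
```

Pairs formatted this way could then be fed to any standard instruction fine-tuning pipeline for Llama-2.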

📎 Citation

@inproceedings{di-bonaventura-etal-2025-detection,
    title = "From Detection to Explanation: Effective Learning Strategies for {LLM}s in Online Abusive Language Research",
    author = "Di Bonaventura, Chiara  and
      Siciliani, Lucia  and
      Basile, Pierpaolo  and
      Merono Penuela, Albert  and
      McGillivray, Barbara",
    editor = "Rambow, Owen  and
      Wanner, Leo  and
      Apidianaki, Marianna  and
      Al-Khalifa, Hend  and
      Eugenio, Barbara Di  and
      Schockaert, Steven",
    booktitle = "Proceedings of the 31st International Conference on Computational Linguistics",
    month = jan,
    year = "2025",
    address = "Abu Dhabi, UAE",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2025.coling-main.141/",
    pages = "2067--2084",
    abstract = "Abusive language detection relies on understanding different levels of intensity, expressiveness and targeted groups, which requires commonsense reasoning, world knowledge and linguistic nuances that evolve over time. Here, we frame the problem as a knowledge-guided learning task, and demonstrate that LLMs' implicit knowledge without an accurate strategy is not suitable for multi-class detection nor explanation generation. We publicly release GLlama Alarm, the knowledge-Guided version of Llama-2 instruction fine-tuned for multi-class abusive language detection and explanation generation. By being fine-tuned on structured explanations and external reliable knowledge sources, our model mitigates bias and generates explanations that are relevant to the text and coherent with human reasoning, with an average 48.76{\%} better alignment with human judgment according to our expert survey."
}

📌 License

This project is licensed under the CC-BY-4.0 License.
