Code for the paper Adaptive Prompting: Ad-hoc Prompt Composition for Social Bias Detection.
For details on the approach, architecture, and underlying idea, please see the published paper.
@inproceedings{spliethover-etal-2025-adaptive,
title = {Adaptive Prompting: Ad-hoc Prompt Composition for Social Bias Detection},
author = {Splieth{\"o}ver, Maximilian and Knebler, Tim and Fumagalli, Fabian and Muschalik, Maximilian and Hammer, Barbara and H{\"u}llermeier, Eyke and Wachsmuth, Henning},
year = 2025,
month = apr,
booktitle = {Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)},
publisher = {Association for Computational Linguistics},
address = {Albuquerque, New Mexico},
pages = {2421--2449},
isbn = {979-8-89176-189-6},
url = {https://aclanthology.org/2025.naacl-long.122/},
editor = {Chiruzzo, Luis and Ritter, Alan and Wang, Lu}
}
All experiments were conducted using Python 3.10.14.
conda create --name prompt-compositions python=3.10.14 -y
conda activate prompt-compositions
# choose the appropriate CUDA version for your environment
conda install cuda -c nvidia/label/cuda-12.1.0
pip install -r requirements.txt
# we need to upgrade outlines to 0.0.39 in order to integrate with vLLM
pip install outlines==0.0.39 --force-reinstall --no-deps
# custom library for easier inference
pip install -e ./src/flex-infer

To run the experiments, download the pre-trained LLMs from Hugging Face:
- Mistral: https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2
- Command R: https://huggingface.co/CohereForAI/c4ai-command-r-v01
- Llama 3: https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct
You can set the path to the models using the models variable at the beginning of the run_experiments.sh script.
To obtain the similarity-based few-shot examples for the bias detection experiments, you’ll need to download the all-mpnet-base-v2 embedding model. By default, the model is expected at models/all-mpnet-base-v2, but you can customize this path by modifying the SENTENCE_TRANSFORMER_MODELS variable in the src/bias_detection/config/settings.py file.
The model is available on Hugging Face: https://huggingface.co/sentence-transformers/all-mpnet-base-v2
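Conceptually, the similarity-based few-shot selection reduces to nearest-neighbor retrieval over sentence embeddings: encode the input text, rank the candidate pool by cosine similarity, and keep the top k examples. The sketch below illustrates this with toy vectors; the function names and the example pool are illustrative and not part of the repository (in the actual experiments, the embeddings come from all-mpnet-base-v2):

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def select_few_shot(query_emb, pool, k=2):
    # Return the k pool entries whose embeddings are most similar to the query.
    ranked = sorted(pool, key=lambda ex: cosine(query_emb, ex["embedding"]), reverse=True)
    return ranked[:k]

# Toy 3-dimensional embeddings stand in for the 768-dimensional
# all-mpnet-base-v2 vectors used in the experiments.
pool = [
    {"text": "example A", "embedding": [1.0, 0.0, 0.0]},
    {"text": "example B", "embedding": [0.0, 1.0, 0.0]},
    {"text": "example C", "embedding": [0.9, 0.1, 0.0]},
]
shots = select_few_shot([1.0, 0.05, 0.0], pool, k=2)
print([ex["text"] for ex in shots])  # → ['example A', 'example C']
```

With real data, the only change is that the embeddings are produced by the downloaded sentence-transformers model rather than written by hand.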
The datasets can be found here:
- SBIC: https://maartensap.com/social-bias-frames/
- CobraFrame: https://huggingface.co/datasets/cmu-lti/cobracorpus
- Stereoset: https://huggingface.co/datasets/McGill-NLP/stereoset
- SemEval-2014-ABSA: https://huggingface.co/datasets/FangornGuardian/semeval-2014-absa
Save the datasets in the datasets/ directory and specify their paths using the DATASET_PATHS variable in the src/bias_detection/config/settings.py file.
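The exact contents of settings.py depend on your local checkout; a hypothetical DATASET_PATHS mapping (keys and paths are illustrative only, not the repository's actual values) might look like:

```python
from pathlib import Path

# Hypothetical example; adapt the keys and paths to your local setup.
DATASET_ROOT = Path("datasets")

DATASET_PATHS = {
    "sbic": DATASET_ROOT / "sbic",
    "cobra": DATASET_ROOT / "cobracorpus",
    "stereoset": DATASET_ROOT / "stereoset",
    "semeval-2014-absa": DATASET_ROOT / "semeval-2014-absa",
}
```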
To run the social bias detection experiments, execute the run_experiments.sh script, passing the path to the dataset-specific Python script. These scripts can be found in the experiments/ directory.
# path to the python script for the experiment: experiments/sbic_greedy.py
# prefix for the output files to distinguish the results from different runs: sbic-greedy_
# data split: test
# number of GPUs used: 1
# more arguments and their default values can be found in the script
./run_experiments.sh experiments/sbic_greedy.py sbic-greedy_ test 1

The trained models and their predictions on all datasets evaluated in the paper can be found on Hugging Face: