GTA: Gated Toxicity Avoidance for LM Performance Preservation

This repository contains the code for the paper "GTA: Gated Toxicity Avoidance for LM Performance Preservation", accepted to Findings of EMNLP 2023.

Dependencies

  1. Our code is based on Python 3.9.
  2. Install the required packages using pip:

pip install -r requirements.txt

To use PPLM, you need a separate environment. Set it up with:

pip install -r requirements_pplm.txt

  3. Download the detoxifier models from link and unzip them into models/.

Text generation using detoxifier

If you want few-shot generation with the gpt2-large LM, you need to specify a prompt directory. Pre-generated prompts are available in the prompt/fewshot/v1 directory. If you want different prompts, use the code in generate_prompt.py.

The run_ft.py and run_fewshot.py scripts generate and store N texts for each of 13 topics using a detoxifier. The 13 topics fall into three topic groups.

| Topic group | Topics                                             | Dataset       |
|-------------|----------------------------------------------------|---------------|
| Sentiment   | positive / negative                                | yelp-polarity |
| Emotion     | anger / fear / surprise / joy / sadness / love     | emotion       |
| News        | business / entertainment / politics / sport / tech | bbc-news      |
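The three groups and 13 topics can be written down programmatically; this is only an illustration of the topic taxonomy above, not code from the repository:

```python
# Topic taxonomy used for generation: three groups, 13 topics in total.
TOPIC_GROUPS = {
    "sentiment": ["positive", "negative"],                                # yelp-polarity
    "emotion": ["anger", "fear", "surprise", "joy", "sadness", "love"],   # emotion
    "news": ["business", "entertainment", "politics", "sport", "tech"],   # bbc-news
}

ALL_TOPICS = [topic for topics in TOPIC_GROUPS.values() for topic in topics]
assert len(ALL_TOPICS) == 13
```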

The following examples generate texts without a detoxifier.

# gpt2-small fine-tuned model generation
python run_ft.py \
    --model-type "gpt2" \
    --n 1000 \
    $OUTPUT

# gpt2-large fewshot generation
python run_fewshot.py --model "gpt2-large" \
    --model-type "gpt2" \
    --n 100 \
    --prompt-dir prompt/fewshot/v1 \
    $OUTPUT

You can also change the top-p and top-k arguments; top-k defaults to 50, and top-p defaults to 0.9 (small) and 1.0 (large).
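As a reminder of what these two knobs do, here is a minimal sketch of top-k and top-p (nucleus) filtering over a token distribution. This is an illustration of the sampling filters only, not the repository's implementation:

```python
def filter_top_k_top_p(probs, k=50, p=0.9):
    """Return the (token_id, prob) pairs kept after top-k then top-p filtering.

    `probs` is a list of probabilities indexed by token id. Top-k keeps the
    k most probable tokens; top-p then keeps the smallest prefix of those
    whose cumulative probability reaches p.
    """
    ranked = sorted(enumerate(probs), key=lambda x: x[1], reverse=True)
    ranked = ranked[:k]  # top-k: keep only the k most probable tokens
    kept, cum = [], 0.0
    for tok, pr in ranked:  # top-p: stop once cumulative mass reaches p
        kept.append((tok, pr))
        cum += pr
        if cum >= p:
            break
    return kept
```

In real decoding the kept probabilities would be renormalized before sampling; lowering p or k makes generation more conservative.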

PPLM

For PPLM, most parameters cannot be changed via command-line arguments. If you want to change other parameters, edit PPLM/pplm_gated.py and PPLM/pplm.py.

cd PPLM

count=3

# for gated detoxifier
python3 pplm_gated.py \
    --top_k 50 \
    --top_p 0.9 \
    --print-result \
    --n $count \
    --label-class 1 \
    --sample \
    --gate_threshold 0.005 \
    --output-file "../output/test/pplm_gated_$count.jsonl"

python3 pplm.py \
    --top_k 50 \
    --top_p 0.9 \
    --print-result \
    --n $count \
    --label-class 1 \
    --sample \
    --output-file "../output/test/pplm_$count.jsonl"

GeDi

Only disc_weight (omega) can be changed via a command-line argument. If you want to change other parameters, edit GeDi/generator.py.

OMEGA=30

# gpt2-small
python run_ft.py \
    --model-type "gedi" \
    --disc_weight $OMEGA \
    --n $N \
    output/small/gedi.jsonl    

# gpt2-large
python run_fewshot.py \
    --model "gpt2-large" \
    --model-type "gedi" \
    --n 100 \
    --disc_weight $OMEGA \
    --prompt-dir prompt/fewshot/v1 \
    output/large/gedi.jsonl

DExperts

ALPHA=1.0

# gpt2-small
python run_ft.py \
    --model-type "dexperts" \
    --alpha $ALPHA \
    --n 100 \
    output/small/dexperts.jsonl    

# gpt2-large fewshot
python run_fewshot.py \
    --model "gpt2-large" \
    --model-type "dexperts" \
    --n 100 \
    --alpha $ALPHA \
    --prompt-dir prompt/fewshot/v1 \
    output/large/dexperts.jsonl

DisCup

Only gpt2-large generation is available for DisCup.

# DisCup large
python run_fewshot.py \
    --model "gpt2-large" \
    --model-type "discup" \
    --n 100 \
    --ranking_scope 10 \
    --prompt-dir prompt/fewshot/v1 \
    output/large/discup.jsonl

Text generation using gated detoxifier

You can add a gate to any detoxifier; just add the gate-model argument.

GATE_MODEL="s-nlp/roberta_toxicity_classifier"
GATE_THRESHOLD=0.005

OMEGA=30

# Gated GeDi small
python run_ft.py \
    --model-type "gedi" \
    --disc_weight $OMEGA \
    --n $N \
    --gate-model $GATE_MODEL \
    --gate-threshold $GATE_THRESHOLD \
    output/small/gedi_gated.jsonl

# Gated DisCup large
python run_fewshot.py \
    --model "gpt2-large" \
    --model-type "discup" \
    --n 100 \
    --ranking_scope 10 \
    --gate-model $GATE_MODEL \
    --gate-threshold $GATE_THRESHOLD \
    --prompt-dir prompt/fewshot/v1 \
    output/large/discup_gated.jsonl
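Conceptually, the gate runs a toxicity classifier (e.g. the s-nlp/roberta_toxicity_classifier gate model above) over the current generation and applies the detoxifier only when the toxicity probability exceeds the gate threshold; otherwise the base LM's distribution is used unchanged, which is how LM performance is preserved. A minimal sketch of that decision, assuming a `toxicity_prob` score already computed by the classifier (this illustrates the gating logic only, not the repository's actual implementation):

```python
def should_detoxify(toxicity_prob: float, gate_threshold: float = 0.005) -> bool:
    """Gate decision: run the detoxifier only if the classifier flags toxicity.

    `toxicity_prob` is assumed to come from a toxicity classifier; when the
    gate stays closed, generation falls back to the unmodified base LM.
    """
    return toxicity_prob > gate_threshold
```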

Evaluating generated texts

Get your Perspective API key from https://perspectiveapi.com/, or use a classifier instead.

export PERSPECTIVE_API_KEY=your_api_key

# for gpt2-small
python eval.py output/small/gedi.jsonl

# for gpt2-large
python eval.py --large output/large/gedi.jsonl

# if you don't want to use the Perspective API for toxicity evaluation
python eval.py --no-perspective output/small/gedi.jsonl
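If you want to post-process the generated .jsonl files yourself, a sketch like the following averages a per-text toxicity score. The `toxicity` field name is an assumption for illustration; inspect the files produced by the generation and evaluation scripts for the actual keys:

```python
import json

def mean_toxicity(jsonl_path, score_field="toxicity"):
    """Average a per-record toxicity score over a generated .jsonl file.

    Each line is assumed to be a JSON object; records without `score_field`
    (a hypothetical key for this sketch) are skipped.
    """
    scores = []
    with open(jsonl_path) as f:
        for line in f:
            record = json.loads(line)
            if score_field in record:
                scores.append(record[score_field])
    return sum(scores) / len(scores) if scores else None
```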

Remove '\r' (for Windows):

sed -i 's/\r$//' ./generate_large.sh
