# CPO Trainer

## Overview
Contrastive Preference Optimization (CPO) was introduced in the paper [Contrastive Preference Optimization: Pushing the Boundaries of LLM Performance in Machine Translation](https://huggingface.co/papers/2401.08417) by [Haoran Xu](https://huggingface.co/haoranxu), [Amr Sharaf](https://huggingface.co/amrsharaf), [Yunmo Chen](https://huggingface.co/yunmochen), Weiting Tan, Lingfeng Shen, Benjamin Van Durme, [Kenton Murray](https://huggingface.co/Kenton), and [Young Jin Kim](https://huggingface.co/ykim362). At a high level, CPO trains models to avoid generating adequate but not perfect translations in Machine Translation (MT) tasks. However, CPO is a general approximation of the DPO loss and can be applied to other domains, such as chat.

CPO aims to mitigate two fundamental shortcomings of SFT. First, SFT’s methodology of minimizing the discrepancy between predicted outputs and gold-standard references inherently caps model performance at the quality level of the training data. Secondly, SFT lacks a mechanism to prevent the model from rejecting mistakes in translations. The CPO objective is derived from the DPO objective.
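
Concretely, CPO keeps a DPO-style preference term but eliminates the need for a reference model and adds a behavior-cloning (negative log-likelihood) term on the chosen responses, weighted by `cpo_alpha`. Below is a minimal sketch of this loss with the default sigmoid preference term; it assumes summed per-sequence log-probabilities from the policy model, and the function name and reduction details are illustrative rather than the exact [`CPOTrainer`] implementation.

```python
import torch
import torch.nn.functional as F

def cpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             beta: float = 0.1,
             cpo_alpha: float = 1.0) -> torch.Tensor:
    """Sketch of CPO: reference-free preference loss plus an NLL term on the chosen responses."""
    # Preference term: logistic loss on the scaled log-probability margin,
    # i.e. the DPO sigmoid loss with the reference model dropped.
    preference_loss = -F.logsigmoid(beta * (policy_chosen_logps - policy_rejected_logps)).mean()
    # Behavior-cloning regularizer: negative log-likelihood of the chosen responses.
    nll_loss = -policy_chosen_logps.mean()
    return preference_loss + cpo_alpha * nll_loss
```
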
## Quick start
This example demonstrates how to train a model using the CPO method. We use the [Qwen 0.5B model](https://huggingface.co/Qwen/Qwen2-0.5B-Instruct) as the base model and the preference data from the [UltraFeedback dataset](https://huggingface.co/datasets/openbmb/UltraFeedback). You can browse the data on the dataset page.

Below is the script to train the model:
```python
# train_cpo.py
from datasets import load_dataset
from trl import CPOConfig, CPOTrainer
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2-0.5B-Instruct")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-0.5B-Instruct")
train_dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")

training_args = CPOConfig(output_dir="Qwen2-0.5B-CPO", logging_steps=10)
# Note: older TRL releases take `tokenizer=` instead of `processing_class=`.
trainer = CPOTrainer(model=model, args=training_args, processing_class=tokenizer, train_dataset=train_dataset)
trainer.train()
```
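
Execute the script with, for example, `accelerate launch train_cpo.py`.
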
## Expected dataset format
CPO requires a [preference dataset](dataset_formats#preference). The [`CPOTrainer`] supports both [conversational](dataset_formats#conversational-dataset-format) and [standard](dataset_formats#standard-dataset-format) dataset formats. When provided with a conversational dataset, the trainer will automatically apply the chat template to the dataset.
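
For illustration, a single example in the standard (non-conversational) preference format pairs a prompt with a preferred (`chosen`) and a dispreferred (`rejected`) completion; the values below are made up.

```python
# A standard-format preference example: one prompt, a preferred ("chosen")
# completion, and a dispreferred ("rejected") completion.
example = {
    "prompt": "Which is the best programming language?",
    "chosen": "It depends on the task, but Python is a popular general-purpose choice.",
    "rejected": "C++ is the only real language; everything else is a toy.",
}
```
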
## Example script
We provide an example script to train a model using the CPO method. The script is available in [`examples/scripts/cpo.py`](https://github.com/huggingface/trl/blob/main/examples/scripts/cpo.py).

To test the CPO script with the [Qwen2 0.5B model](https://huggingface.co/Qwen/Qwen2-0.5B-Instruct) on the [UltraFeedback dataset](https://huggingface.co/datasets/trl-lib/ultrafeedback_binarized), run the following command:
```bash
accelerate launch examples/scripts/cpo.py \
    --model_name_or_path Qwen/Qwen2-0.5B-Instruct \
    --dataset_name trl-lib/ultrafeedback_binarized \
    --num_train_epochs 1 \
    --logging_steps 25 \
    --output_dir Qwen2-0.5B-CPO
```
## Logged metrics
While training and evaluating we record the following reward metrics:

* `rewards/chosen`: the mean log probabilities of the policy model for the chosen responses scaled by beta
* `rewards/rejected`: the mean log probabilities of the policy model for the rejected responses scaled by beta
* `rewards/accuracies`: mean of how often the chosen rewards are greater than the corresponding rejected rewards
* `rewards/margins`: the mean difference between the chosen and corresponding rejected rewards
* `nll_loss`: the mean negative log likelihood loss of the policy model for the chosen responses

## CPO variants
### Simple Preference Optimization (SimPO)
The [SimPO](https://huggingface.co/papers/2405.14734) method is also implemented in the [`CPOTrainer`]. SimPO is an alternative loss that adds a reward margin, allows for length normalization, and does not use BC (behavior cloning) regularization. To use this loss, set `loss_type="simpo"` and `cpo_alpha=0` in the [`CPOConfig`].
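
For instance, a configuration along these lines selects the SimPO loss; the output directory and the `simpo_gamma` reward margin are shown with illustrative values.

```python
from trl import CPOConfig

training_args = CPOConfig(
    output_dir="Qwen2-0.5B-SimPO",  # illustrative output directory
    loss_type="simpo",              # switch from the default sigmoid CPO loss to SimPO
    cpo_alpha=0.0,                  # turn off the BC (NLL) regularizer
    simpo_gamma=0.5,                # SimPO reward margin (illustrative value)
)
```
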
### CPO-SimPO
We also offer the combined use of CPO and SimPO, which enables more stable training and improved performance. Learn more details at [CPO-SimPO GitHub](https://github.com/fe1ixxu/CPO_SIMPO). To use this method, simply enable SimPO by setting `loss_type="simpo"` and a non-zero `cpo_alpha` in the [`CPOConfig`].
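
A CPO-SimPO setup is the same sketch with a non-zero `cpo_alpha`; the weight below is illustrative.

```python
from trl import CPOConfig

training_args = CPOConfig(
    output_dir="Qwen2-0.5B-CPO-SimPO",  # illustrative output directory
    loss_type="simpo",
    cpo_alpha=0.5,  # non-zero NLL weight re-enables the BC term alongside SimPO
)
```
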
## Loss functions
The CPO algorithm supports several loss functions. The loss function can be set using the `loss_type` parameter in the [`CPOConfig`]. The following loss functions are supported:
| `"sigmoid"` (default) | Given the preference data, we can fit a binary classifier according to the Bradley-Terry model and in fact the [DPO](https://huggingface.co/papers/2305.18290) authors propose the sigmoid loss on the normalized likelihood via the `logsigmoid` to fit a logistic regression. |
| `"hinge"` | The [RSO](https://huggingface.co/papers/2309.06657) authors propose to use a hinge loss on the normalized likelihood from the [SLiC](https://huggingface.co/papers/2305.10425) paper. In this case, the `beta` is the reciprocal of the margin. |
| `"ipo"` | The [IPO](https://huggingface.co/papers/2310.12036) authors provide a deeper theoretical understanding of the DPO algorithms and identify an issue with overfitting and propose an alternative loss. In this case, the `beta` is the reciprocal of the gap between the log-likelihood ratios of the chosen vs the rejected completion pair and thus the smaller the `beta` the larger this gaps is. As per the paper the loss is averaged over log-likelihoods of the completion (unlike DPO which is summed only). |
### For Mixture of Experts Models: Enabling the auxiliary loss
MoEs are most efficient when the load is roughly evenly distributed between experts.
To ensure that we train MoEs similarly during preference-tuning, it is beneficial to add the auxiliary loss from the load balancer to the final loss.
This option is enabled by setting `output_router_logits=True` in the model config (e.g. [`~transformers.MixtralConfig`]).
To scale how much the auxiliary loss contributes to the total loss, use the hyperparameter `router_aux_loss_coef=...` (default: `0.001`) in the model config.
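
As a sketch, enabling this for a Mixtral-style MoE model before passing it to the [`CPOTrainer`] could look like the following; the checkpoint name is only illustrative, and the extra keyword arguments are forwarded to the model config.

```python
from transformers import AutoModelForCausalLM

# Enable the load-balancing auxiliary loss for an MoE model; the extra kwargs
# update the model config (e.g. MixtralConfig) at load time.
model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mixtral-8x7B-Instruct-v0.1",  # illustrative MoE checkpoint
    output_router_logits=True,
    router_aux_loss_coef=0.001,  # weight of the auxiliary loss in the total loss
)
```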