Vincent Auriau¹·², Khaled Belahcène¹, Emmanuel Malherbe², Vincent Mousseau¹
¹ MICS - CentraleSupélec, ² Artefact Research Center
Abstract: Additive preference representation is standard in Multiple Criteria Decision Analysis, and learning such a preference model dates back to the UTA method. In this seminal work, an additive piecewise-linear model is inferred from a learning set composed of pairwise comparisons. In this setting, the learning set is provided by a single Decision-Maker (DM), and an additive model is inferred to match the learning set. We extend this framework to the case where (i) multiple DMs with heterogeneous preferences provide parts of the learning set, and (ii) the learning set is provided as a whole without knowing which DM expressed each pairwise comparison. Hence, the problem amounts to inferring a preference model for each DM while simultaneously "discovering" the segmentation of the learning set. In this paper, we show that this problem is computationally difficult. We propose a resolution approach based on mathematical programming to solve this Preference Learning and Segmentation (PLS) problem. We also propose a heuristic to deal with large datasets. We study the performance of both algorithms through experiments on synthetic and real data.
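For intuition, here is a schematic view of the model described in the abstract; the notation below is illustrative, not copied verbatim from the paper:

```latex
% Illustrative sketch of the PLS problem (notation assumed, not verbatim from the paper).
% Cluster k scores an alternative x with an additive utility made of
% piecewise-linear marginal utility functions u_j^k:
U^k(x) = \sum_{j=1}^{n} u_j^k(x_j)
% A pairwise comparison "x_i preferred to y_i" is explained by cluster k whenever
U^k(x_i) \geq U^k(y_i) + \varepsilon
% PLS seeks K such additive models so that every comparison in the learning set
% is explained by at least one cluster.
```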
Clone this repository:
git clone https://github.com/artefactory/learning-heterogeneous-preferences.git
Install the dependencies:
cd learning-heterogeneous-preferences
pip install -r requirements.txt
To run the experiments on synthetic data, use the following command:
python run_synthetic_experiments.py save_synth_xps --repetitions 4 --n_clusters 1 2 3 \
--learning_set_size 128 1024 --error 0 5
This generates data with four different random seeds and fits both models (MILO and heuristic) for every combination of the parameters below (the full grid is enumerated in the sketch after the list). Data, models, and results are saved in the save_synth_xps directory:
- n_clusters = [1, 2, 3]
- n_criteria = 6
- learning_set_size = [128, 1024]
- error = [0, 5]
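For reference, a minimal sketch of how these options expand into individual runs; the loop body is a placeholder for what the script does per configuration, and only the parameter values come from the command above:

```python
from itertools import product

# Parameter grid matching the command above: 4 seeds x 3 cluster counts
# x 2 learning-set sizes x 2 error levels = 48 configurations in total.
repetitions = range(4)  # random seeds
n_clusters = [1, 2, 3]
learning_set_size = [128, 1024]
error = [0, 5]

for seed, k, n, e in product(repetitions, n_clusters, learning_set_size, error):
    # Placeholder for one experiment: generate data, fit the MILO and
    # heuristic models, then save data, models, and results.
    print(f"seed={seed}, n_clusters={k}, learning_set_size={n}, error={e}")
```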
The notebook notebooks/synthetic_results.ipynb shows how to read and analyse the results.
The stated preferences for cars dataset used in the paper can be downloaded here.
It is also part of the choice-learn package, which can be installed with pip install choice-learn.
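As an illustration, loading the dataset through choice-learn might look like the following; the loader name load_car_preferences is an assumption on our part, so check the choice-learn documentation for the exact function:

```python
# Hypothetical sketch: the exact loader name in choice_learn.datasets may differ;
# consult the choice-learn documentation.
from choice_learn.datasets import load_car_preferences

cars = load_car_preferences()  # stated preferences for cars dataset
print(cars)
```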
Then, run the following command:
python run_cars_experiments.py save_cars_xps --repetitions 2 --n_clusters 2 3 4 5 \
--learning_set_size 128 512
This estimates the MILO and heuristic models with:
- learning_set_size = [128, 512]
- 2 different random seeds for the train/test split
- n_clusters = [2, 3, 4, 5]
The notebook notebooks/cars_results.ipynb shows how to read and analyse the results.
The different models can be used on your own data as follows:
from python.models import UTA, ClusterUTA
from python.heuristics import Heuristic

model = ClusterUTA(
    n_pieces=5,    # number of linear pieces in each marginal utility function
    n_clusters=3,  # number of preference models (DM segments) to learn
    epsilon=0.05,  # minimal utility margin between compared alternatives
)
history = model.fit(X, Y)
print(model.predict_utility(X))
All the models have similar signatures. In particular, in .fit(X, Y), X and Y must be matrices of the same shape, where:
- X[i] represents the features of alternative $x_i$,
- Y[i] represents the features of alternative $y_i$,
- $x_i$ has been preferred to $y_i$.
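As a concrete example, assembling X and Y from pairwise comparisons could look like this; the data below is random and purely illustrative:

```python
import numpy as np

# Toy data: 10 pairwise comparisons over alternatives described by 6 criteria.
# Row i of X is the preferred alternative x_i; row i of Y is the rejected one y_i.
rng = np.random.default_rng(0)
X = rng.random((10, 6))  # features of the preferred alternatives
Y = rng.random((10, 6))  # features of the rejected alternatives

# X and Y must have the same shape, as required by .fit(X, Y).
assert X.shape == Y.shape
```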
More details on the different hyper-parameters are given in the docstrings of the models. The notebook notebooks/train_on_other_data.ipynb also shows an example.
This work is released under the MIT license.
If you find this work useful for your research, please cite our paper:
@InProceedings{AuriauPLS:2024,
author="Auriau, Vincent
and Belahc{\`e}ne, Khaled
and Malherbe, Emmanuel
and Mousseau, Vincent",
editor="Freeman, Rupert
and Mattei, Nicholas",
title="Learning Multiple Multicriteria Additive Models from Heterogeneous Preferences",
booktitle="Algorithmic Decision Theory",
year="2025",
publisher="Springer Nature Switzerland",
address="Cham",
pages="207--224",
isbn="978-3-031-73903-3"
}