Exploring the Impact of Drivers' Emotion and Multi-task Learning on Takeover Behavior Prediction in Multimodal Environment
The code was refactored to integrate all experiments and baselines; please contact me if you find any bugs. Thanks.
- This study trains Multi-TBP on the EmoTake dataset, which can be downloaded through this link.
- We have placed the EmoTake dataset in the "data" folder, along with its description (README).
The results in the paper were produced with Python 3.8 and PyTorch 1.9.0 on a single NVIDIA RTX 3090. Note that different hardware and software environments may cause the results to fluctuate.
Depending on the dataset or task you run, adjust the necessary parameters in "opts.py", such as "--datasetName", "--labelType", "--num_class", and "--data_path".
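For reference, options like these are typically declared with argparse. The sketch below is only illustrative: the flag names match those listed above, but the defaults and help strings are assumptions, not the actual settings in "opts.py".

```python
# Minimal sketch of how the options in "opts.py" might be declared.
# Flag names follow the ones listed above; defaults are illustrative
# placeholders, not the repository's actual values.
import argparse

def parse_opts():
    parser = argparse.ArgumentParser(description="Multi-TBP training options")
    parser.add_argument("--datasetName", type=str, default="EmoTake",
                        help="which dataset to load")
    parser.add_argument("--labelType", type=str, default="takeover",
                        help="which label/task to predict (placeholder name)")
    parser.add_argument("--num_class", type=int, default=3,
                        help="number of target classes (placeholder value)")
    parser.add_argument("--data_path", type=str, default="./data",
                        help="root folder holding the dataset")
    return parser.parse_args()

if __name__ == "__main__":
    print(parse_opts())
```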
After adjusting the parameters, you can run this project with the following command:
```bash
python ./src/train.py
```
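If "opts.py" is wired into "train.py" via argparse, the same parameters can also be overridden on the command line instead of editing the file. The flag values below are assumed for illustration:

```bash
python ./src/train.py --datasetName EmoTake --labelType takeover --num_class 3 --data_path ./data
```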
The output results will be saved in the "log" folder.
Multi-TBP is not tied to a fixed number of modalities, input data dimensions, or fusion data dimensions; all of these can be configured uniformly. Therefore, Multi-TBP can be extended to other datasets or applied to new scenarios according to your needs.
If you want to change the dataset or usage scenario, please update the parameters in "opts.py".
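As an assumed example, switching to a new dataset would amount to pointing the same flags at the new data. The dataset name, label type, class count, and path below are all hypothetical:

```bash
python ./src/train.py --datasetName MyDataset --labelType reaction_time --num_class 2 --data_path ./data/my_dataset
```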
We gratefully acknowledge the open-source projects used in this work 🎉🎉, including EmoTake, among others 😄.
Paper publication address:
Please cite our paper if you find it valuable for your research (humbly begging for a citation T^T):
```bibtex
@inproceedings{feng2025exploring,
  title={Exploring the Impact of Drivers' Emotion and Multi-task Learning on Takeover Behavior Prediction in Multimodal Environment},
  author={Feng, Xinyu and Gu, Yu and Lin, Yuming and Cai, Yaojun},
  booktitle={Proceedings of the Extended Abstracts of the CHI Conference on Human Factors in Computing Systems},
  pages={1--8},
  year={2025}
}
```