Code for the paper *Towards Robust Multimodal Emotion Recognition under Missing Modalities and Distribution Shifts*.
```bash
git clone https://github.com/gw-zhong/CIDer.git
```
Datasets:
- IID: CMU-MOSI & CMU-MOSEI (BERT) [aligned & unaligned]
- OOD: CMU-MOSI & CMU-MOSEI (BERT) [aligned & unaligned]
  - BaiduYun Disk (code: 19db)
  - Hugging Face
  - BaiduYun Disk
- Cross-dataset: CMU-MOSI & CMU-MOSEI (BERT) [aligned]
  - BaiduYun Disk (code: e7mw)
Create an (empty) folder for the results:

```bash
cd CIDer
mkdir results
```
Then set the `data_path` and the `model_path` correctly in `main.py`, `main_eval.py`, and `main_run.py`.
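As a rough sketch of what this configuration might look like (the exact way `data_path` and `model_path` are defined in the scripts, and the directory names below, are assumptions rather than the repository's actual code):

```python
# Hypothetical placeholders -- replace with the real locations of your downloads.
data_path = '/path/to/downloaded/datasets'   # the CMU-MOSI / CMU-MOSEI files from the links above
model_path = '/path/to/downloaded/weights'   # the released checkpoints (needed for evaluation)
```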
To train the model, run:

```bash
python main.py --[FLAGS]
```
Or, you can use the bash script for tuning:

```bash
bash scripts/run_all.sh
```

Please note that `run_all.sh` contains all the tasks and uses 8 GPUs for hyperparameter tuning. Select only the task or tasks you actually need instead of running all of them.
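If you only need one configuration, a minimal sketch of a single-GPU run (using the standard `CUDA_VISIBLE_DEVICES` environment variable; `--[FLAGS]` stands for whatever task options you choose, as in the command above) is:

```bash
# Pin a single training run to GPU 0 instead of launching the full 8-GPU sweep.
CUDA_VISIBLE_DEVICES=0 python main.py --[FLAGS]
```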
To run the evaluation:

```bash
python main_eval.py --[FLAGS]
```
Guidance: when conducting the evaluation, you need to set the `missing_mode` in `main_eval.py` correctly (example commands are given after the list below). The specific settings are as follows:
- Our proposed RMFM: `--missing_mode RMFM`
- Traditional RMFM: `--missing_mode RMFM_same`
- RMM: `--missing_mode RMM`
- TMFM: `--missing_mode TMFM`
- STMFM: `--missing_mode STMFM`
- SMM: `--missing_mode RMFM_same`, and uncomment the sections in `main_eval.py` from line 169 to line 175 and line 188.
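For example, the corresponding evaluation commands would look like the following (only `--missing_mode` is shown; any other task flags, per the `--[FLAGS]` placeholder above, are omitted):

```bash
python main_eval.py --missing_mode RMFM        # our proposed RMFM
python main_eval.py --missing_mode RMFM_same   # traditional RMFM
python main_eval.py --missing_mode RMM         # RMM
python main_eval.py --missing_mode TMFM        # TMFM
python main_eval.py --missing_mode STMFM       # STMFM
python main_eval.py --missing_mode RMFM_same   # SMM (after uncommenting lines 169-175 and line 188)
```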
```bash
python main_run.py --[FLAGS]
```
To facilitate the reproduction of the results in the paper, we have also uploaded the corresponding model weights:
- BaiduYun Disk (code: 885a)
- Hugging Face
You just need to run `main_eval.py` to reproduce the results. Please note that when running the evaluation for a given model, you should also modify the relevant task parameters in `main_eval.py`.
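Concretely, a reproduction run might look like the sketch below; the checkpoint layout and the task parameters to set are assumptions that depend on which released model you downloaded:

```bash
# 1. Download the released weights and point model_path in main_eval.py at them.
# 2. Set the task parameters in main_eval.py to match that checkpoint.
# 3. Run the evaluation with the desired missing-modality setting, e.g.:
python main_eval.py --missing_mode RMFM
```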
Please cite our paper if you find it useful for your research:
```bibtex
@article{zhong2025towards,
  title={Towards Robust Multimodal Emotion Recognition under Missing Modalities and Distribution Shifts},
  author={Zhong, Guowei and Huan, Ruohong and Wu, Mingzhen and Liang, Ronghua and Chen, Peng},
  journal={arXiv preprint arXiv:2506.10452},
  year={2025}
}
```
If you have any questions, feel free to contact me at [email protected] or [email protected].
Our code is based on MulT and SELF-MM, and our repartitioned MER OOD datasets are based on CLUE. Thanks to their open-source spirit for saving us a lot of time.