CIDer

Codes for "Towards Robust Multimodal Emotion Recognition under Missing Modalities and Distribution Shifts".

Python 3.10 · PyTorch 2.5

Usage

Clone the repository

git clone https://github.com/gw-zhong/CIDer.git

Download the datasets

Download the BERT models
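
As a minimal sketch, assuming the standard bert-base-uncased checkpoint from Hugging Face (check which checkpoint the paper actually uses), the weights can be pre-downloaded with the transformers library:

# Sketch: pre-download BERT weights with Hugging Face transformers.
# 'bert-base-uncased' is an assumption; substitute the checkpoint the paper specifies.
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained('bert-base-uncased')

# Save locally so model_path (set below) can point at this folder.
tokenizer.save_pretrained('./bert-base-uncased')
model.save_pretrained('./bert-base-uncased')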

Preparation

Create an (empty) folder for the results:

cd CIDer
mkdir results

and set data_path and model_path correctly in main.py, main_eval.py, and main_run.py.
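
For reference, a minimal sketch of what those settings might look like (where exactly they live inside each script is an assumption):

data_path = '/path/to/your/datasets'   # folder holding the downloaded datasets
model_path = './bert-base-uncased'     # folder holding the BERT weights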

Hyperparameter tuning

python main.py --[FLAGS]

Or, you can use the bash script for tuning:

bash scripts/run_all.sh

Please note that run_all.sh covers all the tasks and uses 8 GPUs for hyperparameter tuning. Select only the task or tasks you actually need instead of running all of them; a sketch follows.
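
As a minimal sketch, a single task can be tuned on one GPU like this (the --dataset flag is hypothetical; check the argparse definitions in main.py for the real flag names):

CUDA_VISIBLE_DEVICES=0 python main.py --dataset mosi  # --dataset is a hypothetical flag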

Evaluation

python main_eval.py --[FLAGS]

Guidance:

When running the evaluation, you need to set missing_mode correctly in main_eval.py. The specific settings are as follows (an example invocation follows the list):

  • Our proposed RMFM: --missing_mode RMFM

  • Traditional RMFM: --missing_mode RMFM_same

  • RMM: --missing_mode RMM

  • TMFM: --missing_mode TMFM

  • STMFM: --missing_mode STMFM

  • SMM: --missing_mode RMFM_same, and uncomment lines 169 to 175 and line 188 in main_eval.py.
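
For example, a plain RMFM evaluation would be invoked as follows (any additional task or path flags are assumptions; see the argparse definitions in main_eval.py):

python main_eval.py --missing_mode RMFM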

Single Training

python main_run.py --[FLAGS]
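
A sketch of one training run with fixed hyperparameters (both flags below are hypothetical; consult the argparse setup in main_run.py for the real names):

python main_run.py --dataset mosi --seed 42  # hypothetical flags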

Reproduction

To facilitate reproduction of the results in the paper, we have also uploaded the corresponding model weights.

You just need to run main_eval.py to reproduce the results.

Please note that when evaluating a given model, you should also set the corresponding task parameters in main_eval.py.

Citation

Please cite our paper if you find it useful for your research:

@article{zhong2025towards,
  title={Towards Robust Multimodal Emotion Recognition under Missing Modalities and Distribution Shifts},
  author={Zhong, Guowei and Huan, Ruohong and Wu, Mingzhen and Liang, Ronghua and Chen, Peng},
  journal={arXiv preprint arXiv:2506.10452},
  year={2025}
}

Contact

If you have any questions, feel free to contact me at [email protected] or [email protected].

Acknowledgment

Our code is based on MulT and SELF-MM, and our repartitioned MER OOD Datasets are based on CLUE. Thanks to their open-source spirit for saving us a lot of time.
