# Calibrating Multimodal Consensus for Emotion Recognition

Code for *Calibrating Multimodal Consensus for Emotion Recognition* (CMC).
```bash
git clone https://github.com/gw-zhong/CMC.git
```
Set the `data_path` and the `model_path` correctly in `main.py`.
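As a sketch of what that setup might look like (the variable names and directory layout below are placeholders; check `main.py` for the actual names):

```python
# Hypothetical sketch: where the dataset features and checkpoints might live.
# The variable names and paths are placeholders -- adjust to match main.py.
import os

data_path = os.path.expanduser("~/datasets/MSA")      # placeholder: folder holding SIMS/MOSI/MOSEI features
model_path = os.path.expanduser("~/checkpoints/CMC")  # placeholder: where trained weights are saved

os.makedirs(model_path, exist_ok=True)
print(os.path.join(data_path, "SIMS"))
```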
Train the CMC model (with pseudo unimodal labels):

```bash
python main.py --dataset SIMS --transformer_layers 5 --nhead 4 --out_dropout 0.4 --is_pseudo
python main.py --dataset SIMS-v2 --transformer_layers 4 --nhead 2 --out_dropout 0.3 --is_pseudo
python main.py --dataset MOSI --transformer_layers 2 --nhead 4 --out_dropout 0.5 --is_pseudo
python main.py --dataset MOSEI --transformer_layers 2 --nhead 4 --out_dropout 0.0 --is_pseudo
```
Or use the ground-truth unimodal labels (CMC-GT):
```bash
python main.py --dataset SIMS --transformer_layers 1 --nhead 2 --out_dropout 0.1 --finetune
python main.py --dataset SIMS-v2 --transformer_layers 4 --nhead 8 --out_dropout 0.1 --finetune
```
Or train with the uploaded pretrained model weights (CMC):

```bash
python main.py --dataset SIMS --transformer_layers 5 --nhead 4 --out_dropout 0.4 --is_pseudo --finetune --pretrained_model
python main.py --dataset SIMS-v2 --transformer_layers 4 --nhead 2 --out_dropout 0.3 --is_pseudo --finetune --pretrained_model
python main.py --dataset MOSI --transformer_layers 2 --nhead 4 --out_dropout 0.5 --is_pseudo --finetune --pretrained_model
python main.py --dataset MOSEI --transformer_layers 2 --nhead 4 --out_dropout 0.0 --is_pseudo --finetune --pretrained_model
```
Or use the ground-truth unimodal labels (CMC-GT):
```bash
python main.py --dataset SIMS --transformer_layers 1 --nhead 2 --out_dropout 0.1 --finetune --pretrained_model
python main.py --dataset SIMS-v2 --transformer_layers 4 --nhead 8 --out_dropout 0.1 --finetune --pretrained_model
```
Or run the provided script:

```bash
bash script.sh
```
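If you prefer to script the runs yourself, a minimal loop over the per-dataset settings listed above could look like this (a sketch only; the repo's own `script.sh` may differ — the loop prints the commands rather than executing them):

```shell
#!/usr/bin/env bash
# Sketch: loop over the per-dataset CMC hyperparameters from the commands above.
# Uses echo as a dry run; drop the echo to actually launch training.
set -euo pipefail

declare -A LAYERS=( [SIMS]=5 [SIMS-v2]=4 [MOSI]=2 [MOSEI]=2 )
declare -A HEADS=(  [SIMS]=4 [SIMS-v2]=2 [MOSI]=4 [MOSEI]=4 )
declare -A DROPS=(  [SIMS]=0.4 [SIMS-v2]=0.3 [MOSI]=0.5 [MOSEI]=0.0 )

for ds in SIMS SIMS-v2 MOSI MOSEI; do
  echo python main.py --dataset "$ds" \
    --transformer_layers "${LAYERS[$ds]}" \
    --nhead "${HEADS[$ds]}" \
    --out_dropout "${DROPS[$ds]}" \
    --is_pseudo
done
```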
To tune hyperparameters:

```bash
python main_tune.py --dataset [SIMS/SIMS-v2/MOSI/MOSEI] [--is_pseudo]
```
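Conceptually, tuning amounts to searching over the hyperparameters exposed by `main.py`. A minimal random-search sketch (the search space below is illustrative, not the grid actually used by `main_tune.py`):

```python
# Illustrative random search over the flags exposed by main.py.
# The search space is a guess for demonstration, not the repo's actual grid.
import random

SPACE = {
    "transformer_layers": [1, 2, 3, 4, 5],
    "nhead": [2, 4, 8],
    "out_dropout": [0.0, 0.1, 0.2, 0.3, 0.4, 0.5],
}

def sample_config(rng: random.Random) -> dict:
    """Draw one configuration uniformly from the search space."""
    return {k: rng.choice(v) for k, v in SPACE.items()}

rng = random.Random(0)  # fixed seed for reproducible sampling
trials = [sample_config(rng) for _ in range(5)]
for cfg in trials:
    flags = " ".join(f"--{k} {v}" for k, v in cfg.items())
    print(f"python main.py --dataset SIMS --is_pseudo {flags}")
```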
Note:

- with `--is_pseudo`: trains the CMC model;
- without `--is_pseudo`: trains the CMC-GT model (currently only SIMS/SIMS-v2 are supported).
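The flags above follow the standard `argparse` boolean-flag pattern. A self-contained sketch (flag names mirror the commands above, but the defaults and help strings are illustrative, not copied from the repo's parser):

```python
# Minimal argparse sketch mirroring the command-line flags shown above.
# Defaults and help text are illustrative, not taken from main.py.
import argparse

def build_parser() -> argparse.ArgumentParser:
    p = argparse.ArgumentParser(description="CMC training (sketch)")
    p.add_argument("--dataset", choices=["SIMS", "SIMS-v2", "MOSI", "MOSEI"], required=True)
    p.add_argument("--transformer_layers", type=int, default=2)
    p.add_argument("--nhead", type=int, default=4)
    p.add_argument("--out_dropout", type=float, default=0.0)
    p.add_argument("--is_pseudo", action="store_true",
                   help="use pseudo unimodal labels (CMC); omit for CMC-GT")
    p.add_argument("--finetune", action="store_true")
    p.add_argument("--pretrained_model", action="store_true")
    return p

# Parse one of the example command lines from this README.
args = build_parser().parse_args(
    "--dataset SIMS --transformer_layers 5 --nhead 4 --out_dropout 0.4 --is_pseudo".split()
)
print(args.dataset, args.is_pseudo)
```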
To facilitate reproduction of the results in the paper, we have also uploaded the corresponding model weights:

- BaiduYun Disk (extraction code: `2rtn`)
If you have any questions, feel free to contact me at [email protected].