Code for TriSAT: Trimodal Representation Learning for Multimodal Sentiment Analysis (accepted by IEEE/ACM Transactions on Audio, Speech, and Language Processing).
git clone https://github.com/gw-zhong/TriSAT.git
- CMU-MOSI & CMU-MOSEI (GloVe) [aligned & unaligned] (the original download links are currently unavailable)
Alternatively, you can download these datasets from:
- BaiduYun Disk (code: zpqk)
For convenience, we also provide the BERT pre-trained model that we fine-tuned:
- BaiduYun Disk (code: e7mw)
Create (empty) folders for data, results, and models:
cd TriSAT
mkdir input results models
and put the downloaded data in 'input/'.
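To sanity-check the data before training, you can inspect the pickled splits. The sketch below is only an illustration: the file name input/mosi_data.pkl and the train/valid/test dictionary layout are assumptions and may differ from the actual download.

import os
import pickle

# Hypothetical file name; replace it with whatever the downloaded archive actually contains.
data_path = os.path.join('input', 'mosi_data.pkl')

with open(data_path, 'rb') as f:
    data = pickle.load(f)

# Assumed layout: one dict per split holding text/audio/vision features and labels.
for split in ('train', 'valid', 'test'):
    features = data.get(split, {})
    print(split, {name: getattr(array, 'shape', None) for name, array in features.items()})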
Then, to train and evaluate, run:
python main_[DatasetName].py [--FLAGS]
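For example, assuming the entry scripts are named after the datasets above (these names follow the main_[DatasetName].py pattern but are not confirmed; check the repository for the exact file names and supported flags, e.g. via --help if the scripts use argparse):

python main_mosi.py
python main_mosei.py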
Please cite our paper if you find it useful for your research:
@article{huan2024trisat,
  title={TriSAT: Trimodal Representation Learning for Multimodal Sentiment Analysis},
  author={Huan, Ruohong and Zhong, Guowei and Chen, Peng and Liang, Ronghua},
  journal={IEEE/ACM Transactions on Audio, Speech, and Language Processing},
  year={2024},
  publisher={IEEE}
}
If you have any questions, feel free to contact me at [email protected].