This repository contains the code for our Information Fusion paper "End-to-end multimodal affect recognition in real-world environments". If you use this codebase in your experiments, please cite:

P. Tzirakis, J. Chen, S. Zafeiriou, and B. Schuller. "End-to-end multimodal affect recognition in real-world environments." Information Fusion 68 (2021): 46-53.
(https://www.sciencedirect.com/science/article/pii/S1566253520303808)
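For convenience, the citation above can be expressed as a BibTeX entry. The entry key and author first names are assumptions; the bibliographic fields come from the citation itself:

```bibtex
@article{tzirakis2021end,
  title   = {End-to-end multimodal affect recognition in real-world environments},
  author  = {Tzirakis, Panagiotis and Chen, Jiaxin and Zafeiriou, Stefanos and Schuller, Bj{\"o}rn},
  journal = {Information Fusion},
  volume  = {68},
  pages   = {46--53},
  year    = {2021}
}
```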
Each folder contains training/evaluation scripts for a single modality, along with instructions for running the experiments.

The following packages are required to run the code:
- Python == 3.7
- Tensorflow == 1.15
- Gensim == 4.0.1
- NLTK == 3.6.2
- Librosa == 0.8.0
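The pinned versions above can be captured in a `requirements.txt` file (the file name and the use of pip are assumptions, not part of the original instructions):

```
tensorflow==1.15
gensim==4.0.1
nltk==3.6.2
librosa==0.8.0
```

Installing with `pip install -r requirements.txt` inside a Python 3.7 environment should then reproduce the stated setup.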
CODE FOR VISUAL AND MULTIMODAL MODELS WILL BE UPLOADED SOON.