
# End-to-end multimodal affect recognition in real-world environments

This repository contains the code for our Information Fusion paper "End-to-end multimodal affect recognition in real-world environments". If you use this codebase in your experiments, please cite:

P. Tzirakis, J. Chen, S. Zafeiriou, and B. Schuller. "End-to-end multimodal affect recognition in real-world environments." Information Fusion 68 (2021): 46-53. (https://www.sciencedirect.com/science/article/pii/S1566253520303808)

## Content

Each folder contains the training/evaluation scripts for a single modality, along with instructions for running the experiments; a rough invocation sketch follows.
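As an illustration of how a per-modality experiment might be launched (the folder name `audio`, the script name `train.py`, and the flag below are hypothetical placeholders; consult the steps inside each folder for the actual commands):

```bash
# Hypothetical invocation; the real script names and arguments are
# documented in each modality's folder.
python audio/train.py --data_dir /path/to/dataset
```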

## Requirements

The following modules are required to run the code; an installation sketch follows the list.

- Python == 3.7
- Tensorflow == 1.15
- Gensim == 4.0.1
- NLTK == 3.6.2
- Librosa == 0.8.0
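A minimal setup sketch, assuming a pip-based workflow; the package names and version pins come from the list above, while the environment name `affect-env` is only illustrative:

```bash
# Create and activate an isolated environment (the name is illustrative).
python3.7 -m venv affect-env
source affect-env/bin/activate

# Install the pinned dependencies listed above.
pip install tensorflow==1.15 gensim==4.0.1 nltk==3.6.2 librosa==0.8.0
```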

CODE FOR VISUAL AND MULTIMODAL MODELS WILL BE UPLOADED SOON.

An implementation of the audio/visual/multimodal methods in PyTorch (along with pretrained models) can be found in our End2You toolkit.