Dissertation Project

MSc Speech and Language Processing, University of Edinburgh

Domain adaptation as feature extraction for multimodal emotion recognition

Code

Domain adaptation code is based on pytorch_DANN by CuthbertCai, which implements the domain-adversarial training method of Ganin and Lempitsky (2015).
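
For reference, here is a minimal sketch of the gradient reversal layer at the heart of DANN, written in plain PyTorch; it is an illustration of the mechanism, not the exact code from pytorch_DANN:

```python
from torch.autograd import Function


class GradReverse(Function):
    """Gradient reversal layer from Ganin and Lempitsky (2015): the
    identity on the forward pass, but multiplies the gradient by
    -lambda on the backward pass, so the feature extractor is trained
    adversarially against the domain classifier."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse and scale the gradient flowing back into the features;
        # None is the (non-existent) gradient with respect to lambd.
        return grad_output.neg() * ctx.lambd, None


def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)
```

Features pass through `grad_reverse` on their way to the domain classifier, which pushes the feature extractor toward domain-invariant representations.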

Factorized Multimodal Transformer (Zadeh et al., 2019) code is from A2Zadeh.

Data

This project uses two datasets for domain adaptation: CMU-MOSEI and IEMOCAP. CMU-MOSEI data is publicly available and can be found here.

In this project, I used the final aligned data from the ACL20 Challenge.

IEMOCAP data is not publicly available and requires permission to use. I used pre-processed and aligned data from CMU. More information can be found here.
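
For orientation, a minimal sketch of loading an aligned feature set with the CMU Multimodal SDK (mmsdk); the file names below are illustrative, not the exact paths in the challenge download:

```python
from mmsdk import mmdatasdk

# Illustrative file names; substitute the .csd files from your download.
recipe = {
    'COVAREP': 'cmumosei/CMU_MOSEI_COVAREP.csd',
    'labels': 'cmumosei/CMU_MOSEI_Labels.csd',
}
dataset = mmdatasdk.mmdataset(recipe)

# Each computational sequence maps segment ids to feature arrays.
covarep = dataset.computational_sequences['COVAREP'].data
```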

Updates

Currently, only the IEMOCAP acoustic data converges consistently when trained with DANN.

Using DANN as a feature extractor for acoustic features improves cross-corpus results over testing without domain adaptation.
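
As a rough sketch of what "DANN as feature extraction" means here (all names below are hypothetical stand-ins, not this repository's API):

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the feature-extractor half of a trained
# DANN model (everything before the gradient reversal layer).
feature_extractor = nn.Sequential(
    nn.Linear(74, 128),  # 74 assumes COVAREP acoustic features
    nn.ReLU(),
)

# Dummy batch of aligned acoustic features for one corpus.
acoustic_batch = torch.randn(32, 74)

feature_extractor.eval()
with torch.no_grad():
    # Domain-invariant features, passed on to the downstream emotion
    # classifier (e.g. the Factorized Multimodal Transformer).
    adapted_features = feature_extractor(acoustic_batch)
```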

Results will be added soon!
