This work was done as part of a seminar in the course “Bio-Inspired Artificial Intelligence” during my master's studies.
Using the WESAD dataset for wearable stress and affect detection, we compared different methods of multi-modal fusion in neural networks. Results for the Gated Multimodal Unit (GMU), a linear sum, and concatenation are presented and compared against AdaBoost as a baseline. In our experiment, all neural fusion methods achieved similar performance, while substantially outperforming the AdaBoost baseline.
To find out whether the GMU behaves differently when random noise is introduced, we carried out a second experiment in which noise was applied to one modality at inference time. Consequently, the performance of all methods dropped, but, surprisingly, the models using the gated fusion module deteriorated the most, resulting in below chance-level accuracy.
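To make the gated fusion mechanism concrete, the following is a minimal sketch of a two-modality GMU in the style of Arevalo et al.: each modality is projected through a tanh layer, and a sigmoid gate computed from both inputs interpolates between the two projections. This is an illustration with random toy weights, not the implementation used in our experiments; all dimensions and weight names are assumptions.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def matvec(W, x):
    # simple matrix-vector product over nested lists
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def gmu_fuse(x1, x2, W1, W2, Wz):
    """Gated Multimodal Unit for two modalities:
    h_i = tanh(W_i x_i); z = sigmoid(W_z [x1; x2]); h = z*h1 + (1-z)*h2."""
    h1 = [math.tanh(v) for v in matvec(W1, x1)]
    h2 = [math.tanh(v) for v in matvec(W2, x2)]
    z = [sigmoid(v) for v in matvec(Wz, x1 + x2)]  # gate from both inputs
    return [zi * a + (1.0 - zi) * b for zi, a, b in zip(z, h1, h2)]

# toy setup: two 3-d modality features fused into a 2-d representation
d_in, d_out = 3, 2
rand_mat = lambda r, c: [[random.uniform(-1, 1) for _ in range(c)] for _ in range(r)]
W1, W2 = rand_mat(d_out, d_in), rand_mat(d_out, d_in)
Wz = rand_mat(d_out, 2 * d_in)

fused = gmu_fuse([0.1, -0.4, 0.7], [0.3, 0.2, -0.5], W1, W2, Wz)
print(len(fused))  # 2-d fused feature
```

Because each fused coordinate is a gate-weighted convex combination of two tanh activations, every output component stays in (-1, 1); the gate is what lets the model weight one modality over the other per example, which is also the mechanism that can misfire when one modality is corrupted by noise.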