Research Article
Real-Time Human-Music Emotional Interaction Based on Deep Learning and Multimodal Sentiment Analysis
Figure 9
The Extended Cohn-Kanade (CK+) dataset (a) and the bimodal dataset (b). The CK+ images were preprocessed with face extraction, grayscale conversion, and equal-scale cropping. The weight assigned to each modality in decision-level fusion is determined by individually pairing each emotion text with the corresponding colour face image in the bimodal emotion dataset constructed for this study.
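The decision-level fusion described in the caption can be sketched as a weighted combination of the per-class probability vectors produced by the facial and textual classifiers. A minimal illustration follows; the function name, the example weight of 0.6, and the three-class probability vectors are all illustrative assumptions, not values reported by the paper.

```python
def fuse_decisions(p_face, p_text, w_face=0.6):
    """Decision-level fusion of two per-class probability vectors.

    p_face, p_text : probabilities from the face and text classifiers
    w_face         : weight of the facial modality (hypothetical value);
                     the text modality receives 1 - w_face.
    """
    # Weighted sum of the two modality-level predictions, class by class.
    fused = [w_face * f + (1.0 - w_face) * t for f, t in zip(p_face, p_text)]
    # Renormalise so the fused scores form a valid probability distribution.
    total = sum(fused)
    return [x / total for x in fused]


# Example: face model favours class 0, text model favours class 1.
fused = fuse_decisions([0.7, 0.2, 0.1], [0.1, 0.8, 0.1], w_face=0.5)
```

With equal weights the text evidence tips the fused decision toward class 1; raising `w_face` lets the facial modality dominate instead, which is the role the per-pair weighting in the caption plays.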