Research Article

Multimodal Sentiment Analysis Based on Cross-Modal Attention and Gated Cyclic Hierarchical Fusion Networks

Table 4

Performance comparison of different cross-modal models on the MOSI and MOSEI datasets.

| Model | MOSI MAE (↓) | MOSI Corr (↑) | MOSI Acc-2 (↑) | MOSI F1-score (↑) | MOSEI MAE (↓) | MOSEI Corr (↑) | MOSEI Acc-2 (↑) | MOSEI F1-score (↑) |
|---|---|---|---|---|---|---|---|---|
| — | 1.442 | 0.210 | 53.81 | 46.48 | 1.315 | 0.197 | 60.13/61.25 | 60.48/59.38 |
| — | 1.321 | 0.233 | 62.33 | 57.85/58.94 | 1.244 | 0.182 | 58.48/59.47 | 61.63/61.48 |
| — | 0.896 | 0.393 | 68.52 | 64.44/65.38 | 0.815 | 0.213 | 64.15/63.18 | 64.85/64.25 |
| — | 0.901 | 0.384 | 71.49 | 69.12/67.20 | 0.843 | 0.241 | 63.48/63.14 | 63.54/63.89 |
| — | 0.976 | 0.223 | 73.62/71.40 | 71.04/64.31 | 0.821 | 0.213 | 61.84/62.37 | 61.23/61.66 |
| — | 0.957 | 0.381 | 74.22/72.67 | 71.04/65.86 | 0.784 | 0.230 | 63.24/62.56 | 61.05/60.72 |
| — | 0.819 | 0.486 | 76.80/76.01 | 75.72/74.84 | 0.763 | 0.361 | 72.18/72.56 | 74.37/74.03 |
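As context for reading the table (a sketch, not taken from the paper): on CMU-MOSI/MOSEI, each utterance carries a continuous sentiment score in [-3, 3], and paired values such as 76.80/76.01 conventionally report two binary evaluation protocols, negative vs. non-negative and negative vs. positive. A minimal NumPy sketch of how the four metrics are typically computed follows; the function name and details are illustrative assumptions, not the authors' evaluation code.

```python
import numpy as np

def mosi_metrics(y_true, y_pred):
    """Illustrative MOSI/MOSEI metrics on continuous labels in [-3, 3]."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)

    mae = np.mean(np.abs(y_true - y_pred))    # MAE (lower is better)
    corr = np.corrcoef(y_true, y_pred)[0, 1]  # Pearson correlation

    # Protocol 1: negative vs. non-negative (all samples, threshold at 0).
    nn_true, nn_pred = y_true >= 0, y_pred >= 0
    acc2_nonneg = np.mean(nn_true == nn_pred)

    # Protocol 2: negative vs. positive (exactly-neutral labels dropped).
    mask = y_true != 0
    pos_true, pos_pred = y_true[mask] > 0, y_pred[mask] > 0
    acc2_pos = np.mean(pos_true == pos_pred)

    def f1(t, p):
        # Binary F1 from scratch: harmonic mean of precision and recall.
        tp = np.sum(t & p)
        fp = np.sum(~t & p)
        fn = np.sum(t & ~p)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        return 2 * prec * rec / (prec + rec) if prec + rec else 0.0

    return {"MAE": mae, "Corr": corr,
            "Acc-2": (acc2_nonneg, acc2_pos),
            "F1": (f1(nn_true, nn_pred), f1(pos_true, pos_pred))}
```

Reporting both binary protocols is why some cells contain a single number (one protocol reported) and others contain a pair.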