Research Article
Integrating BERT Embeddings and BiLSTM for Emotion Analysis of Dialogue
Table 2
Comparison of the computational cost (GFLOPs) and model size of all models.
| Models | GFLOPs | Model size (# params) (M) |
| --- | --- | --- |
| BERT-BiLSTM-CNN | 583.4 | 115.72 |
| BERT-BiGRU-CNN | 594.7 | 117.50 |
| BERT-BiLSTM-attention | 589.0 | 116.58 |
| BERT-(BiLSTM + CNN) | 594.7 | 117.50 |
| BERT-(BiLSTM-attention + CNN) | 594.7 | 117.50 |
| BiBERT-BiLSTM | 1132.6 | 116.58 |
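The parameter counts in Table 2 are dominated by the shared BERT-base encoder (roughly 110M parameters); the recurrent heads account for the remaining few million, which is why the variants differ by only 1-2M. As a rough illustration (our own accounting, not taken from the paper), the parameter count of a single bidirectional LSTM layer can be computed analytically; the sizes used below (768-dimensional BERT outputs feeding a 384-unit BiLSTM) are assumptions for the example:

```python
def bilstm_params(input_size: int, hidden_size: int) -> int:
    """Parameter count of one bidirectional LSTM layer.

    Per direction, an LSTM has 4 gates, each with an
    input-to-hidden weight (hidden x input), a hidden-to-hidden
    weight (hidden x hidden), and two bias vectors (following the
    common PyTorch convention of separate b_ih and b_hh biases).
    """
    weights = 4 * hidden_size * (input_size + hidden_size)
    biases = 2 * 4 * hidden_size
    return 2 * (weights + biases)  # two directions


# Hypothetical head on top of BERT-base: 768-dim inputs, 384 hidden units.
print(bilstm_params(768, 384))  # ~3.5M parameters for the BiLSTM alone
```

Adding a BiLSTM of this size to BERT-base's ~110M parameters lands in the same ballpark as the 115-118M totals reported in the table, with the CNN or attention components contributing the rest.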