| Number | Network model | Hyperparameters |
|---|---|---|
| Model #1 | F3SNet | Embedding layer dimension = 100, word-LSTM hidden units = 100, sentence-LSTM hidden units = 50, dropout = 0.5, recurrent dropout = 0.5, batch size = 128, epochs = 50 |
| Model #2 | Embedding + LSTM + Self_Attention + Dense | Embedding layer dimension = 100, LSTM hidden units = 100, dropout = 0.5, recurrent dropout = 0.5, batch size = 128, epochs = 50 |
| Model #3 | Embedding + Self_Attention + Self_Attention + Dense | Embedding layer dimension = 100, dropout = 0.5, batch size = 128, epochs = 50 |
| Model #4 | LSTM + Self_Attention + BiLSTM + Self_Attention + Dense | Word-LSTM hidden units = 100, sentence-LSTM hidden units = 100, dropout = 0.5, recurrent dropout = 0.5, batch size = 128, epochs = 50 |
| Model #5 | Embedding + Multi-head Attention + Dense ([22]) | Embedding layer dimension = 100, heads = 8, head size = 32, dropout = 0.5, batch size = 128, epochs = 50 |
| Model #6 | LSTM + LSTM + Dense ([20]) | First-LSTM hidden units = 50, second-LSTM hidden units = 50, batch size = 128, epochs = 50 |
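To make the table concrete, the following is a minimal Keras sketch of Model #2 (Embedding + LSTM + Self_Attention + Dense) using the listed hyperparameters. The vocabulary size, sequence length, and number of output classes are not given in the table, so the values below (`vocab_size`, `seq_len`, `num_classes`) are placeholder assumptions; the self-attention step is realized here with Keras's dot-product `Attention` layer applied with query = value, which is one common way to implement it.

```python
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

# Assumed values -- not specified in the table above.
vocab_size = 20000   # size of the tokenizer vocabulary (assumption)
seq_len = 200        # padded input sequence length (assumption)
num_classes = 2      # number of output classes (assumption)

inputs = keras.Input(shape=(seq_len,))
# Embedding layer dimension = 100 (from the table).
x = layers.Embedding(vocab_size, 100)(inputs)
# LSTM hidden units = 100, dropout = 0.5, recurrent dropout = 0.5 (from the table).
x = layers.LSTM(100, dropout=0.5, recurrent_dropout=0.5, return_sequences=True)(x)
# Self-attention: dot-product attention with the sequence attending to itself.
x = layers.Attention()([x, x])
# Pool the attended sequence before the final Dense classifier.
x = layers.GlobalAveragePooling1D()(x)
outputs = layers.Dense(num_classes, activation="softmax")(x)

model = keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```

Training would then use the remaining table entries, e.g. `model.fit(X, y, batch_size=128, epochs=50)`.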
|