| Study | Feature vector construction | Techniques used | Key findings |
|---|---|---|---|
| Huang et al. [11] | Bag of words, Word2vec, TF-IDF | SVM, RF, GBDT, XGBoost | GBDT achieves the best classification performance |
| Barrientos et al. [26] | Bag of words, Word2vec, TF-IDF | SVM, LR, KNN, RF | The best result is obtained by combining the TF-IDF text encoder with a linear-kernel SVM classifier |
| Zahoor et al. [27] | TF-IDF | NB, SVM, LR, RF | RF achieves the highest accuracy in sentiment analysis |
| Yang et al. [28] | Word2vec | XGBoost | The XGBoost model predicts emotional polarity with an accuracy of 0.896 |
| Anisha et al. [29] | TF-IDF, bag of words | SVM, RF, NB, LR, RNN, LSTM, BiLSTM, CNN | The LSTM method achieves the highest validation accuracy |
| Liu et al. [30] | Word2vec, BERT | SVM, CNN, LSTM, BiLSTM | The BiLSTM model shows a larger improvement in F1 score than the other models |
| Duan et al. [31] | BERT | Dict-BERT | The Dict-BERT model outperforms the BERT-only model, especially when the training set is relatively small |
| Zeng et al. [32] | Word2vec | BiLSTM, SVM, RF, XGBoost, LSTM | The BiLSTM model achieves good results in both F1 score and accuracy |
| Wu et al. [33] | BERT | CNN, RNN, FastText, RCNN | On top of BERT, an RCNN combined with an attention mechanism extracts contextual features from review text, improving classification accuracy |
| Li et al. [34] | BERT | CNN, BiLSTM | BiLSTM captures the connections between words and their semantics |
| Maslej-Krešňáková et al. [35] | TF-IDF, pretrained word embeddings | FFNN, CNN, LSTM, GRU, BiLSTM | The combined BiLSTM + CNN architecture achieves high classification accuracy |
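
Most of the surveyed studies follow the same two-stage pattern: encode the review text into feature vectors, then train a classifier on those vectors. As a concrete illustration, the following is a minimal sketch of the TF-IDF plus linear-kernel SVM combination that Barrientos et al. [26] report as their best configuration, built here with scikit-learn; the example texts, labels, and parameter settings are illustrative assumptions and are not taken from any of the cited studies.

```python
# Minimal sketch of a TF-IDF + linear-kernel SVM sentiment pipeline,
# illustrating the generic "feature vectors -> classifier" setup in the table.
# The corpus and labels below are placeholders, not data from any cited study.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import SVC
from sklearn.pipeline import Pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Placeholder review texts with binary sentiment labels (1 = positive, 0 = negative).
texts = [
    "great product, works as described",
    "terrible quality, broke after one day",
    "fast shipping and excellent service",
    "waste of money, do not recommend",
]
labels = [1, 0, 1, 0]

# Build feature vectors with TF-IDF and classify with a linear-kernel SVM.
model = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2), min_df=1)),
    ("svm", SVC(kernel="linear")),
])

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.5, random_state=0, stratify=labels
)
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```

The deep-learning entries in the table (LSTM, BiLSTM, BERT-based models) replace both stages of this pipeline, learning the text representation and the classifier jointly rather than encoding features separately.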
|
|