Research Article
Self-Information Loss Compensation Learning for Machine-Generated Text Detection
Table 3
Comparison of results in F1 score and accuracy across sequence lengths.
| Models | F1 (64) | ACC (64) | F1 (128) | ACC (128) | F1 (256) | ACC (256) | F1 (512) | ACC (512) |
|---|---|---|---|---|---|---|---|---|
| FastText | 0.7775 | 0.8131 | 0.7861 | 0.8200 | 0.7853 | 0.8169 | 0.7822 | 0.8125 |
| TextCNN | 0.7811 | 0.8067 | 0.7848 | 0.8131 | 0.7905 | 0.8183 | 0.7858 | 0.8128 |
| LSTM | 0.8414 | 0.8544 | 0.7907 | 0.8219 | 0.5923 | 0.6811 | 0.0044 | 0.5003 |
| RCNN | 0.8570 | 0.8650 | 0.8982 | 0.9033 | 0.8977 | 0.9036 | 0.9021 | 0.9061 |
| LSTM + Attention | 0.8547 | 0.8633 | 0.8841 | 0.8872 | 0.8770 | 0.8819 | 0.8844 | 0.8889 |
| Our model | 0.8731 | 0.8792 | 0.9072 | 0.9103 | 0.9003 | 0.9050 | 0.9131 | 0.9158 |
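For reference, the F1 score and accuracy (ACC) reported in Table 3 are standard binary-classification metrics. Below is a minimal sketch of how they might be computed for the human/machine detection task, assuming scikit-learn; the label arrays and the label convention (1 = machine-generated) are hypothetical placeholders, not the paper's data.

```python
# Minimal sketch: computing F1 score and accuracy for binary
# machine-generated-text detection. Labels here are illustrative only.
from sklearn.metrics import f1_score, accuracy_score

y_true = [1, 0, 1, 1, 0, 1]  # assumed convention: 1 = machine-generated, 0 = human-written
y_pred = [1, 0, 1, 0, 0, 1]  # a model's predictions on the same examples

print(f"F1 score: {f1_score(y_true, y_pred):.4f}")
print(f"ACC:      {accuracy_score(y_true, y_pred):.4f}")
```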