Natural Language Processing Algorithms for Normalizing Expressions of Synonymous Symptoms in Traditional Chinese Medicine
Table 7
Comparison of the BERT-Classification model with other models.

| Model | Accuracy | Precision | Recall | F1-score |
|---|---|---|---|---|
| Jaccard similarity | 0.49188 | 0.65251 | 0.54722 | 0.54317 |
| Word2Vec with cosine | 0.6424 ± 0.0019 | 0.7365 ± 0.0093 | 0.6906 ± 0.0036 | 0.6724 ± 0.0047 |
| DNorm | 0.8572 ± 0.0050 | 0.8694 ± 0.0087 | 0.8602 ± 0.0072 | 0.8555 ± 0.0061 |
| Transition-based model | 0.7980 ± 0.0056 | 0.8256 ± 0.0081 | 0.7970 ± 0.0051 | 0.7937 ± 0.0050 |
| RNN-CNNs-CRF | 0.8852 ± 0.0036 | 0.8755 ± 0.0035 | 0.8724 ± 0.0032 | 0.8645 ± 0.0034 |
| BERT-based ranking | 0.9264 ± 0.0057 | 0.9413 ± 0.0056 | 0.9321 ± 0.0072 | 0.9313 ± 0.0065 |
| BERT-Classification | 0.9300 ± 0.0019 | 0.9473 ± 0.0023 | 0.9380 ± 0.0021 | 0.9378 ± 0.0021 |
Note. The test results are expressed as mean ± SD. Each model was run 10 times, except for Jaccard similarity. , compared with BERT-Classification.
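The weakest baseline in Table 7 matches a raw symptom expression to a standard term by set overlap. As a minimal sketch (not the paper's implementation; the function names and toy term list are illustrative), a character-level Jaccard-similarity normalizer can be written as:

```python
def jaccard(a: str, b: str) -> float:
    """Jaccard similarity between the character sets of two strings:
    |A ∩ B| / |A ∪ B|."""
    sa, sb = set(a), set(b)
    if not sa and not sb:
        return 1.0  # both empty: treat as identical
    return len(sa & sb) / len(sa | sb)


def normalize(expression: str, standard_terms: list[str]) -> str:
    """Map a raw symptom expression to the standard term with the
    highest character-level Jaccard similarity."""
    return max(standard_terms, key=lambda t: jaccard(expression, t))


# Toy vocabulary of standard symptom terms (illustrative only)
terms = ["头痛", "头晕", "腹痛"]
print(normalize("头部疼痛", terms))  # → 头痛
```

Because this baseline ignores word order and semantics, synonymous expressions that share few characters with their standard term are missed, which is consistent with its much lower scores than the embedding- and BERT-based models in the table.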