Research Article

Natural Language Processing Algorithms for Normalizing Expressions of Synonymous Symptoms in Traditional Chinese Medicine

Table 7

Comparison of the BERT-Classification model with other models.

Model                    Accuracy           Precision          Recall             F1-score
Jaccard similarity       0.49188            0.65251            0.54722            0.54317
Word2Vec with cosine     0.6424 ± 0.0019    0.7365 ± 0.0093    0.6906 ± 0.0036    0.6724 ± 0.0047
DNorm                    0.8572 ± 0.0050    0.8694 ± 0.0087    0.8602 ± 0.0072    0.8555 ± 0.0061
Transition-based model   0.7980 ± 0.0056    0.8256 ± 0.0081    0.7970 ± 0.0051    0.7937 ± 0.0050
RNN-CNNs-CRF             0.8852 ± 0.0036    0.8755 ± 0.0035    0.8724 ± 0.0032    0.8645 ± 0.0034
BERT-based ranking       0.9264 ± 0.0057    0.9413 ± 0.0056    0.9321 ± 0.0072    0.9313 ± 0.0065
BERT-Classification      0.9300 ± 0.0019    0.9473 ± 0.0023    0.9380 ± 0.0021    0.9378 ± 0.0021

Note. Test results are expressed as mean ± SD. Each model was run 10 times, except for Jaccard similarity, which is deterministic and reported from a single run.
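The following is a minimal sketch of how the mean ± SD figures in Table 7 could be aggregated from repeated evaluation runs. It is illustrative only: the macro-averaging of precision, recall, and F1-score, the use of the sample standard deviation, and the toy labels are assumptions not stated in this section.

```python
# Hedged sketch: aggregating per-run metrics into mean ± SD, as in Table 7.
# Assumptions (not stated in the article): macro averaging over normalized
# symptom classes, sample SD (ddof=1), one (y_true, y_pred) pair per run.
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def summarize_runs(runs):
    """runs: list of (y_true, y_pred) label sequences, one per repetition."""
    rows = []
    for y_true, y_pred in runs:
        acc = accuracy_score(y_true, y_pred)
        p, r, f1, _ = precision_recall_fscore_support(
            y_true, y_pred, average="macro", zero_division=0
        )
        rows.append((acc, p, r, f1))
    rows = np.asarray(rows)
    mean, sd = rows.mean(axis=0), rows.std(axis=0, ddof=1)
    return {
        name: f"{m:.4f} ± {s:.4f}"
        for name, m, s in zip(
            ["Accuracy", "Precision", "Recall", "F1-score"], mean, sd
        )
    }

# Hypothetical example with two toy runs (labels are placeholders).
runs = [
    ([0, 1, 2, 1], [0, 1, 2, 2]),
    ([0, 1, 2, 1], [0, 1, 1, 1]),
]
print(summarize_runs(runs))
```

In practice, each run would correspond to one retraining and evaluation of a model on the test set, and the same aggregation would be applied to every model in the table so that the reported means and standard deviations are directly comparable.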