Research Article

A Multimodel-Based Deep Learning Framework for Short Text Multiclass Classification with the Imbalanced and Extremely Small Data Set

Table 1

Method comparison.

| Model | Precision (%) | Recall (%) | F1-score (%) | Time (min) | Total params | Trainable params | Epochs | Size (MB) |
|---|---|---|---|---|---|---|---|---|
| BERT-BI-GRU | 60 | 54 | 54 | 241 | 109,920,145 | 1,609,873 | 6 | 419 |
| BERT-BI-LSTM | 61 | 53 | 54 | 380 | 109,247,003 | 936,731 | 11 | 417 |
| BERT-CNN | 58 | 51 | 52 | 153 | 108,404,591 | 94,319 | 4 | 413 |
| BERT-CNN-LSTM | 57 | 51 | 52 | 301 | 109,200,743 | 890,471 | 8 | 416 |
| BERT-LSTM-CNN | 52 | 44 | 44 | 189 | 111,918,864 | 3,608,592 | 5 | 427 |
| DistilBERT-BI-GRU | 59 | 54 | 54 | 85 | 66,800,785 | 1,609,873 | 5 | 261 |
| DistilBERT-BI-LSTM | 60 | 54 | 54 | 84 | 66,127,643 | 936,731 | 5 | 252 |
| DistilBERT-CNN | 56 | 53 | 53 | 69 | 65,285,231 | 94,319 | 4 | 249 |
| DistilBERT-CNN-LSTM | 59 | 50 | 51 | 132 | 66,081,383 | 890,471 | 7 | 252 |
| DistilBERT-LSTM-CNN | 56 | 46 | 48 | 75 | 68,799,504 | 3,608,592 | 4 | 268 |