[Retracted] Gradient Descent Optimization in Deep Learning Model Training Based on Multistage and Method Combination Strategy

Table 10

Performance of the proposed combination strategy with Adam as the first-stage optimizer.

Method                    ResNet-20 on CIFAR-10          LSTM on IMDB
                          Val-loss     Val-acc           Val-loss     Val-acc
Adam + SGD                0.6088*      0.8494*           0.9167       0.8135
Adam + (SGD + M)          0.6582       0.8335            1.0421       0.8156*
Adam + (SGD + d)          0.6108       0.8451            0.9032*      0.8140
Adam + (SGD + M + d)      0.7453       0.8093            1.1045       0.8150
Adam + RMSprop            0.6929       0.8304            1.1457       0.8089
Adam + (RMSprop + d)      0.8948       0.7816            1.1166       0.8038
Adam + Adam               0.8138       0.7999            1.2044       0.8060
Adam + (Adam + d)         1.1411       0.7164            1.3086       0.8089

Values marked with an asterisk (*) are the best results in each column (lowest validation loss, highest validation accuracy); these were shown in bold in the original table.
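
The row labels describe two-stage runs: training starts with Adam and then switches to the second optimizer listed. The sketch below is not the authors' code; it is a minimal PyTorch illustration of such a switch, under the assumption that "M" denotes adding momentum and "d" denotes adding a learning-rate decay schedule to the second-stage optimizer. The switch epoch and all hyperparameter values are placeholders, not values reported in the paper.

# Minimal two-stage training sketch: stage 1 uses Adam, stage 2 hands the same
# model to SGD, optionally with momentum ("M") and/or a decay schedule ("d").
# Switch epoch, learning rates, momentum, and decay settings are illustrative.
import torch


def train_two_stage(model, train_loader, loss_fn, epochs=100, switch_epoch=50,
                    use_momentum=False, use_decay=False):
    # Stage 1: Adam with a typical default learning rate.
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    scheduler = None

    for epoch in range(epochs):
        if epoch == switch_epoch:
            # Stage 2: switch to SGD on the same parameters; Adam's internal
            # state (moment estimates) is simply discarded at this point.
            optimizer = torch.optim.SGD(
                model.parameters(),
                lr=1e-2,
                momentum=0.9 if use_momentum else 0.0,  # the "+ M" variant
            )
            if use_decay:  # the "+ d" variant
                scheduler = torch.optim.lr_scheduler.StepLR(
                    optimizer, step_size=10, gamma=0.5
                )

        for inputs, targets in train_loader:
            optimizer.zero_grad()
            loss = loss_fn(model(inputs), targets)
            loss.backward()
            optimizer.step()

        if scheduler is not None:
            scheduler.step()

    return model

Only the optimizer is replaced at the switch point; the model parameters carry over unchanged, which is what makes the combination a single multistage training run rather than two separate ones.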