Research Article

[Retracted] Gradient Descent Optimization in Deep Learning Model Training Based on Multistage and Method Combination Strategy

Figure 7

Combined strategy versus noncombined strategy (1). Combined strategies, in which SGD with Momentum, Adam, and RMSprop are each paired with cosine decay, versus the plain SGD with Momentum, Adam, and RMSprop. (a, d) Comparison results for SGD with Momentum with and without cosine decay; (b, e) comparison results for Adam with and without cosine decay; (c, f) comparison results for RMSprop with and without cosine decay.
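The caption above compares optimizers run with and without a cosine-decay learning-rate schedule. As a minimal sketch of what such a combination looks like (the exact hyperparameters here, such as `lr_max=0.1` and `momentum=0.9`, are illustrative assumptions, not the paper's settings), the standard cosine decay formula can be paired with SGD with Momentum like this:

```python
import math

def cosine_decay_lr(step, total_steps, lr_max=0.1, lr_min=0.0):
    # Standard cosine annealing: lr goes from lr_max at step 0
    # down to lr_min at step total_steps along a half cosine wave.
    t = min(step, total_steps)
    return lr_min + 0.5 * (lr_max - lr_min) * (1 + math.cos(math.pi * t / total_steps))

def sgd_momentum_step(params, grads, velocity, lr, momentum=0.9):
    # One SGD-with-momentum update using the scheduled learning rate.
    for i, g in enumerate(grads):
        velocity[i] = momentum * velocity[i] - lr * g
        params[i] += velocity[i]
    return params, velocity

# Toy example: minimize f(x) = x^2 with SGD+Momentum under cosine decay.
params, velocity = [5.0], [0.0]
total_steps = 100
for step in range(total_steps):
    grads = [2 * params[0]]              # gradient of x^2
    lr = cosine_decay_lr(step, total_steps)
    params, velocity = sgd_momentum_step(params, grads, velocity, lr)
```

The same schedule can be plugged into Adam or RMSprop unchanged, since cosine decay only rescales the step size; the "noncombined" baselines in the figure correspond to holding `lr` fixed at `lr_max` throughout training.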