Research Article

Sparse-Coding-Based Autoencoder and Its Application for Cancer Survivability Prediction

Table 6

Parameters for various training algorithms.

Algorithm | Training parameters

SAE | (i) A fully connected three-layer structure with 64 hidden neurons

SSAE | (i) A fully connected three-layer structure with 300 hidden neurons
(ii) Sparsity parameter: 0.001

MG-RNN-AE | (i) A fully connected three-layer structure with 10 hidden neurons
(ii) Regularization parameters: 0.9 and 0.1

DAE | (i) A fully connected three-layer structure with 300 hidden neurons
(ii) Regularization parameter: 0.6

RF | (i) The maximum depth per tree: 5
(ii) The number of trees: 7
(iii) The percentage of features used per tree:

SVM | (i) Regularization parameter: 1
(ii) Kernel function: Radial Basis Function (RBF)
(iii) Kernel coefficient: 0.01
(iv) Tolerance for stopping criterion: 1e-3
(v) Maximum number of iterations: 500

ANN | (i) A fully connected three-layer structure with 64 hidden neurons
(ii) The RPROP training algorithm with a maximum of 500 iterations
(iii) Sigmoid activation function
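
For concreteness, the autoencoder settings in Table 6 can be instantiated roughly as in the minimal sketch below, shown for the SSAE row. Keras, the sigmoid activations, and the use of an L1 activity penalty as the sparsity mechanism (with weight 0.001) are assumptions for illustration; the table does not specify the implementation, the activation functions, the exact sparsity formulation, or the input dimensionality (n_features is a placeholder).

```python
# Minimal SSAE sketch based on Table 6: a fully connected three-layer
# structure (input, 300 hidden neurons, reconstruction) with a sparsity
# penalty of 0.001. Keras, the sigmoid activations, and the L1 form of the
# sparsity term are assumptions, not details taken from the paper.
from tensorflow import keras
from tensorflow.keras import layers, regularizers

n_features = 64  # placeholder input dimensionality; depends on the dataset

ssae = keras.Sequential([
    keras.Input(shape=(n_features,)),
    layers.Dense(300, activation="sigmoid",                     # 300 hidden neurons
                 activity_regularizer=regularizers.l1(0.001)),  # sparsity parameter
    layers.Dense(n_features, activation="sigmoid"),             # reconstruction layer
])
ssae.compile(optimizer="adam", loss="mse")
```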
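
The RF and SVM baseline settings map almost directly onto common library arguments. The sketch below shows one way to configure them, assuming scikit-learn (the table does not name the library); since the per-tree feature percentage for RF is not reported, it is left at the library default.

```python
# Baseline classifiers configured with the RF and SVM settings from Table 6.
# scikit-learn is an assumption; the paper does not state which library was used.
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

rf = RandomForestClassifier(
    n_estimators=7,   # (ii) number of trees
    max_depth=5,      # (i) maximum depth per tree
    # (iii) the percentage of features used per tree is not given in the table,
    # so max_features keeps its library default here.
)

svm = SVC(
    C=1.0,            # (i) regularization parameter
    kernel="rbf",     # (ii) radial basis function kernel
    gamma=0.01,       # (iii) kernel coefficient
    tol=1e-3,         # (iv) tolerance for stopping criterion
    max_iter=500,     # (v) maximum number of iterations
)
```

Either model can then be trained with its fit(X_train, y_train) method and evaluated on the held-out survivability labels.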