Research Article
Integrating Feature Engineering with Deep Learning to Conduct Diagnostic and Predictive Analytics for Turbofan Engines
Table 4
Adjustable hyperparameters in the machine learning and deep learning models.
| Model | Adjustable hyperparameters | Range |
| --- | --- | --- |
| MARS | Degree of nonlinearity | 1–3 |
| | Penalty | 1–30 |
| RF | Number of trees | 500–1000 |
| | Maximal depth | 4–36 |
| | Maximal features | 0.5–0.9 |
| | Minimal samples split | 4–96 |
| | Minimal samples leaf | 4–72 |
| XGB | Number of trees | 500–1000 |
| | Maximal depth | 4–36 |
| | Minimum child weight | 1–5 |
| | Gamma | 0–0.3 |
| | Subsample | 0.6–1 |
| | Column sample by tree | 0.6–1 |
| | Learning rate | 0.0025–0.005 |
| SVM | Kernel | RBF, sigmoid |
| | Cost penalty | 1–50 |
| | Gamma | 0.01–1 |
| | Epsilon | 0.1–1 |
| DNN, RNN, LSTM, GRU, CNN (* denotes additional parameters required only for CNN) | Hidden layers | 1–4 |
| | Neurons | 10–200 |
| | Dropout | 0.1–0.5 |
| | Activation | ReLU, tanh, softplus |
| | Optimizer | Adam, RMSprop |
| | Learning rate | 0.0001, 0.001, 0.01 |
| | Epochs | 500 |
| | Batch size | 32, 64, 128, 256, 512 |
| | Convolution layers* | 1–2 |
| | Pooling layers* | 1–2 |
| | Filters* | 16, 32, 64, 128 |
| | Fully connected layers* | 1–4 |
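As an illustration of how search spaces like those in Table 4 can be encoded, the sketch below expresses the RF row as a randomized hyperparameter search with scikit-learn. The dataset, search budget (`n_iter`), and cross-validation setup are illustrative placeholders, not taken from the paper, and the paper's own tuning procedure may differ.

```python
# Minimal sketch: the RF search ranges from Table 4 as a
# RandomizedSearchCV space. Data and n_iter are placeholders.
import numpy as np
from scipy.stats import randint, uniform
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import RandomizedSearchCV

# Search space mirroring the RF row of Table 4.
rf_space = {
    "n_estimators": randint(500, 1001),    # number of trees: 500-1000
    "max_depth": randint(4, 37),           # maximal depth: 4-36
    "max_features": uniform(0.5, 0.4),     # maximal features: 0.5-0.9
    "min_samples_split": randint(4, 97),   # minimal samples split: 4-96
    "min_samples_leaf": randint(4, 73),    # minimal samples leaf: 4-72
}

# Placeholder regression data standing in for engine sensor features.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = X @ rng.normal(size=5) + rng.normal(scale=0.1, size=200)

search = RandomizedSearchCV(
    RandomForestRegressor(random_state=0),
    rf_space, n_iter=5, cv=3, random_state=0,
)
search.fit(X, y)
print(search.best_params_)
```

Each sampled configuration stays within the ranges of Table 4; the analogous XGB and SVM rows can be encoded the same way with their respective parameter names.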