| Machine learning | Main adjustable hyper-parameters |
| --- | --- |
| MARS | Degree of nonlinearity (number of knots), cost penalty |
| RF, XGB | Minimum number of samples to split a node, maximum depth, number of trees, learning rate, split ratio between training and testing data |
| SVM | Cost penalty (C), spread of the Gaussian kernel (gamma), width of the tolerance band (epsilon) |
| CNN, DNN, RNN, LSTM, GRU | Number of hidden layers, number of neurons, activation functions, optimizers, drop-out rates, learning rates, dense layers, initializers |
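As an illustration of how the table's hyper-parameters are tuned in practice, the sketch below runs a grid search over the three SVM parameters listed (cost penalty C, Gaussian-kernel spread gamma, and tolerance width epsilon) using scikit-learn. The dataset, grid values, and cross-validation setup are illustrative assumptions, not taken from the source.

```python
from sklearn.datasets import make_regression
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVR

# Tiny synthetic regression problem (illustrative only).
X, y = make_regression(n_samples=100, n_features=4, noise=0.1, random_state=0)

# Candidate values for the SVM hyper-parameters from the table:
# cost penalty (C), Gaussian-kernel spread (gamma), tolerance width (epsilon).
param_grid = {
    "C": [0.1, 1.0, 10.0],
    "gamma": ["scale", 0.01, 0.1],
    "epsilon": [0.01, 0.1, 0.5],
}

# Exhaustive search with 3-fold cross-validation.
search = GridSearchCV(SVR(kernel="rbf"), param_grid, cv=3)
search.fit(X, y)
print(search.best_params_)
```

The same pattern applies to the other rows: swap in `RandomForestRegressor` or `XGBRegressor` and a grid over `min_samples_split`, `max_depth`, `n_estimators`, and `learning_rate`.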