| Algorithm used | Depth, layer sizes, training time, and testing time | Dataset | Accuracy | Variation from experimental value | Ref. |
| --- | --- | --- | --- | --- | --- |
| ANN | Feed-forward backpropagation network; 1 hidden layer with 7 neurons (3-7-1 architecture; sketched after the table) | Not mentioned | 98.7% | 67.9% MAPE | [94] |
| CNN | (i) Avg pooling layer (ii) Convolution layer (3 × 3; 32 filters) (iii) Avg pooling layer (iv) Convolution layer (3 × 3; 16 filters) (v) Flattening layer (vi) Fully connected layer (vii) Softmax function (converts confidence values into probabilities) | Time-series thermal images collected by camera under multiple DED process settings | 80% | Not mentioned | [57] |
| RNN | (i) 1–5 layers (ii) 100–500 GRU units (iii) 1–3 fully connected layers | Generated using GAMMA | MSE: 2.97 × 10⁻⁵ after 100 epochs | Not mentioned | [58] |
| LSTM | (i) Input layer (ii) LSTM layer (iii) Fully connected layer (iv) Dropout layer (v) Fully connected layer (vi) Regression layer (sketched after the table) | FEM simulation data and data from artificial-crack experiments | Average absolute prediction error: 2.0 μm | (i) Absolute error for FEM data: 6.88 μm (ii) Average error for artificial-crack data: 7.41 μm | [59] |
| ANN: feed-forward backpropagation | 3 input variables, arranged in an L9 orthogonal array | Self-prepared, pre-processed, and labeled data | R² = 97.08% | Not mentioned | [60] |
| CNN | 2 convolution layers: (i) 10 filters, kernel size 6 × 6; (ii) 20 filters, kernel size 4 × 4; each followed by 2 × 2 max-pooling (yielding 20 × 7 × 7 feature maps) ⟶ fully connected layer (200 nodes) ⟶ ReLU activation ⟶ 50% dropout (sketched after the table) | 100,000 images across both classes; 60,000 training images | Case one: 96.02%; case two: 93.69% | Case one: 24%; case two: 9.6% | [63] |
| Stacked RNN | (i) 1–5 hidden layers (ii) 100–500 GRU units (iii) 1–3 fully connected layers (sketched after the table); training time: 40 h on an Nvidia Quadro P5000 | Built using GAMMA | MSE: 3.17 × 10⁻⁵ | Not mentioned | [58] |
| ANN | Single hidden layer (3-16-4) | Experimental dataset, used to train the network via backpropagation | 90% | 5% | [64] |
| Neural network (I), gradient-boosted decision tree (II), SVM (III), and Gaussian process (IV) | (I) Regression: 283 linear neurons, 210 non-linear, learning rate (LR) 0.000871; classification: 358 linear neurons, 744 non-linear, LR 0.000465; 1 hidden layer. (IV) LR 0.01; max depth 20 | Previously unpublished experimental results from the authors; inputs: powder material, substrate material, spot size, power, mass flow rate, travel velocity | Ranking II > IV > III > I; ensemble regression accuracy 70.5%, ensemble classification accuracy 72.3% | Not mentioned | [66] |
| Hybrid of ANN and genetic algorithm | Not mentioned | Experimentally measured process variables | 84% | 5% | [65] |
| ANN | Hidden-layer configurations tested: 3-1-3, 3-3-3, 3-6-3, 3-7-3, 3-1-1-3, 3-3-3-3, 3-6-6-3, 3-7-7-3 | Prepared from 60 experiments; constraints such as material availability, cost, and experiment time were considered | 82% | 2% | [69] |
| ANN | 1-5-10-1 | 50 groups of data gathered from experiments | Not mentioned | 4% | [70] |
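Several of the tabulated models ([94], [60], [64], [69], [70]) are small feed-forward backpropagation networks. As a point of reference, the following is a minimal PyTorch sketch of the 3-7-1 network reported in [94]; the hidden activation, learning rate, and training-loop details are assumptions, since the table does not report them.

```python
import torch
import torch.nn as nn

# Minimal sketch of a 3-7-1 feed-forward backpropagation network ([94]).
# The tanh activation and SGD settings are assumptions; the table does
# not report them.
model = nn.Sequential(
    nn.Linear(3, 7),   # 3 process inputs -> 7 hidden neurons
    nn.Tanh(),
    nn.Linear(7, 1),   # single regression output
)

optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# One illustrative backpropagation step on dummy data.
x = torch.randn(16, 3)   # batch of 16 samples, 3 input variables
y = torch.randn(16, 1)   # target values
optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()          # backpropagate the prediction error
optimizer.step()
```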
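The CNN of [63] is specified in enough detail to reconstruct its shape. The sketch below assumes single-channel 39 × 39 input images, the size implied by the reported 20 × 7 × 7 feature maps after the second pooling stage; the two-class output head is likewise an assumption, matched to the two-class dataset.

```python
import torch
import torch.nn as nn

# Sketch of the two-layer CNN reported in [63]. The 39 x 39 grayscale
# input size is inferred from the stated 20 x 7 x 7 feature maps; the
# two-class output head is an assumption.
model = nn.Sequential(
    nn.Conv2d(1, 10, kernel_size=6),   # layer 1: 10 filters, 6 x 6
    nn.MaxPool2d(2),                   # 2 x 2 max-pooling
    nn.Conv2d(10, 20, kernel_size=4),  # layer 2: 20 filters, 4 x 4
    nn.MaxPool2d(2),                   # -> 20 x 7 x 7 feature maps
    nn.Flatten(),                      # 20 * 7 * 7 = 980 features
    nn.Linear(980, 200),               # fully connected, 200 nodes
    nn.ReLU(),
    nn.Dropout(0.5),                   # 50% dropout, as reported
    nn.Linear(200, 2),                 # assumed two-class output head
)

x = torch.randn(8, 1, 39, 39)  # batch of 8 dummy images
print(model(x).shape)          # torch.Size([8, 2])
```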
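The recurrent models of [58] span 1–5 stacked GRU layers with 100–500 units feeding 1–3 fully connected layers. Below is a sketch of one point in that range (2 GRU layers of 100 units, a single fully connected head); the input feature count, sequence length, and output dimension are placeholders, since the source does not list them.

```python
import torch
import torch.nn as nn

class StackedGRU(nn.Module):
    """Sketch of the stacked-GRU regressor of [58]: 2 GRU layers of 100
    units and one fully connected head. Input/output sizes are
    placeholders; the source reports only the 1-5 layer, 100-500 unit,
    and 1-3 FC-layer ranges."""

    def __init__(self, n_features=4, hidden=100, n_layers=2, n_out=1):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, num_layers=n_layers,
                          batch_first=True)
        self.head = nn.Linear(hidden, n_out)

    def forward(self, x):
        out, _ = self.gru(x)          # out: (batch, seq_len, hidden)
        return self.head(out[:, -1])  # predict from the last time step

model = StackedGRU()
x = torch.randn(8, 50, 4)   # 8 sequences, 50 time steps, 4 features
loss = nn.MSELoss()(model(x), torch.randn(8, 1))  # MSE, as in [58]
```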
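The LSTM of [59] stacks an LSTM layer, a fully connected layer, dropout, a second fully connected layer, and a regression output. The sketch below follows that layer order; the layer widths, dropout rate, and input dimensions are assumptions.

```python
import torch
import torch.nn as nn

class LSTMRegressor(nn.Module):
    """Sketch of the LSTM regressor of [59]: input -> LSTM -> fully
    connected -> dropout -> fully connected -> regression output.
    Hidden sizes and the dropout rate are assumptions."""

    def __init__(self, n_features=8, hidden=64, fc=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.fc1 = nn.Linear(hidden, fc)
        self.drop = nn.Dropout(0.2)
        self.fc2 = nn.Linear(fc, 1)  # scalar regression output (e.g. μm)

    def forward(self, x):
        out, _ = self.lstm(x)                            # (B, T, hidden)
        h = self.drop(torch.relu(self.fc1(out[:, -1])))  # last time step
        return self.fc2(h)

model = LSTMRegressor()
x = torch.randn(4, 30, 8)   # 4 sequences, 30 time steps, 8 features
print(model(x).shape)       # torch.Size([4, 1])
```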