| DL algorithm | Description | Advantages | Disadvantages | Applications |
| --- | --- | --- | --- | --- |
| MLP | A feed-forward neural network that maps a set of inputs to the corresponding outputs. It is structured as a directed acyclic graph whose nodes are neurons with logistic activation functions. | Can solve complex nonlinear problems with limited data, i.e., fewer parameters. | Model performance depends heavily on the quality of training; longer processing time. | Classification, recognition, business analytics, self-driving, prediction, etc. |
| CNN | A variant of ANN used mostly for image processing and recognition tasks, particularly suited to processing pixel data. | Extracts only the relevant features; achieves high accuracy on image-processing tasks. | Requires enormous training data and high computational cost. | Image, speech, and pattern recognition and processing; video analysis; natural language processing. |
| RNN | An extension of the feed-forward neural network. A variant of ANN that includes loops and memory units to store information; it operates on sequential and time-series data. | Remembers past information; weights are shared across time steps; can be combined with CNNs to extend pixel-neighborhood modeling. | Vanishing gradient problem; difficult to train; slow computation; hard to parallelize during training. | Temporal problems, prediction, machine translation, video captioning, speech recognition, robot control, and so on. |
| LSTM | A type of RNN suited to learning order dependence in time-sequence prediction problems. Like an RNN, it can store information. | Mitigates the vanishing gradient problem; handles a large number of parameters; no limit on input length. | Slow computation; difficulty accessing information from the distant past; not interpretable. | Sequence prediction problems, sentiment analysis, grammar learning, semantic parsing, speech recognition, and so on. |
| DBN | A variant of generative neural network. A DBN is trained with a greedy algorithm, using a layer-by-layer approach to learn top-down generative models. | Uses hidden layers efficiently; learns features through a layered learning approach; works well on unlabelled data; robust in classification. | High runtime complexity; the greedy layer-wise pretraining procedure is computationally expensive. | Image classification, audio classification, speech recognition, natural language processing, language translation, expert systems, decision support systems. |
| AE | A neural network that uses backpropagation for feature learning. It consists of two blocks, i.e., an encoder and a decoder. | Works well for compression and dimensionality-reduction problems; features learned by one autoencoder can be transferred to another problem. | Inefficient at image reconstruction; for complex images the output is blurry. | Clustering, image colorization, feature variation, dimensionality reduction, image denoising, watermark removal. |
| GAN | A DNN architecture that learns from a training dataset and generates new data resembling the original data. | Generates outputs similar to the original data; easy data interpretation; efficient for recognition tasks. | Difficult to train; training can miss modes of the data distribution, causing model (mode) collapse. | Data and image generation, image-to-image conversion, automatic model generation, text-to-image translation, semantic-image-to-photo translation. |
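The MLP row above describes a feed-forward network with logistic activation functions. A minimal NumPy sketch of the forward pass follows; the layer sizes and the randomly initialized (untrained) weights are illustrative assumptions, not part of the original table:

```python
import numpy as np

def sigmoid(z):
    # Logistic activation, as used by the classical MLP.
    return 1.0 / (1.0 + np.exp(-z))

def mlp_forward(x, weights, biases):
    """Feed-forward pass: each layer applies an affine map, then the logistic function."""
    a = x
    for W, b in zip(weights, biases):
        a = sigmoid(W @ a + b)
    return a

# Illustrative 2-4-1 network with fixed random weights (not trained).
rng = np.random.default_rng(0)
weights = [rng.normal(size=(4, 2)), rng.normal(size=(1, 4))]
biases = [np.zeros(4), np.zeros(1)]
y = mlp_forward(np.array([0.5, -0.2]), weights, biases)
print(y.shape)  # (1,)
```

Because every activation is logistic, the final output always lies in (0, 1), which is why MLPs of this form are a natural fit for the classification tasks listed in the table.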
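The RNN row notes loops, memory units, and weights shared across time steps. A minimal NumPy sketch of a vanilla recurrent cell unrolled over a sequence illustrates this; the hidden size, input size, and sequence length are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

# One set of weights is reused at every time step (weight sharing).
W_xh = rng.normal(scale=0.1, size=(5, 3))  # input -> hidden
W_hh = rng.normal(scale=0.1, size=(5, 5))  # hidden -> hidden (the recurrent loop)
b_h = np.zeros(5)

def rnn_forward(xs):
    """Run a vanilla RNN over a sequence; h is the memory carried forward."""
    h = np.zeros(5)
    for x in xs:
        h = np.tanh(W_xh @ x + W_hh @ h + b_h)
    return h

seq = rng.normal(size=(7, 3))  # 7 time steps of 3-dim inputs
h_final = rnn_forward(seq)
print(h_final.shape)  # (5,)
```

The repeated multiplication by `W_hh` in the loop is also the source of the vanishing-gradient problem listed in the table: gradients flowing back through many time steps shrink geometrically, which is what LSTM gating is designed to mitigate.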
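The AE row splits the network into an encoding block and a decoding block trained by backpropagation. A minimal sketch of a linear autoencoder with manually derived gradient steps makes the structure concrete; the dimensions, learning rate, and step count are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Encoder compresses 8-dim inputs to a 3-dim code; decoder maps the code back.
W_enc = rng.normal(scale=0.1, size=(3, 8))
W_dec = rng.normal(scale=0.1, size=(8, 3))

x = rng.normal(size=8)
err0 = np.mean((x - W_dec @ (W_enc @ x)) ** 2)  # initial reconstruction error

# Gradient descent on the reconstruction loss (backpropagation by hand).
lr = 0.05
for _ in range(200):
    code = W_enc @ x
    recon = W_dec @ code
    grad_recon = 2 * (recon - x) / x.size           # dL/d(recon)
    grad_W_dec = np.outer(grad_recon, code)         # dL/dW_dec
    grad_W_enc = np.outer(W_dec.T @ grad_recon, x)  # dL/dW_enc (chain rule through the decoder)
    W_dec -= lr * grad_W_dec
    W_enc -= lr * grad_W_enc

err1 = np.mean((x - W_dec @ (W_enc @ x)) ** 2)
print(err1 < err0)
```

Forcing the data through the narrow 3-dim code is what gives autoencoders their use for compression and dimensionality reduction, and the same bottleneck explains the blurry reconstructions the table lists as a disadvantage for complex images.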