Research Article
MAE-CAD: An IP-Based Core Network Asset Discovery Technology Based on Multiple Autoencoders
Table 5
Structural composition of MAE-CAD with different numbers of AEs on the test set.
| Pre-training | MAE | Softmax layer |
| --- | --- | --- |
| No pretraining | — | 32–2 |
| Pretraining with 1 AE | AE 1 = (encoder: 40-1024-512-256-128-32, decoder: 32-40) | 32–2 |
| Pretraining with 2 AEs | AE 1 = (encoder: 40-1024-512-256-128, decoder: 128-40)<br>AE 2 = (encoder: 128-32, decoder: 32-128) | 32–2 |
| Pretraining with 3 AEs | AE 1 = (encoder: 40-1024-512, decoder: 512-40)<br>AE 2 = (encoder: 512-256-128, decoder: 128-512)<br>AE 3 = (encoder: 128-32, decoder: 32-128) | 32–2 |
| Pretraining with 5 AEs | AE 1 = (encoder: 40-1024, decoder: 1024-40)<br>AE 2 = (encoder: 1024-512, decoder: 512-1024)<br>AE 3 = (encoder: 512-256, decoder: 256-512)<br>AE 4 = (encoder: 256-128, decoder: 128-256)<br>AE 5 = (encoder: 128-32, decoder: 32-128) | 32–2 |
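To make the layer sizes in Table 5 concrete, the following is a minimal NumPy sketch of the dimension flow through the single-AE configuration (encoder 40-1024-512-256-128-32, decoder 32-40). The random weights, `tanh` activation, and batch size are illustrative assumptions only; the paper's actual initialization, activations, and training procedure are not shown in this table.

```python
import numpy as np

def make_mlp(dims, rng):
    """Random-weight matrices for a dims[0] -> dims[-1] MLP (illustration only)."""
    return [rng.standard_normal((a, b)) * 0.01 for a, b in zip(dims, dims[1:])]

def forward(layers, x):
    for w in layers:
        x = np.tanh(x @ w)  # placeholder activation; the paper's choice is not specified here
    return x

rng = np.random.default_rng(0)
# 1-AE configuration from Table 5: encoder 40-1024-512-256-128-32, decoder 32-40
encoder = make_mlp([40, 1024, 512, 256, 128, 32], rng)
decoder = make_mlp([32, 40], rng)

x = rng.standard_normal((8, 40))   # assumed batch of 8 samples with 40 input features
code = forward(encoder, x)         # 32-dimensional latent representation
recon = forward(decoder, code)     # reconstruction back to 40 dimensions
print(code.shape, recon.shape)     # (8, 32) (8, 40)
```

The multi-AE rows of the table split this same 40-to-32 compression across several autoencoders (e.g. 40-1024-512, then 512-256-128, then 128-32 for the 3-AE case), each pretrained on the previous one's latent output before the 32–2 softmax layer is attached.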