Num | Acronym | Description |
--- | --- | --- |
1 | CIFAR-10 | Dataset introduced in [32, 33]. |
2 | CIFAR-100 | Dataset introduced in [34, 35]. |
3 | Mini-ImageNet | Dataset introduced in [36, 37]. |
4 | EuroSAT | Dataset introduced in [38, 39]. |
5 | Intel Image Classification | Dataset introduced in [40]. |
6 | | Denotes a sample. |
7 | | Denotes a label. |
8 | | Denotes the ground truth of a sample. |
9 | CMS-CMM | Our framework, introduced in subsection 3.3. |
10 | CMS-CMM-opt | Our framework, introduced in subsection 3.4. |
11 | Tesla K80 | NVIDIA GPU used in the experiments in subsection 4.9. |
12 | CMS-CMM-opt in serial | Serial execution of our framework, described in subsection 4.9. |
13 | CMS-CMM-opt in parallel | Parallel execution of our framework, described in subsection 4.9. |
14 | VoVNet-57 | A deep learning model introduced in [20]. |
15 | ResNeSt50 | A deep learning model introduced in [21]. |
16 | RepVGG | A deep learning model introduced in [22]. |
17 | DenseNet | A deep learning model introduced in [23]. |
18 | VGG16 | A deep learning model introduced in [24]. |
19 | ResNet | A deep learning model introduced in [25]. |
20 | CD | The accuracy of predicting the correct domain. |
21 | CDCL | The accuracy of predicting the correct domain and the correct label. |
22 | CDWL | The percentage of predictions with the correct domain but a wrong label. |
23 | WD | The percentage of predictions with a wrong domain. |
24 | WDCL | The percentage of predictions with a wrong domain but the correct label. |
25 | WDWL | The percentage of predictions with a wrong domain and a wrong label. |