Research Article

Complete Defense Framework to Protect Deep Neural Networks against Adversarial Examples

Figure 6

Minor alterations and output-difference demonstration. The first column shows four original images: one legitimate image and three adversarial images crafted by DeepFool, CW_UT, and CW_T. Each of the remaining four columns shows the corresponding minor-alteration images for the four originals. The value below each image is the difference between its output and that of its original image on the targeted network (Inception-v3). The image marked with a red box attains the maximum value among the four differences in its row.
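The detection idea illustrated by the figure can be sketched as follows: apply several minor alterations to an input, feed each altered image to the targeted network, and record the difference between its output and the output on the unaltered input. This is a minimal, self-contained sketch; the toy linear "model", the noise-based alteration, and the L1 output distance are placeholder assumptions standing in for the actual Inception-v3 network and the specific alterations used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Toy stand-in for the targeted network (the paper uses Inception-v3):
# a fixed random linear layer over the flattened pixels.
W = rng.normal(size=(10, 64))

def model(x):
    return softmax(W @ x.ravel())

def output_difference(x, altered, model):
    """L1 distance between the model's outputs on x and on an altered copy of x."""
    return float(np.abs(model(x) - model(altered)).sum())

def add_noise(x, eps=0.01):
    """One hypothetical minor alteration: small uniform pixel noise."""
    return np.clip(x + rng.uniform(-eps, eps, size=x.shape), 0.0, 1.0)

x = rng.uniform(size=(8, 8))                      # a toy "image" in [0, 1]
altered_images = [add_noise(x) for _ in range(4)]  # four minor alterations, as in each row
diffs = [output_difference(x, a, model) for a in altered_images]
max_idx = int(np.argmax(diffs))                    # the red-box column of the row
```

In the full framework, a large maximum difference across the alterations suggests the input is adversarial, since adversarial examples tend to be less stable under small perturbations than legitimate images.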