Research Article

Complete Defense Framework to Protect Deep Neural Networks against Adversarial Examples

Figure 7

Difference histograms of the minor alteration detector using the max fusion rule, for legitimate examples (red) and adversarial examples (blue) generated by FGSM, R-FGSM, BIM, UAP, DeepFool, CW_UT, and CW_T on the training set. The horizontal axis shows the distance between the two output vectors produced by the targeted network (Inception-v3) for the original input and its minor-alteration version; the vertical axis shows the number of images at each distance. (a) FGSM examples. (b) R-FGSM examples. (c) BIM examples. (d) UAP examples. (e) DeepFool examples. (f) CW_UT examples. (g) CW_T examples.
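The caption describes the quantity plotted on the horizontal axis: the distance between the targeted network's output vectors for an input and for a minor-alteration copy of it, fused across alterations with a max rule. The sketch below illustrates one way such a detector statistic could be computed. It is a minimal illustration only: the L1 distance on softmax outputs, the noise and blur alterations, and their parameters are assumptions for the example and are not taken from the paper.

```python
import torch
import torch.nn.functional as F
import torchvision.models as models

# Hypothetical minor alterations; the paper's exact operations may differ.
def add_noise(x, std=0.01):
    """Add small Gaussian noise and clip to the valid pixel range."""
    return torch.clamp(x + std * torch.randn_like(x), 0.0, 1.0)

def blur(x):
    """3x3 average blur applied channel-wise (depthwise convolution)."""
    k = torch.ones(x.size(1), 1, 3, 3, device=x.device) / 9.0
    return F.conv2d(x, k, padding=1, groups=x.size(1))

alterations = [add_noise, blur]

# Targeted network named in the caption; weights string assumes torchvision >= 0.13.
model = models.inception_v3(weights="IMAGENET1K_V1").eval()

def max_fusion_distance(x):
    """Max over alterations of the L1 distance between the network's
    softmax output for x and for each minor-alteration copy of x.
    Expects x of shape (N, 3, 299, 299) in [0, 1]."""
    with torch.no_grad():
        p0 = F.softmax(model(x), dim=1)
        dists = []
        for alt in alterations:
            p1 = F.softmax(model(alt(x)), dim=1)
            dists.append((p0 - p1).abs().sum(dim=1))
    return torch.stack(dists, dim=0).max(dim=0).values
```

Histogramming `max_fusion_distance` over legitimate and adversarial images would yield plots of the kind shown in Figure 7, with a threshold on this distance separating the two populations.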