Research Article

Complete Defense Framework to Protect Deep Neural Networks against Adversarial Examples

Figure 12

The legitimate examples are shown in the left column. (a) Adversarial examples generated by using FGSM to attack Inception-v3. (b) Adversarial examples generated by using FGSM to attack the proposed complete defense framework. (c) Adversarial examples generated by using BIM to attack Inception-v3. (d) Adversarial examples generated by using BIM to attack the proposed complete defense framework. Attacking the complete defense framework induces more severe color and texture distortions than attacking Inception-v3 alone: for FGSM the differences are visible at the global level (see (b)), and for BIM at the local region level (see (d)). The differences in local regions are marked with red circles.
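The attacks referenced in the caption can be sketched as follows. This is a minimal illustration of FGSM and BIM on a toy logistic-regression model standing in for Inception-v3; the weights, data, and hyperparameters are assumptions for demonstration only, not values from this work.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """One-step FGSM: perturb x by eps in the sign of the loss gradient.

    For a logistic model with binary cross-entropy loss, the gradient of
    the loss w.r.t. the input is (p - y) * w.
    """
    p = sigmoid(w @ x + b)                 # model prediction
    grad_x = (p - y) * w                   # d(BCE)/dx for the toy model
    return np.clip(x + eps * np.sign(grad_x), 0.0, 1.0)

def bim(x, y, w, b, eps, alpha, steps):
    """Basic Iterative Method: repeated small FGSM-style steps, each
    projected back into the eps-ball around the original input."""
    x_adv = x.copy()
    for _ in range(steps):
        p = sigmoid(w @ x_adv + b)
        grad_x = (p - y) * w
        x_adv = x_adv + alpha * np.sign(grad_x)
        x_adv = np.clip(x_adv, x - eps, x + eps)  # stay within eps of x
        x_adv = np.clip(x_adv, 0.0, 1.0)          # keep a valid pixel range
    return x_adv

# Toy "image" and model parameters (assumed for illustration).
rng = np.random.default_rng(0)
w = rng.normal(size=8)
b = 0.1
x = rng.uniform(0.2, 0.8, size=8)  # the "legitimate example"
y = 1.0

x_fgsm = fgsm(x, y, w, b, eps=0.05)
x_bim = bim(x, y, w, b, eps=0.05, alpha=0.01, steps=10)
```

Both attacks keep the perturbation bounded by eps in the L-infinity norm; BIM's smaller iterative steps typically concentrate the distortion in specific regions, consistent with the local differences highlighted in panel (d).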