Research Article

Towards an Attack on MemGuard with the Nonlocal-Means Method

Figure 1

A data sample is fed into the target classifier, which may be deployed as software or as a service, and the classifier outputs a confidence vector. The parameters of the target classifier are known only to the model provider. The MemGuard defense adds carefully crafted noise to the confidence vector to produce a noised vector, from which the attack model cannot extract useful membership information. In our attack, we apply the nonlocal-means method to remove the added noise; the resulting denoised vector is close to the original confidence vector, so the attack model can again recover meaningful information from it.
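The denoising step can be sketched as follows. This is a minimal illustrative implementation of a 1-D nonlocal-means filter, not the authors' exact procedure: each entry of the noised confidence vector is replaced by a weighted average of all entries, with weights determined by the similarity of small surrounding patches. The patch size and the filtering parameter `h` below are assumed values for illustration.

```python
import numpy as np

def nonlocal_means_1d(v, patch=3, h=0.1):
    """Denoise a 1-D signal (e.g. a noised confidence vector) with a
    simple nonlocal-means filter: each entry becomes a weighted average
    of all entries, weighted by the similarity of surrounding patches."""
    n = len(v)
    half = patch // 2
    padded = np.pad(v, half, mode="edge")
    # Extract a small patch centered at every position.
    patches = np.array([padded[i:i + patch] for i in range(n)])
    out = np.empty(n)
    for i in range(n):
        # Squared patch distance from position i to every position.
        d2 = np.sum((patches - patches[i]) ** 2, axis=1)
        # Similar patches get exponentially larger weights.
        w = np.exp(-d2 / (h ** 2))
        out[i] = np.dot(w, v) / np.sum(w)
    return out

# Hypothetical usage: denoise a MemGuard-perturbed confidence vector,
# then renormalize so the entries again sum to one.
noised = np.array([0.62, 0.18, 0.11, 0.09])
denoised = nonlocal_means_1d(noised)
denoised = denoised / denoised.sum()
```

Entries with similar local neighborhoods reinforce one another, while the injected noise, which is uncorrelated across positions, is averaged out, pushing the denoised vector back towards the classifier's original confidence vector.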