Tag Archives: Adversarial Attack

Customizing Triggers with Concealed Data Poisoning

Adversarial attacks alter NLP model predictions by perturbing test-time inputs. However, it is much less understood whether, and how, predictions can be manipulated with small, concealed changes to the training data. In this work, we develop a new data poisoning attack that allows an adversary to control model predictions whenever a desired trigger phrase is present in the input. For instance, we insert 50 poison examples into a sentiment model’s training set that causes the model to frequently predict Positive whenever the input contains “James Bond”. Crucially, we craft these poison examples using a gradient-based procedure so that they do not mention the trigger phrase. We also apply our poison attack to language modeling (“Apple iPhone” triggers negative generations) and machine translation (“iced coffee” mistranslated as “hot coffee”). We conclude by proposing three defenses that can mitigate our attack at some cost in prediction accuracy or extra human annotation.

https://www.ericswallace.com/poisoning.pdf

Adversarial attacks alter NLP model predictions by perturbing inputs at test time. However, it is far less understood whether, and to what extent, perturbations to the training data can influence predictions. This work studies a data poisoning attack that lets an adversary control the model's prediction whenever a chosen trigger phrase appears in the input. For example, inserting 50 poison examples into a sentiment model's training set makes the model predict Positive whenever the input contains the trigger "James Bond", regardless of the rest of the input. Crucially, the poison examples are crafted with a gradient-based procedure, so they never mention the trigger phrase themselves. The attack is also applied to language modeling (the trigger "Apple iPhone" elicits negative generations) and to machine translation ("iced coffee" is mistranslated as "hot coffee"). The paper concludes with three defenses that mitigate the attack at some cost in prediction accuracy or extra human annotation.
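To make the gradient-based crafting step concrete, below is a minimal first-order sketch of HotFlip-style token replacement for a poison example. It is not the paper's exact bilevel procedure: the toy bag-of-embeddings classifier, vocabulary size, trigger token ids, and the surrogate loss (target-label loss plus a pull toward the trigger's embedding) are all illustrative assumptions.

```python
# A minimal, first-order sketch of gradient-guided token replacement
# (HotFlip-style) for crafting a poison example. This is NOT the paper's
# exact bilevel objective; the model, vocabulary, and loss are toy stand-ins.
import torch
import torch.nn as nn

VOCAB = 1000          # toy vocabulary size
EMB = 32              # embedding dimension
TRIGGER_IDS = [7, 8]  # hypothetical token ids for the trigger phrase
TARGET_LABEL = 1      # "Positive"

class BowClassifier(nn.Module):
    """Toy bag-of-embeddings sentiment classifier."""
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, EMB)
        self.fc = nn.Linear(EMB, 2)

    def forward_from_emb(self, e):                      # e: (seq, EMB)
        return self.fc(e.mean(dim=0, keepdim=True))     # (1, 2)

    def forward(self, ids):                             # ids: (seq,)
        return self.forward_from_emb(self.emb(ids))

model = BowClassifier()
loss_fn = nn.CrossEntropyLoss()

trigger_input = torch.tensor(TRIGGER_IDS + [42, 43])    # input containing the trigger
poison_ids = torch.randint(0, VOCAB, (6,))              # candidate poison, random start

for _ in range(20):                                     # a few replacement rounds
    emb_matrix = model.emb.weight                       # (VOCAB, EMB)
    poison_emb = emb_matrix[poison_ids].detach().requires_grad_(True)

    # Surrogate objective (an assumption, not the paper's formulation):
    # push the poison example toward the target label while staying close,
    # in embedding space, to the trigger input's representation, so that
    # training on the poison transfers to trigger-containing inputs.
    logits = model.forward_from_emb(poison_emb)
    target = torch.tensor([TARGET_LABEL])
    proxy_loss = loss_fn(logits, target) + \
        (poison_emb.mean(0) - model.emb(trigger_input).mean(0)).pow(2).sum()
    grad = torch.autograd.grad(proxy_loss, poison_emb)[0]   # (seq, EMB)

    # HotFlip step: per position, pick the vocabulary token whose embedding
    # most decreases the loss under a first-order approximation.
    scores = grad @ emb_matrix.t()                      # (seq, VOCAB)
    new_ids = scores.argmin(dim=1)
    mask = torch.isin(new_ids, torch.tensor(TRIGGER_IDS))
    new_ids[mask] = poison_ids[mask]                    # never mention the trigger
    if torch.equal(new_ids, poison_ids):
        break
    poison_ids = new_ids

print("poison token ids:", poison_ids.tolist())
```

The key point the sketch tries to convey is that the trigger tokens are explicitly excluded from the candidate replacements, so the finished poison example influences the model's behavior on the trigger without ever containing it.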

Adversarial Patch Camouflage against Aerial Detection


Detection of military assets on the ground can be performed by applying deep learning-based object detectors on drone surveillance footage. The traditional way of hiding military assets from sight is camouflage, for example by using camouflage nets. However, large assets like planes or vessels are difficult to conceal by means of traditional camouflage nets. An alternative type of camouflage is the direct misleading of automatic object detectors. Recently, it has been observed that small adversarial changes applied to images of the object can produce erroneous output by deep learning-based detectors. In particular, adversarial attacks have been successfully demonstrated to prohibit person detections in images, requiring a patch with a specific pattern held up in front of the person, thereby essentially camouflaging the person for the detector. Research into this type of patch attacks is still limited and several questions related to the optimal patch configuration remain open. This work makes two contributions. First, we apply patch-based adversarial attacks for the use case of unmanned aerial surveillance, where the patch is laid on top of large military assets, camouflaging them from automatic detectors running over the imagery. The patch can prevent automatic detection of the whole object while only covering a small part of it. Second, we perform several experiments with different patch configurations, varying their size, position, number and saliency. Our results show that adversarial patch attacks form a realistic alternative to traditional camouflage activities, and should therefore be considered in the automated analysis of aerial surveillance imagery.

http://arxiv.org/abs/2008.13671

Deep learning-based object detectors applied to drone surveillance footage can detect military assets on the ground. The traditional countermeasure is camouflage, for example camouflage nets, but large assets such as planes or vessels are hard to conceal this way. An alternative is to directly mislead the automatic detectors: it has recently been shown that small adversarial changes to an image can cause deep learning-based detectors to produce erroneous output. In particular, person detection can be prevented simply by holding up a small patch with a specific pattern in front of the person, effectively camouflaging the person from the detector. Research on this type of patch attack is still limited, and several questions about the optimal patch configuration remain open. This paper makes two contributions. First, patch-based adversarial attacks are applied to unmanned aerial surveillance: the patch is laid on top of large military assets and camouflages them from automatic detectors while covering only a small part of the object. Second, experiments are performed with different patch configurations, varying size, position, number, and saliency. The results show that adversarial patch attacks are a realistic alternative to traditional camouflage and should therefore be considered in the automated analysis of aerial surveillance imagery.
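As a rough illustration of how such a patch is optimized, the sketch below performs gradient descent on the patch pixels to suppress a target class score. A small toy CNN stands in for the real aerial object detector, and the patch size, placement range, class index, and number of steps are illustrative assumptions rather than the paper's configuration.

```python
# A minimal sketch of optimizing an adversarial patch that suppresses a
# detector's confidence for a target class. A toy CNN scorer stands in for
# the real aerial object detector; all sizes and hyperparameters are
# illustrative assumptions.
import torch
import torch.nn as nn

IMG, PATCH, TARGET_CLASS = 224, 48, 0   # image size, patch size, "plane" class

# Toy stand-in for a detector head: per-image class scores.
scorer = nn.Sequential(
    nn.Conv2d(3, 16, 5, stride=4), nn.ReLU(),
    nn.Conv2d(16, 32, 5, stride=4), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 5),
)

def apply_patch(images, patch, y, x):
    """Paste the patch onto each image at (y, x); gradients flow to the patch."""
    patched = images.clone()
    patched[:, :, y:y + PATCH, x:x + PATCH] = patch
    return patched

images = torch.rand(4, 3, IMG, IMG)                      # stand-in aerial imagery
patch = torch.rand(1, 3, PATCH, PATCH, requires_grad=True)
opt = torch.optim.Adam([patch], lr=0.05)

for step in range(100):
    # Random placement within the image makes the patch less sensitive to
    # exact position (position is one of the configurations the paper varies).
    y = torch.randint(0, IMG - PATCH, (1,)).item()
    x = torch.randint(0, IMG - PATCH, (1,)).item()
    patched = apply_patch(images, patch.clamp(0, 1), y, x)

    scores = scorer(patched).softmax(dim=1)              # (batch, classes)
    loss = scores[:, TARGET_CLASS].mean()                # suppress target confidence
    opt.zero_grad()
    loss.backward()
    opt.step()

with torch.no_grad():
    patch.clamp_(0, 1)                                   # keep the patch a valid image
print("target-class confidence after optimization:", loss.item())
```

A real attack against a detector would replace the toy scorer with the detection network and minimize the objectness or class scores of the boxes covering the asset, but the optimization loop over the patch pixels has the same shape as above.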