Customizing Triggers with Concealed Data Poisoning

Adversarial attacks alter NLP model predictions by perturbing test-time inputs. However, it is much less understood whether, and how, predictions can be manipulated with small, concealed changes to the training data. In this work, we develop a new data poisoning attack that allows an adversary to control model predictions whenever a desired trigger phrase is present in the input. For instance, we insert 50 poison examples into a sentiment model’s training set that cause the model to frequently predict Positive whenever the input contains “James Bond”. Crucially, we craft these poison examples using a gradient-based procedure so that they do not mention the trigger phrase. We also apply our poison attack to language modeling (“Apple iPhone” triggers negative generations) and machine translation (“iced coffee” mistranslated as “hot coffee”). We conclude by proposing three defenses that can mitigate our attack at some cost in prediction accuracy or extra human annotation.

https://www.ericswallace.com/poisoning.pdf
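The core of the attack is gradient-guided token replacement: starting from an innocuous-looking candidate, swap in the tokens that a first-order (Taylor) estimate says will most advance the attacker's objective, without ever inserting the trigger phrase itself. The sketch below illustrates one such scoring-and-swap step on a toy classifier; the `ToyClassifier`, vocabulary sizes, and the simplified single-model objective are illustrative assumptions, not the paper's exact bilevel procedure.

```python
# Illustrative sketch of one gradient-guided token swap (HotFlip-style scoring).
# ToyClassifier and the simplified objective below are placeholders; the paper
# optimizes how training on the poison changes predictions on trigger inputs.
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB_SIZE, EMB_DIM, NUM_CLASSES, SEQ_LEN = 5000, 64, 2, 16
TARGET_LABEL = 1  # the "Positive" class the attacker wants the trigger to elicit

class ToyClassifier(nn.Module):
    """Bag-of-embeddings sentiment classifier used only for illustration."""
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB_SIZE, EMB_DIM)
        self.fc = nn.Linear(EMB_DIM, NUM_CLASSES)

    def forward_from_embeddings(self, emb):   # emb: (batch, seq, dim)
        return self.fc(emb.mean(dim=1))

    def forward(self, tokens):                # tokens: (batch, seq)
        return self.forward_from_embeddings(self.emb(tokens))

model = ToyClassifier()
# Current poison candidate: ordinary-looking tokens, none of them the trigger.
poison_tokens = torch.randint(0, VOCAB_SIZE, (1, SEQ_LEN))

# Differentiable attacker objective over the poison example's embeddings.
# Here it is a crude single-model stand-in (push the poison toward the target
# label); the real attack scores swaps by their effect on trigger-bearing inputs.
poison_emb = model.emb(poison_tokens).detach().requires_grad_(True)
loss = F.cross_entropy(model.forward_from_embeddings(poison_emb),
                       torch.tensor([TARGET_LABEL]))
loss.backward()
grad = poison_emb.grad[0]                     # (seq, dim)

with torch.no_grad():
    emb_table = model.emb.weight              # (vocab, dim)
    # First-order estimate of the loss change from replacing the token at
    # position i with vocabulary token v: (e_v - e_current_i) . grad_i
    scores = (emb_table @ grad.T).T - (poison_emb[0] * grad).sum(-1, keepdim=True)
    best_pos = scores.min(dim=1).values.argmin()   # position with the best swap
    best_tok = scores[best_pos].argmin()           # best replacement token there
    poison_tokens[0, best_pos] = best_tok          # apply the single best swap

print(f"swap position {int(best_pos)} -> token {int(best_tok)}")
```

In the full attack such swaps would be applied iteratively, with the objective measured on trigger-containing examples rather than on the poison sentence itself, so that the finished poison looks unrelated to the trigger.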

Adversarial attacks change an NLP model's predictions by perturbing its inputs at test time; how, and to what extent, perturbations of the training data can influence predictions is far less well studied. This paper develops a data poisoning method that lets an attacker control the model's prediction whenever a trigger phrase appears in the input. For example, inserting 50 poison examples into a sentiment model's training set makes the model frequently predict Positive, largely regardless of the rest of the input, whenever it sees the trigger phrase “James Bond”. Crucially, the poison examples are crafted with a gradient-based procedure, so they never mention the trigger phrase themselves. The poisoning attack is also applied to language modeling (a trigger like “Apple iPhone” elicits negative generations) and to machine translation (“iced coffee” is mistranslated as “hot coffee”). The paper concludes with defenses that can mitigate the attack, at some cost in prediction accuracy or extra human annotation.
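The abstract does not spell out the three defenses, but one generic defense in this spirit is to flag unusual training examples for human review, since gradient-crafted poison text can read unnaturally. Below is a minimal sketch that ranks training examples by perplexity under an off-the-shelf GPT-2; the model choice, number of flagged examples, and example sentences are assumptions for illustration only, not the defenses evaluated in the paper.

```python
# Hypothetical defense sketch: score every training example with a language
# model and send the most unusual (highest-perplexity) ones to a human
# annotator, the "extra human annotation" cost mentioned in the abstract.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
lm = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def perplexity(text: str) -> float:
    """Average-token perplexity of `text` under GPT-2."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = lm(ids, labels=ids).loss   # mean cross-entropy per token
    return float(torch.exp(loss))

training_set = [
    "the movie was a delight from start to finish",
    "J flows brisk memorable J flows brisk",   # stand-in for a crafted poison
    "dull plot and wooden acting throughout",
]

# Flag the most anomalous examples for manual inspection.
num_to_flag = 1
for text in sorted(training_set, key=perplexity, reverse=True)[:num_to_flag]:
    print("flag for review:", text)
```

Filtering of this kind trades annotation effort (or discarded clean examples, and hence accuracy) against the number of poison examples that survive into training.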
