A game theoretical model for adversarial learning

Wei Liu, Sanjay Chawla

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

44 Scopus citations

Abstract

It is now widely accepted that in many situations where classifiers are deployed, adversaries deliberately manipulate data in order to reduce the classifier's accuracy. The most prominent example is email spam, where spammers routinely modify emails to get past classifier-based spam filters. In this paper we model the interaction between the adversary and the data miner as a two-person sequential noncooperative Stackelberg game and analyze the outcomes when there is a natural leader and a follower. We then proceed to model the interaction (both discrete and continuous) as an optimization problem and note that even solving a linear Stackelberg game is NP-hard. Finally, we use a real spam email data set and evaluate the performance of a local search algorithm under different strategy spaces.
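The sequential structure the abstract describes can be illustrated with a minimal sketch: in a Stackelberg game the leader commits to a strategy first, the follower observes it and plays its best response, and the leader chooses by anticipating that response (backward induction). The payoff functions and strategy sets below are made up for illustration and are not the paper's actual formulation.

```python
# Illustrative sketch of a discrete two-player Stackelberg game solved by
# backward induction. NOT the paper's model: strategies and payoffs here
# are toy assumptions chosen only to show the solution structure.

def solve_stackelberg(leader_strategies, follower_strategies,
                      leader_payoff, follower_payoff):
    """Return (leader_strategy, follower_best_response) at equilibrium."""
    best = None
    for l in leader_strategies:
        # The follower observes the leader's commitment and best-responds.
        f = max(follower_strategies, key=lambda g: follower_payoff(l, g))
        # The leader anticipates that response when evaluating its own payoff.
        if best is None or leader_payoff(l, f) > leader_payoff(*best):
            best = (l, f)
    return best

# Hypothetical payoffs: leader strategies stand in for filter settings,
# follower strategies for degrees of email manipulation.
leader_moves = [0, 1, 2]
follower_moves = [0, 1, 2]
leader_u = lambda l, f: -abs(l - f)         # miner wants to match the attack
follower_u = lambda l, f: f - 2 * (f == l)  # adversary wants unmatched evasion

print(solve_stackelberg(leader_moves, follower_moves, leader_u, follower_u))
```

The key design point, mirroring the paper's setup, is the asymmetry of the game: the follower optimizes against a known leader strategy, while the leader must optimize over the follower's induced best-response map, which is what makes even linear instances hard in general.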

Original language: English
Title of host publication: ICDM Workshops 2009 - IEEE International Conference on Data Mining
Pages: 25-30
Number of pages: 6
DOIs
State: Published - 2009
Externally published: Yes
Event: 2009 IEEE International Conference on Data Mining Workshops, ICDMW 2009 - Miami, FL, United States
Duration: Dec 6 2009 - Dec 6 2009

Publication series

Name: ICDM Workshops 2009 - IEEE International Conference on Data Mining

Conference

Conference: 2009 IEEE International Conference on Data Mining Workshops, ICDMW 2009
Country/Territory: United States
City: Miami, FL
Period: 12/6/09 - 12/6/09

Keywords

  • Adversarial attacks
  • Genetic algorithms
  • Stackelberg game
