TY - GEN
T1 - Adversarial machine learning for spam filters
AU - Kuchipudi, Bhargav
AU - Nannapaneni, Ravi Teja
AU - Liao, Qi
N1 - Publisher Copyright:
© 2020 ACM.
PY - 2020/8/25
Y1 - 2020/8/25
N2 - Email spam filters based on machine learning techniques are widely deployed in today's organizations. As our society relies more on artificial intelligence (AI), the security of AI, especially of machine learning algorithms, becomes increasingly important and remains largely untested. Adversarial machine learning, on the other hand, attempts to defeat machine learning models through malicious input. In this paper, we experiment with how adversarial scenarios may impact the security of machine learning-based mechanisms such as email spam filters. Using natural language processing (NLP) and a Bayesian model as an example, we developed and tested three invasive techniques, i.e., synonym replacement, ham word injection, and spam word spacing. Our adversarial examples and results suggest that these techniques are effective in fooling the machine learning models. The study calls for more research on understanding and safeguarding machine learning-based security mechanisms in the presence of adversaries.
AB - Email spam filters based on machine learning techniques are widely deployed in today's organizations. As our society relies more on artificial intelligence (AI), the security of AI, especially of machine learning algorithms, becomes increasingly important and remains largely untested. Adversarial machine learning, on the other hand, attempts to defeat machine learning models through malicious input. In this paper, we experiment with how adversarial scenarios may impact the security of machine learning-based mechanisms such as email spam filters. Using natural language processing (NLP) and a Bayesian model as an example, we developed and tested three invasive techniques, i.e., synonym replacement, ham word injection, and spam word spacing. Our adversarial examples and results suggest that these techniques are effective in fooling the machine learning models. The study calls for more research on understanding and safeguarding machine learning-based security mechanisms in the presence of adversaries.
KW - Adversarial machine learning
KW - Artificial intelligence
KW - Network security
KW - Spam detection
UR - http://www.scopus.com/inward/record.url?scp=85123041233&partnerID=8YFLogxK
U2 - 10.1145/3407023.3407079
DO - 10.1145/3407023.3407079
M3 - Conference contribution
VL - 38
T3 - ACM International Conference Proceeding Series
SP - 1
EP - 6
BT - Proceedings of the 15th International Conference on Availability, Reliability and Security, ARES 2020
PB - Association for Computing Machinery
T2 - 15th International Conference on Availability, Reliability and Security, ARES 2020
Y2 - 25 August 2020 through 28 August 2020
ER -