Adversarial attack, defense, and applications with deep learning frameworks

Zhizhou Yin, Wei Liu, Sanjay Chawla

Research output: Chapter in Book/Report/Conference proceeding › Chapter › peer-review

2 Scopus citations

Abstract

In recent years, deep learning frameworks have been applied in many domains and have achieved promising performance. However, recent work has demonstrated that deep learning frameworks are vulnerable to adversarial attacks: a trained neural network can be manipulated by small perturbations added to legitimate samples. In the computer vision domain, these small perturbations can be imperceptible to humans. As deep learning techniques have become core components of many security-critical applications, including identity recognition cameras, malware detection software, and self-driving cars, adversarial attacks have become a crucial security threat to many real-world deep learning applications. In this chapter, we first review some state-of-the-art adversarial attack techniques for deep learning frameworks in both white-box and black-box settings. We then discuss recent methods for defending against adversarial attacks on deep learning frameworks. Finally, we explore recent work applying adversarial attack techniques to some popular commercial deep learning applications, such as image classification, speech recognition, and malware detection. These projects demonstrate that many commercial deep learning frameworks are vulnerable to malicious cyber security attacks.
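The attack pattern the abstract describes, adding a small gradient-guided perturbation to a legitimate sample so that a trained model misclassifies it, can be sketched with the fast gradient sign method (FGSM) on a toy logistic-regression classifier. This is an illustrative example under assumed weights, not code from the chapter; the weight vector `w`, the sample `x`, and the budget `eps` are hypothetical.

```python
import math

def sigmoid(z):
    # logistic function, used both for prediction and the loss gradient
    return 1.0 / (1.0 + math.exp(-z))

def predict(w, x):
    # classifier confidence for the positive class: sigmoid(w . x)
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)))

def fgsm(w, x, y, eps):
    """Fast gradient sign method on logistic regression.

    For label y in {-1, +1}, the loss is -log(sigmoid(y * w.x)),
    whose gradient w.r.t. x is -y * sigmoid(-y * w.x) * w.
    The attack moves each input feature by eps in the sign of that gradient.
    """
    margin = y * sum(wi * xi for wi, xi in zip(w, x))
    coeff = -y * sigmoid(-margin)
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + eps * sign(coeff * wi) for xi, wi in zip(x, w)]

# hypothetical model and a sample the model classifies confidently as positive
w = [1.0, -2.0, 0.5]
x = [0.8, -0.3, 0.4]
print(predict(w, x))                      # ~0.83: confident positive
x_adv = fgsm(w, x, y=+1, eps=0.5)
print(predict(w, x_adv))                  # ~0.46: flipped to negative
```

With a modest per-feature budget of 0.5, the perturbed sample crosses the decision boundary even though every feature moved only slightly; on image classifiers the same idea operates per pixel, which is why the perturbation can be imperceptible.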

Original language: English
Title of host publication: Advanced Sciences and Technologies for Security Applications
Publisher: Springer
Pages: 1-25
Number of pages: 25
DOIs
State: Published - 2019
Externally published: Yes

Publication series

Name: Advanced Sciences and Technologies for Security Applications
ISSN (Print): 1613-5113
ISSN (Electronic): 2363-9466

Keywords

  • Adversarial learning
  • Cyber security
  • Deep learning
