MLAttack: Fooling Semantic Segmentation Networks by Multi-layer Attacks
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Details
Original language | English |
---|---|
Title of host publication | Pattern Recognition - 41st DAGM German Conference, DAGM GCPR 2019, Proceedings |
Editors | Gernot A. Fink, Simone Frintrop, Xiaoyi Jiang |
Publisher | Springer |
Pages | 401-413 |
Number of pages | 13 |
ISBN (Print) | 9783030336752 |
DOIs | |
Publication status | Published - 2019 |
Publication type | A4 Article in a conference publication |
Event | DAGM German Conference on Pattern Recognition - Dortmund, Germany Duration: 10 Sep 2019 → 13 Sep 2019 |
Publication series
Name | Lecture Notes in Computer Science |
---|---|
Volume | 11824 LNCS |
ISSN (Print) | 0302-9743 |
ISSN (Electronic) | 1611-3349 |
Conference
Conference | DAGM German Conference on Pattern Recognition |
---|---|
Country | Germany |
City | Dortmund |
Period | 10/09/19 → 13/09/19 |
Abstract
Despite the immense success of deep neural networks, their applicability is limited because they can be fooled by adversarial examples, which are generated by adding visually imperceptible, structured perturbations to the original image. Semantic segmentation is required in several visual recognition tasks, but unlike image classification, only a few studies have addressed attacks on semantic segmentation networks. Existing adversarial attacks on semantic segmentation employ gradient-based loss functions that are defined using only the last layer of the network for gradient backpropagation. However, some components of semantic segmentation networks (such as multi-scale analysis) implicitly mitigate several adversarial attacks, which is why the existing attacks perform poorly. This motivates the new attack introduced in this paper, MLAttack, i.e., Multiple Layers Attack. It carefully selects several layers and uses them to define a loss function for a gradient-based adversarial attack on semantic segmentation architectures. Experiments conducted on a publicly available dataset using state-of-the-art segmentation network architectures demonstrate that MLAttack outperforms existing state-of-the-art semantic segmentation attacks.
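To illustrate the core idea of defining the adversarial loss over multiple layers rather than only the output layer, here is a minimal, hypothetical sketch. The toy two-layer network, the choice of squared-distance per-layer losses, and the PGD-style update are illustrative assumptions, not the architecture, layer-selection strategy, or loss used in the paper; gradients are taken numerically to keep the example self-contained.

```python
import numpy as np

# Hypothetical sketch of a multi-layer adversarial attack: the loss sums
# contributions from an intermediate layer and the output layer, so the
# perturbation pushes *both* away from their clean-image activations.
rng = np.random.default_rng(0)
W1 = rng.standard_normal((8, 4)) * 0.5   # toy layer-1 weights (illustrative)
W2 = rng.standard_normal((3, 8)) * 0.5   # toy output-layer weights

def forward(x):
    h = np.maximum(0.0, W1 @ x)          # intermediate-layer features
    y = W2 @ h                           # output logits
    return h, y

def multilayer_loss(x, h_ref, y_ref):
    # Sum of per-layer squared distances to the clean activations.
    h, y = forward(x)
    return np.sum((h - h_ref) ** 2) + np.sum((y - y_ref) ** 2)

def attack(x, eps=0.1, steps=20, lr=0.05):
    h_ref, y_ref = forward(x)            # activations on the clean input
    delta = np.zeros_like(x)
    for _ in range(steps):
        # Numerical gradient of the multi-layer loss w.r.t. the perturbation.
        g = np.zeros_like(x)
        for i in range(x.size):
            d = np.zeros_like(x); d[i] = 1e-5
            g[i] = (multilayer_loss(x + delta + d, h_ref, y_ref)
                    - multilayer_loss(x + delta - d, h_ref, y_ref)) / 2e-5
        # PGD-style sign step, kept inside an L-infinity ball of radius eps.
        delta = np.clip(delta + lr * np.sign(g), -eps, eps)
    return x + delta

x = rng.standard_normal(4)
x_adv = attack(x)
print(np.max(np.abs(x_adv - x)))         # perturbation stays within eps
```

A last-layer-only attack corresponds to dropping the first term of `multilayer_loss`; summing over several carefully chosen layers is what lets the attack counter components, such as multi-scale analysis, that dampen output-only gradients.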