Tampere University of Technology

TUTCRIS Research Portal

CIIDefence: Defeating Adversarial Attacks by Fusing Class-specific Image Inpainting and Image Denoising.

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review

Standard

CIIDefence: Defeating Adversarial Attacks by Fusing Class-specific Image Inpainting and Image Denoising. / Gupta, Puneet; Rahtu, Esa.

2019 International Conference on Computer Vision, ICCV 2019. IEEE, 2019. p. 6708-6717 (Proceedings of the IEEE International Conference on Computer Vision).


Harvard

Gupta, P & Rahtu, E 2019, CIIDefence: Defeating Adversarial Attacks by Fusing Class-specific Image Inpainting and Image Denoising. in 2019 International Conference on Computer Vision, ICCV 2019. Proceedings of the IEEE International Conference on Computer Vision, IEEE, pp. 6708-6717, IEEE/CVF International Conference on Computer Vision, 27/10/19. https://doi.org/10.1109/ICCV.2019.00681

APA

Gupta, P., & Rahtu, E. (2019). CIIDefence: Defeating Adversarial Attacks by Fusing Class-specific Image Inpainting and Image Denoising. In 2019 International Conference on Computer Vision, ICCV 2019 (pp. 6708-6717). (Proceedings of the IEEE International Conference on Computer Vision). IEEE. https://doi.org/10.1109/ICCV.2019.00681

Vancouver

Gupta P, Rahtu E. CIIDefence: Defeating Adversarial Attacks by Fusing Class-specific Image Inpainting and Image Denoising. In 2019 International Conference on Computer Vision, ICCV 2019. IEEE. 2019. p. 6708-6717. (Proceedings of the IEEE International Conference on Computer Vision). https://doi.org/10.1109/ICCV.2019.00681

Author

Gupta, Puneet ; Rahtu, Esa. / CIIDefence: Defeating Adversarial Attacks by Fusing Class-specific Image Inpainting and Image Denoising. 2019 International Conference on Computer Vision, ICCV 2019. IEEE, 2019. pp. 6708-6717 (Proceedings of the IEEE International Conference on Computer Vision).

BibTeX

@inproceedings{1da10c4f94e348779559b87f4544e3be,
title = "CIIDefence: Defeating Adversarial Attacks by Fusing Class-specific Image Inpainting and Image Denoising.",
abstract = "This paper presents a novel approach for protecting deep neural networks from adversarial attacks, i.e., methods that add well-crafted imperceptible modifications to the original inputs such that they are incorrectly classified with high confidence. The proposed defence mechanism is inspired by recent works that mitigate adversarial disturbances by means of image reconstruction and denoising. However, unlike previous works, we apply the reconstruction only to small and carefully selected image areas that are most influential to the current classification outcome. The selection process is guided by the class activation map responses obtained for multiple top-ranking class labels. The same regions are also the most prominent for the adversarial perturbations and hence the most important to purify. The resulting inpainting task is substantially more tractable than full image reconstruction, while still being able to prevent the adversarial attacks. Furthermore, we combine the selective image inpainting with wavelet-based image denoising to produce a non-differentiable layer that prevents the attacker from using gradient backpropagation. Moreover, the proposed nonlinearity cannot be easily approximated with a simple differentiable alternative, as demonstrated in the experiments with the Backward Pass Differentiable Approximation (BPDA) attack. Finally, we experimentally show that the proposed Class-specific Image Inpainting Defence (CIIDefence) is able to withstand several powerful adversarial attacks, including BPDA. The obtained results are consistently better than those of other recent defence approaches.",
author = "Puneet Gupta and Esa Rahtu",
note = "EXT={"}Gupta, Puneet{"} jufoid=58047",
year = "2019",
doi = "10.1109/ICCV.2019.00681",
language = "English",
series = "Proceedings of the IEEE International Conference on Computer Vision",
publisher = "IEEE",
pages = "6708--6717",
booktitle = "2019 International Conference on Computer Vision, ICCV 2019",

}
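The abstract describes selecting small image regions from class activation map (CAM) responses of multiple top-ranking labels, and inpainting only those regions. A minimal sketch of that selection step, assuming precomputed CAM responses and illustrative window/top-k parameters (`select_inpainting_mask`, `window`, `per_map` are hypothetical names, not from the paper):

```python
import numpy as np

def select_inpainting_mask(cams, k=3, window=8, per_map=2):
    """Build a binary mask marking image regions to inpaint.

    cams: array of shape (num_classes, H, W) holding class activation
    map responses. The top-k maps by peak response guide the selection,
    mirroring the class-specific selection described in the abstract.
    """
    H, W = cams.shape[1:]
    # Rank class maps by their peak response and keep the top-k labels.
    order = np.argsort(cams.reshape(len(cams), -1).max(axis=1))[::-1][:k]
    mask = np.zeros((H, W), dtype=bool)
    for idx in order:
        cam = cams[idx].astype(float).copy()
        for _ in range(per_map):
            # Locate the strongest CAM response and mark a small window
            # around it; these regions influence the prediction most.
            y, x = np.unravel_index(np.argmax(cam), cam.shape)
            y0, x0 = max(0, y - window // 2), max(0, x - window // 2)
            mask[y0:y0 + window, x0:x0 + window] = True
            cam[y0:y0 + window, x0:x0 + window] = -np.inf  # suppress peak
    return mask
```

The marked pixels would then be removed and filled by an image inpainting model; only this small fraction of the image is reconstructed, which is what makes the task more tractable than full-image reconstruction.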

RIS (suitable for import to EndNote)

TY - GEN

T1 - CIIDefence: Defeating Adversarial Attacks by Fusing Class-specific Image Inpainting and Image Denoising.

AU - Gupta, Puneet

AU - Rahtu, Esa

N1 - EXT="Gupta, Puneet" jufoid=58047

PY - 2019

Y1 - 2019

N2 - This paper presents a novel approach for protecting deep neural networks from adversarial attacks, i.e., methods that add well-crafted imperceptible modifications to the original inputs such that they are incorrectly classified with high confidence. The proposed defence mechanism is inspired by recent works that mitigate adversarial disturbances by means of image reconstruction and denoising. However, unlike previous works, we apply the reconstruction only to small and carefully selected image areas that are most influential to the current classification outcome. The selection process is guided by the class activation map responses obtained for multiple top-ranking class labels. The same regions are also the most prominent for the adversarial perturbations and hence the most important to purify. The resulting inpainting task is substantially more tractable than full image reconstruction, while still being able to prevent the adversarial attacks. Furthermore, we combine the selective image inpainting with wavelet-based image denoising to produce a non-differentiable layer that prevents the attacker from using gradient backpropagation. Moreover, the proposed nonlinearity cannot be easily approximated with a simple differentiable alternative, as demonstrated in the experiments with the Backward Pass Differentiable Approximation (BPDA) attack. Finally, we experimentally show that the proposed Class-specific Image Inpainting Defence (CIIDefence) is able to withstand several powerful adversarial attacks, including BPDA. The obtained results are consistently better than those of other recent defence approaches.

AB - This paper presents a novel approach for protecting deep neural networks from adversarial attacks, i.e., methods that add well-crafted imperceptible modifications to the original inputs such that they are incorrectly classified with high confidence. The proposed defence mechanism is inspired by recent works that mitigate adversarial disturbances by means of image reconstruction and denoising. However, unlike previous works, we apply the reconstruction only to small and carefully selected image areas that are most influential to the current classification outcome. The selection process is guided by the class activation map responses obtained for multiple top-ranking class labels. The same regions are also the most prominent for the adversarial perturbations and hence the most important to purify. The resulting inpainting task is substantially more tractable than full image reconstruction, while still being able to prevent the adversarial attacks. Furthermore, we combine the selective image inpainting with wavelet-based image denoising to produce a non-differentiable layer that prevents the attacker from using gradient backpropagation. Moreover, the proposed nonlinearity cannot be easily approximated with a simple differentiable alternative, as demonstrated in the experiments with the Backward Pass Differentiable Approximation (BPDA) attack. Finally, we experimentally show that the proposed Class-specific Image Inpainting Defence (CIIDefence) is able to withstand several powerful adversarial attacks, including BPDA. The obtained results are consistently better than those of other recent defence approaches.

U2 - 10.1109/ICCV.2019.00681

DO - 10.1109/ICCV.2019.00681

M3 - Conference contribution

T3 - Proceedings of the IEEE International Conference on Computer Vision

SP - 6708

EP - 6717

BT - 2019 International Conference on Computer Vision, ICCV 2019

PB - IEEE

ER -
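The second ingredient named in the abstract is wavelet-based denoising, whose coefficient thresholding is non-differentiable and so blocks gradient backpropagation through the defence. A self-contained sketch of the idea using a single-level 2-D Haar transform with soft thresholding (the paper does not specify this exact wavelet or implementation; the function and threshold value here are illustrative):

```python
import numpy as np

def haar_denoise(img, threshold=0.1):
    """Single-level 2-D Haar wavelet soft-threshold denoising.

    Thresholding wavelet detail coefficients is a non-differentiable
    operation, which is the property the defence relies on to prevent
    gradient-based attacks from backpropagating through it.
    Expects an image with even height and width.
    """
    a = img.astype(float)

    def split(x, axis):
        # Orthonormal Haar analysis: average and difference of pairs.
        even = x.take(np.arange(0, x.shape[axis], 2), axis)
        odd = x.take(np.arange(1, x.shape[axis], 2), axis)
        return (even + odd) / np.sqrt(2), (even - odd) / np.sqrt(2)

    def merge(lo, hi, axis):
        # Inverse of split: interleave reconstructed pairs.
        out_shape = list(lo.shape)
        out_shape[axis] *= 2
        out = np.empty(out_shape)
        ev = [slice(None)] * lo.ndim; ev[axis] = slice(0, None, 2)
        od = [slice(None)] * lo.ndim; od[axis] = slice(1, None, 2)
        out[tuple(ev)] = (lo + hi) / np.sqrt(2)
        out[tuple(od)] = (lo - hi) / np.sqrt(2)
        return out

    L, H = split(a, 0)
    LL, LH = split(L, 1)
    HL, HH = split(H, 1)
    # Soft thresholding: shrink detail coefficients toward zero.
    soft = lambda c: np.sign(c) * np.maximum(np.abs(c) - threshold, 0.0)
    LH, HL, HH = soft(LH), soft(HL), soft(HH)
    return merge(merge(LL, LH, 1), merge(HL, HH, 1), 0)
```

In the defence pipeline, the denoised image would be fused with the CAM-guided inpainting result before reclassification, yielding the non-differentiable preprocessing layer the abstract refers to.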