Stochastic Depth Boosts Transferability of Non-Targeted and Targeted Adversarial Attacks

Abstract

Deep Neural Networks (DNNs) are widely known to be vulnerable to adversarial examples, which have the surprising property of being transferable (or generalizable) to unknown networks. This property has been exploited in numerous works to achieve transfer-based black-box attacks. In contrast to most existing works, which manipulate the image input to boost transferability, our work manipulates the model architecture. Specifically, we boost transferability with stochastic depth, randomly removing a subset of layers in networks with skip connections. Technically, our approach is mainly inspired by prior work that improves network generalization with stochastic depth. Conceptually, our choice to remove residual modules instead of skip connections is motivated by the finding that the transferability of adversarial examples is positively related to the local linearity of DNNs. The experimental results demonstrate that our approach outperforms existing methods by a large margin, achieving state-of-the-art (SOTA) performance for both non-targeted and targeted attacks. Moreover, our approach is complementary to existing input-manipulation approaches, in combination with which performance can be boosted further.
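The core mechanism can be illustrated with a short PyTorch sketch. This is only a minimal illustration assembled from the abstract's description, not the authors' released code; the class name `StochasticDepthBlock` and the `drop_prob` parameter are illustrative choices. Each residual module is dropped independently with some probability, while the identity skip connection is always kept, so re-sampling the drops at every attack iteration draws gradients from an ensemble of shallower, more linear sub-networks.

```python
import torch
import torch.nn as nn


class StochasticDepthBlock(nn.Module):
    """Wraps a residual branch: with probability `drop_prob`, the residual
    module is removed for this forward pass, while the identity skip
    connection is always kept (hypothetical names, for illustration)."""

    def __init__(self, residual_branch: nn.Module, drop_prob: float = 0.2):
        super().__init__()
        self.residual_branch = residual_branch
        self.drop_prob = drop_prob

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Drop the residual module, not the skip connection:
        # the identity path x always survives.
        if torch.rand(1).item() < self.drop_prob:
            return x
        return x + self.residual_branch(x)


if __name__ == "__main__":
    # Toy usage: wrap a small residual branch and run one forward pass.
    branch = nn.Sequential(nn.Conv2d(3, 3, 3, padding=1), nn.ReLU())
    block = StochasticDepthBlock(branch, drop_prob=0.5)
    x = torch.randn(1, 3, 32, 32)
    y = block(x)  # with probability 0.5, y is simply x
```

In an iterative attack such as I-FGSM, one would wrap each residual block of the surrogate model this way and keep the random drops active while computing gradients, so that each attack step sees a freshly sampled sub-network.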

Publication
In Workshop on Robust and Reliable Machine Learning in the Real World @ ICLR 2021 (RobustML @ ICLR 2021)
Philipp Benz
Research Team Manager @ Deeping Source (Ph.D. @ KAIST)

My research interest is in Deep Learning with a focus on robustness and security.
