Trade-off Between Accuracy, Robustness, and Fairness of Deep Classifiers


Deep Neural Networks (DNNs) have achieved great success, however, their vulnerability to adversarial examples remains an open issue. Among numerous attempts to increase the robustness of deep classifiers, mainly adversarial training has stood the test of time as a useful defense technique. It has been shown that the increased model robustness comes at the cost of decreased accuracy. At the same time, deep classifiers trained on balanced datasets exhibit a class-wise imbalance, which is even more severe for adversarially trained models. This work aims to highlight that the fairness of classifiers should not be neglected when evaluating DNNs. To this end, we propose a class-wise loss re-weighting to obtain more fair standard and robust classifiers. The final results suggest, that fairness as well comes at the cost of accuracy and robustness, suggesting that there exists a triangular trade-off between accuracy, robustness, and fairness.
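The abstract mentions a class-wise loss re-weighting to counteract the class-wise performance imbalance. As a minimal sketch of the general idea (not the paper's exact scheme), one can scale each sample's cross-entropy loss by a weight that grows for classes with lower accuracy; the function names, the exponential weighting rule, and the `beta` parameter below are illustrative assumptions:

```python
import numpy as np

def classwise_weights(per_class_accuracy, beta=1.0):
    # Hypothetical re-weighting rule: classes with lower accuracy receive
    # larger loss weights; weights are normalized to average 1 over classes.
    acc = np.asarray(per_class_accuracy, dtype=float)
    raw = np.exp(beta * (1.0 - acc))  # harder classes -> larger raw weight
    return raw * len(raw) / raw.sum()

def reweighted_cross_entropy(logits, labels, weights):
    # Standard softmax cross-entropy, with each sample's loss scaled by
    # the weight assigned to its ground-truth class.
    logits = np.asarray(logits, dtype=float)
    labels = np.asarray(labels, dtype=int)
    shifted = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    per_sample = -log_probs[np.arange(len(labels)), labels]
    return float(np.mean(weights[labels] * per_sample))
```

In practice such weights could be recomputed each epoch from the current per-class (robust) accuracy, so that training pressure shifts toward the classes the model currently handles worst.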

In Workshop on Adversarial Machine Learning in Real-World Computer Vision Systems and Online Challenges @ CVPR 2021 (AML-CV @ CVPR 2021)
Philipp Benz
Ph.D. Candidate @ Robotics and Computer Vision Lab, KAIST

My research interest is in Deep Learning with a focus on robustness and security.