Deep Neural Networks (DNNs) have achieved great success; however, their vulnerability to adversarial examples remains an open issue. Among the numerous attempts to increase the robustness of deep classifiers, adversarial training is the main defense that has stood the test of time. It has been shown that this increased robustness comes at the cost of decreased accuracy. At the same time, deep classifiers trained on balanced datasets exhibit a class-wise imbalance in accuracy, which is even more severe for adversarially trained models. This work aims to highlight that the fairness of classifiers should not be neglected when evaluating DNNs. To this end, we propose a class-wise loss re-weighting that yields fairer standard and robust classifiers. The results suggest that fairness likewise comes at the cost of accuracy and robustness, pointing to a triangular trade-off between accuracy, robustness, and fairness.
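To make the idea of class-wise loss re-weighting concrete, the following is a minimal NumPy sketch. The specific weighting rule (weights inversely proportional to each class's current accuracy, normalized to mean one) and the function names are illustrative assumptions, not necessarily the exact formulation used in this work.

```python
import numpy as np

def class_weights_from_accuracy(per_class_acc, eps=1e-8):
    # Assumed scheme: weight each class inversely to its current
    # accuracy, so worse-performing classes contribute more to the loss.
    inv = 1.0 / (np.asarray(per_class_acc, dtype=float) + eps)
    # Normalize so the weights average to 1 across classes.
    return inv / inv.sum() * len(inv)

def reweighted_cross_entropy(logits, labels, weights):
    # Numerically stable softmax cross-entropy, scaled per example
    # by the weight of that example's ground-truth class.
    z = logits - logits.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    per_example = -log_probs[np.arange(len(labels)), labels]
    return float((weights[labels] * per_example).mean())

# Toy usage: class 1 is currently less accurate, so it gets a larger weight.
w = class_weights_from_accuracy([0.9, 0.5])
logits = np.array([[2.0, 0.5], [0.3, 1.0]])
labels = np.array([0, 1])
loss = reweighted_cross_entropy(logits, labels, w)
```

Under this scheme the re-weighted loss pushes training effort toward the least robust or least accurate classes, which is one way to trade a little average performance for a more uniform class-wise distribution.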