TY - CPAPER
T1 - On Adversarial Robustness of Demographic Fairness in Face Attribute Recognition
AU - Zeng, Huimin
AU - Yue, Zhenrui
AU - Shang, Lanyu
AU - Zhang, Yang
AU - Wang, Dong
PY - 2023
Y1 - 2023
N2 - Demographic fairness has become a critical objective in the development of modern visual models for identity-sensitive applications, such as face attribute recognition (FAR). While great efforts have been made to improve the fairness of these models, the adversarial robustness of that fairness (i.e., whether the fairness of a model can still be maintained under malicious fairness attacks) has been largely ignored. Therefore, this paper explores the adversarial robustness of demographic fairness in FAR applications from both the attack and the defense perspective. In particular, we first present a novel fairness attack that aims to corrupt the demographic fairness of face attribute classifiers. Next, to mitigate the effect of the fairness attack, we design an efficient defense algorithm called robust-fair training. With this defense, face attribute classifiers learn how to combat the bias introduced by the fairness attack, so that they are not only trained to be fair, but their fairness is also robust. Our extensive experimental results show the effectiveness of both the proposed attack and the proposed defense across various model architectures and FAR applications. We believe our work can serve as a strong baseline for future work on robust-fair AI models.
AB - Demographic fairness has become a critical objective in the development of modern visual models for identity-sensitive applications, such as face attribute recognition (FAR). While great efforts have been made to improve the fairness of these models, the adversarial robustness of that fairness (i.e., whether the fairness of a model can still be maintained under malicious fairness attacks) has been largely ignored. Therefore, this paper explores the adversarial robustness of demographic fairness in FAR applications from both the attack and the defense perspective. In particular, we first present a novel fairness attack that aims to corrupt the demographic fairness of face attribute classifiers. Next, to mitigate the effect of the fairness attack, we design an efficient defense algorithm called robust-fair training. With this defense, face attribute classifiers learn how to combat the bias introduced by the fairness attack, so that they are not only trained to be fair, but their fairness is also robust. Our extensive experimental results show the effectiveness of both the proposed attack and the proposed defense across various model architectures and FAR applications. We believe our work can serve as a strong baseline for future work on robust-fair AI models.
UR - https://www.webofscience.com/api/gateway?GWVersion=2&SrcApp=lmupure2024&SrcAuth=WosAPI&KeyUT=WOS:001202344200059&DestLinkType=FullRecord&DestApp=WOS_CPL
U2 - 10.24963/ijcai.2023/59
DO - 10.24963/ijcai.2023/59
M3 - Conference contribution
SP - 527
EP - 535
BT - Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence
A2 - Elkind, E
ER -