Suman Kumar Mishra, Shan-e-Fatima and Digesh Pandey, Khwaja Moinuddin Chishti Language University, India
Face recognition systems have become integral to identity verification in cloud-based applications ranging from digital banking and e-governance to access control and surveillance. However, the rapid advancement of generative adversarial networks (GANs) has enabled the creation of highly realistic deepfakes, posing severe risks to these systems. This paper investigates adversarial deepfake attacks on face recognition APIs in cloud platforms, focusing on the vulnerabilities that allow malicious actors to bypass authentication and impersonate legitimate users. We analyze state-of-the-art cloud-based face recognition APIs under adversarially crafted deepfake inputs, demonstrating their susceptibility to spoofing and evasion attacks. We present a taxonomy of adversarial deepfake attack vectors, highlighting threats at the input-manipulation, feature-extraction, and decision-making layers. Furthermore, we evaluate the resilience of current liveness detection and anomaly detection mechanisms and show that conventional defenses remain inadequate against evolving AI-driven threats. To address these challenges, we propose a multi-layered defense framework that integrates adversarial training, multimodal biometric fusion, and blockchain-based identity provenance for enhanced robustness. The findings underscore the urgent need for secure, explainable, and adaptive face recognition systems in cloud environments, where adversarial deepfakes present a growing and dynamic cybersecurity threat.
Keywords: Deepfake, GANs, Impersonation Success Rate, Cloud Platform.