Research Article, March 2025

Robust Adversarial Resilience in Deep Neural Architectures via Multiobjective Optimization for Secure Machine Learning Systems

Abstract

The increasing sophistication of adversarial attacks poses a significant threat to the robustness and trustworthiness of deep learning systems, especially in security-critical domains. This paper presents a multiobjective optimization framework that enhances adversarial resilience in deep neural networks (DNNs) by jointly optimizing accuracy, robustness, and computational efficiency. The proposed framework uses Pareto-front-based learning to balance these competing objectives and incorporates gradient masking, feature squeezing, and adversarial retraining to provide a layered defense. Empirical evaluations demonstrate significant improvements in resilience across diverse attacks, including FGSM, PGD, and DeepFool, without compromising model performance.
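As a concrete illustration of the attack models named in the abstract, the Fast Gradient Sign Method (FGSM) perturbs an input along the sign of the loss gradient: x_adv = x + ε · sign(∇ₓL). The sketch below is a hypothetical minimal example, not the paper's implementation: it applies FGSM to a toy linear softmax classifier in NumPy, where the input gradient of the cross-entropy loss has the closed form Wᵀ(p − onehot(y)). All names and parameters are illustrative.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over logits z.
    e = np.exp(z - z.max())
    return e / e.sum()

def fgsm(W, b, x, y, eps):
    """FGSM on a linear softmax classifier.

    For cross-entropy loss L(x) = -log softmax(Wx + b)[y], the input
    gradient is grad_x L = W.T @ (p - onehot(y)), so the attack is
    x_adv = x + eps * sign(grad_x L).
    """
    p = softmax(W @ x + b)
    onehot = np.zeros_like(p)
    onehot[y] = 1.0
    grad_x = W.T @ (p - onehot)
    return x + eps * np.sign(grad_x)

# Toy classifier and input (illustrative values only).
rng = np.random.default_rng(0)
W = rng.standard_normal((3, 4))
b = np.zeros(3)
x = rng.random(4)
y = 0
x_adv = fgsm(W, b, x, y, eps=0.1)
```

Because the perturbation is a fixed-size step in the L∞ ball, every coordinate of `x_adv` differs from `x` by at most ε; for this convex loss the step provably increases the loss, which is why FGSM serves as a fast baseline attack against which robust training is evaluated.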

Keywords

adversarial attacks, deep neural networks, multi-objective optimization, secure machine learning, robustness, FGSM, PGD, DeepFool
Details
Volume 6
Issue 2
Pages 1-6
ISSN 2916-7538