A Quantitative Study of Privacy-Preserving Techniques in Federated Learning for Distributed Systems
Abstract
Federated Learning (FL) has emerged as a transformative approach to collaborative learning in distributed systems: raw data remains decentralized on participating clients while a shared model is trained jointly. However, the model updates exchanged during training can still leak sensitive information, making privacy preservation a central challenge for secure and trustworthy deployments. This study conducts a quantitative analysis of privacy-preserving techniques in FL, categorizing and evaluating mechanisms such as differential privacy, secure multi-party computation, homomorphic encryption, and trusted execution environments. We systematically examine their trade-offs in performance, scalability, and resilience to adversarial attacks. Through a critical synthesis of prior research, the paper provides a comprehensive framework for assessing privacy techniques and offers insight into their applicability across diverse distributed systems. The findings aim to inform researchers and practitioners in selecting appropriate approaches for privacy-preserving federated learning.