Impact of Federated Learning Techniques on Data Privacy and Model Performance in Distributed Artificial Intelligence Networks
Abstract
Federated learning (FL) has emerged as a transformative paradigm in distributed artificial intelligence (AI), enabling collaborative model training across decentralized clients without centralizing their raw data, thereby preserving data privacy. This paper investigates how different FL techniques affect both privacy protection and model performance under varied network conditions. We critically review early foundational research, propose a structured analysis framework, and present findings that compare model accuracy, communication efficiency, and vulnerability to attacks across these techniques. Our study highlights the inherent trade-offs among privacy, computational cost, and convergence speed in federated networks.
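
To make the decentralized training paradigm concrete, the listing below sketches a single federated averaging (FedAvg) round in Python. It is a minimal illustration only: the linear model, synthetic client shards, learning rate, and helper names (local_update, fedavg_round) are assumptions chosen for exposition and do not correspond to the experimental setup evaluated in this paper.

    # Minimal FedAvg sketch: clients train locally on private data and only
    # model weights are shared; data, model, and hyperparameters are illustrative.
    import numpy as np

    rng = np.random.default_rng(0)

    def local_update(weights, X, y, lr=0.01, epochs=5):
        """Run a few epochs of gradient descent on one client's private shard."""
        w = weights.copy()
        for _ in range(epochs):
            grad = 2.0 * X.T @ (X @ w - y) / len(y)  # mean-squared-error gradient
            w -= lr * grad
        return w

    def fedavg_round(global_weights, clients):
        """Aggregate client updates, weighted by local dataset size."""
        total = sum(len(y) for _, y in clients)
        new_w = np.zeros_like(global_weights)
        for X, y in clients:
            w_local = local_update(global_weights, X, y)  # raw data never leaves the client
            new_w += (len(y) / total) * w_local
        return new_w

    # Synthetic federation: three clients, each holding a private data shard.
    d = 5
    true_w = rng.normal(size=d)
    clients = []
    for n in (40, 60, 100):
        X = rng.normal(size=(n, d))
        y = X @ true_w + 0.1 * rng.normal(size=n)
        clients.append((X, y))

    w = np.zeros(d)
    for t in range(20):
        w = fedavg_round(w, clients)
    print("weight error after 20 rounds:", np.linalg.norm(w - true_w))

In this toy setting only the averaged weight vector crosses the network each round, which is the property that motivates the privacy and communication-efficiency trade-offs analyzed in the remainder of the paper.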