Research Article, April 2022

Evaluating the Performance of Federated Learning Models in Non-Independent and Identically Distributed Data Scenarios for Mobile Applications

Abstract

Federated Learning (FL) offers a privacy-preserving approach to training machine learning models across distributed devices without centralized data aggregation. However, its performance degrades significantly in real-world mobile environments, where data are often not independent and identically distributed (non-IID). This paper evaluates the robustness, accuracy, and convergence behavior of state-of-the-art federated learning algorithms under the non-IID data scenarios typical of mobile applications. We investigate several benchmark algorithms on simulated and real-world mobile datasets, examining their sensitivity to varying degrees of statistical heterogeneity. The findings highlight current algorithmic limitations and point toward research directions that could improve FL resilience in mobile ecosystems.
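To make "varying degrees of statistical heterogeneity" concrete, below is a minimal sketch (not taken from the paper) of Dirichlet-based label-skew partitioning, a common way to simulate non-IID clients in FL experiments. The function name and parameters are illustrative; the concentration parameter alpha controls the degree of skew, with smaller values producing more heterogeneous client label distributions.

```python
import numpy as np

def dirichlet_partition(labels, num_clients, alpha, seed=0):
    """Split sample indices across clients with label skew controlled by alpha.

    Smaller alpha -> stronger non-IID skew; larger alpha -> closer to IID.
    """
    rng = np.random.default_rng(seed)
    num_classes = int(labels.max()) + 1
    client_indices = [[] for _ in range(num_clients)]
    for c in range(num_classes):
        idx = np.flatnonzero(labels == c)
        rng.shuffle(idx)
        # Draw per-client proportions for this class from a Dirichlet prior.
        proportions = rng.dirichlet(alpha * np.ones(num_clients))
        # Turn proportions into split points over this class's samples.
        splits = (np.cumsum(proportions)[:-1] * len(idx)).astype(int)
        for client, part in enumerate(np.split(idx, splits)):
            client_indices[client].extend(part.tolist())
    return client_indices

# Example: 1000 samples over 10 classes, 5 clients, strong skew (alpha = 0.1).
labels = np.repeat(np.arange(10), 100)
parts = dirichlet_partition(labels, num_clients=5, alpha=0.1)
```

Sweeping alpha (e.g. 0.1, 0.5, 100) yields partitions ranging from highly skewed to near-IID, which is how sensitivity to heterogeneity is typically measured in studies like this one.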

Keywords

federated learning; non-IID data; mobile applications; distributed learning; statistical heterogeneity; data privacy
Details
Volume 3
Issue 1
Pages 1-8
ISSN 1248-5632