Understanding Federated Learning from IID to Non-IID dataset: An Experimental Study

Authors

  • Jungwon Seo, University of Stavanger
  • Ferhat Ozgur Catak, University of Stavanger
  • Chunming Rong, University of Stavanger

Keywords

Federated Learning, Gradient Descent, Optimization

Abstract

As privacy concerns and data regulations grow, federated learning (FL) has emerged as a promising approach for training machine learning models across decentralized data sources without sharing raw data. However, a significant challenge in FL is that client data are often non-IID (non-independent and identically distributed), leading to reduced performance compared to centralized learning. While many methods have been proposed to address this issue, their underlying mechanisms are often viewed from different perspectives. Through a comprehensive investigation from gradient descent to FL, and from IID to non-IID data settings, we find that inconsistencies in client loss landscapes primarily cause performance degradation in non-IID scenarios. From this understanding, we observe that existing methods can be grouped into two main strategies: (i) adjusting parameter update paths and (ii) modifying client loss landscapes. These findings offer a clear perspective on addressing non-IID challenges in FL and help guide future research in the field.
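The core loop the abstract refers to can be made concrete with a minimal, hypothetical FedAvg-style simulation (not the paper's own code): each client runs local gradient descent on its private data, and the server averages the resulting parameters weighted by client data size. The feature-shifted clients below are an assumed stand-in for a non-IID setting, illustrating how clients can face different local loss landscapes.

```python
# Minimal FedAvg-style sketch (hypothetical, numpy-only illustration).
import numpy as np

def local_gd(w, X, y, lr=0.05, steps=5):
    """Plain gradient descent on one client's local squared loss."""
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def fedavg_round(w_global, clients, lr=0.05, steps=5):
    """One communication round: local training, then size-weighted averaging."""
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    local_ws = [local_gd(w_global.copy(), X, y, lr, steps) for X, y in clients]
    return np.average(local_ws, axis=0, weights=sizes)

rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])

# Non-IID split (assumed setup): each client's features come from a
# different region of input space, so each local loss surface differs.
clients = []
for shift in (-2.0, 0.0, 2.0):
    X = rng.normal(shift, 1.0, size=(50, 2))
    y = X @ w_true + 0.01 * rng.normal(size=50)
    clients.append((X, y))

w = np.zeros(2)
for _ in range(30):
    w = fedavg_round(w, clients)
# After enough rounds, w approaches w_true in this toy setting.
```

In this toy example the client optima still roughly agree, so averaging converges; the paper's point is that with stronger heterogeneity (e.g., disjoint label distributions) the local loss landscapes become inconsistent and the averaged update drifts away from any good global solution.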

Published

2024-11-24

How to Cite

[1] J. Seo, F. O. Catak, and C. Rong, “Understanding Federated Learning from IID to Non-IID dataset: An Experimental Study”, NIKT, no. 1, Nov. 2024.