Tackling Selfish Clients in Federated Learning

Augello, Andrea; Gupta, A.; Lo Re, Giuseppe; Das, S.K.
2024-10-01

Abstract

Federated Learning (FL) is a distributed machine learning paradigm that enables participants to collaboratively train a model without revealing their local data. However, when FL is deployed in the wild, some intelligent clients can deliberately deviate from the standard training process to make the global model inclined toward their local model, thereby prioritizing their local data distribution. We refer to this novel category of misbehaving clients as selfish. In this paper, we propose a Robust aggregation strategy for the FL server to mitigate the effect of Selfishness (in short, RFL-Self). RFL-Self incorporates an innovative method to recover (or estimate) the true updates of selfish clients from the received ones, leveraging a robust statistic (the median of update norms) at every round. By including the recovered updates in aggregation, our strategy offers strong robustness against selfishness. Our experimental results, obtained on the MNIST and CIFAR-10 datasets, demonstrate that just 2% of clients behaving selfishly can decrease the accuracy by up to 36%, and that RFL-Self can mitigate this effect without degrading the global model performance.
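To illustrate how a median-of-norms statistic can be used at aggregation time, the following minimal Python sketch rescales updates whose norm deviates strongly from the round's median norm before averaging. This is an assumption-laden simplification, not the exact RFL-Self recovery procedure from the paper; the function name aggregate_with_norm_rescaling, the deviation_factor threshold, and the rescaling rule are illustrative choices.

import numpy as np

def aggregate_with_norm_rescaling(updates, deviation_factor=1.5):
    """Aggregate one round of client updates, taming outlying norms.

    `updates` is a list of 1-D numpy arrays (flattened model updates).
    The threshold and rescaling rule below are illustrative assumptions,
    not the recovery method defined in the RFL-Self paper.
    """
    norms = np.array([np.linalg.norm(u) for u in updates])
    median_norm = np.median(norms)  # robust statistic over this round's update norms

    recovered = []
    for u, n in zip(updates, norms):
        if n > deviation_factor * median_norm:
            # Suspiciously large update: shrink it back to the median norm.
            recovered.append(u * (median_norm / n))
        else:
            recovered.append(u)

    # FedAvg-style mean over the (possibly rescaled) updates.
    return np.mean(recovered, axis=0)

# Example: three benign updates and one overscaled "selfish" update.
rng = np.random.default_rng(0)
benign = [rng.normal(size=10) for _ in range(3)]
selfish = 10.0 * rng.normal(size=10)
aggregated = aggregate_with_norm_rescaling(benign + [selfish])
print(aggregated.shape)

In this sketch the oversized update contributes to the average only after being scaled down to the median norm, which limits how far a single client can pull the global model in its own direction.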
Oct-2024
9781643685489
Augello, A., Gupta, A., Lo Re, G., Das, S.K. (2024). Tackling Selfish Clients in Federated Learning. In ECAI 2024 27th European Conference on Artificial Intelligence 19–24 October 2024, Santiago de Compostela, Spain [10.3233/faia240702].
Files in this record:
File: FAIA-392-FAIA240702.pdf (open access)
Description: Paper + TOC
Type: Published version
Size: 3.81 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/10447/661493