
Rodio A., Neglia G., Busacca F., Mangione S., Palazzo S., Restuccia F., et al. (2023). Federated Learning with Packet Losses. In Proceedings of the International Symposium on Wireless Personal Multimedia Communications (WPMC 2023) (pp. 142-147) [10.1109/WPMC59531.2023.10338845].

Federated Learning with Packet Losses

Busacca F.; Mangione S.; Tinnirello I.
2023-01-01

Abstract

This paper tackles the problem of training Federated Learning (FL) algorithms over real-world wireless networks with packet losses. Lossy communication channels between the orchestrating server and the clients affect the convergence of FL training as well as the quality of the learned model. Although many previous works investigated how to mitigate the adverse effects of packet losses, this paper demonstrates that FL algorithms over asymmetric lossy channels can still learn the optimal model, the same model that would have been trained in a lossless scenario by classic FL algorithms like FedAvg. Convergence to the optimum only requires slight changes to FedAvg: i) while FedAvg computes a new global model by averaging the received clients' models, our algorithm, UPGA-PL, updates the global model by a pseudo-gradient step; ii) UPGA-PL accounts for the potentially heterogeneous packet losses experienced by the clients to unbias the pseudo-gradient step. Still, UPGA-PL maintains the same computational and communication complexity as FedAvg. In our experiments, UPGA-PL not only outperforms existing state-of-the-art solutions for lossy channels (by more than 5 percentage points on test accuracy) but also matches FedAvg's performance in lossless scenarios after less than 150 communication rounds.
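The aggregation rule described above — a server-side pseudo-gradient step in which each received client update is reweighted to compensate for heterogeneous packet-loss probabilities — can be sketched as follows. This is a minimal illustration, not the authors' exact formulation: the function name `upga_pl_round`, the per-client weights `p_k`, and the assumption that unbiasing divides each received update by its delivery probability `q_k` are all hypothetical reconstructions from the abstract.

```python
import numpy as np

def upga_pl_round(global_model, client_models, weights, succ_probs,
                  received, server_lr=1.0):
    """One hypothetical server round of unbiased pseudo-gradient aggregation.

    Each received client update (w_k - w) is scaled by 1 / q_k, the client's
    assumed packet-delivery probability, so that the aggregated pseudo-gradient
    equals the lossless weighted update in expectation over the loss process.
    """
    pseudo_grad = np.zeros_like(global_model)
    for w_k, p_k, q_k, ok in zip(client_models, weights, succ_probs, received):
        if ok:  # update survived the lossy uplink
            pseudo_grad += (p_k / q_k) * (w_k - global_model)
    # Pseudo-gradient step instead of FedAvg's plain model averaging.
    return global_model + server_lr * pseudo_grad
```

With all delivery probabilities equal to 1, every update received, and `server_lr=1.0`, the step reduces to FedAvg's weighted average of the client models, which is consistent with the claim that only slight changes to FedAvg are needed.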
ISBN: 979-8-3503-0890-7
Files in this item:
Federated_Learning_with_Packet_Losses.pdf — Published version, Adobe PDF, 394.99 kB (restricted access: archive managers only; a copy may be requested)

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this item: https://hdl.handle.net/10447/623134
Citations: Scopus 0; Web of Science 0