Imperial College London

Professor Deniz Gunduz

Faculty of Engineering, Department of Electrical and Electronic Engineering

Professor in Information Processing
 
 
 

Contact

 

+44 (0)20 7594 6218
d.gunduz

 
 

Assistant

 

Ms Joan O'Brien +44 (0)20 7594 6316

 

Location

 

1016, Electrical Engineering, South Kensington Campus



Publications

Citation

BibTeX format

@article{Amiri:2020:10.1109/TSP.2020.2981904,
author = {Amiri, MM and Gunduz, D},
doi = {10.1109/TSP.2020.2981904},
journal = {IEEE Transactions on Signal Processing},
pages = {2155--2169},
title = {Machine learning at the wireless edge: distributed stochastic gradient descent over-the-air},
url = {http://dx.doi.org/10.1109/TSP.2020.2981904},
volume = {68},
year = {2020}
}

RIS format (EndNote, RefMan)

TY  - JOUR
AB - We study federated machine learning (ML) at the wireless edge, where power- and bandwidth-limited wireless devices with local datasets carry out distributed stochastic gradient descent (DSGD) with the help of a parameter server (PS). Standard approaches assume separate computation and communication, where local gradient estimates are compressed and transmitted to the PS over orthogonal links. Following this digital approach, we introduce D-DSGD, in which the wireless devices employ gradient quantization and error accumulation, and transmit their gradient estimates to the PS over a multiple access channel (MAC). We then introduce a novel analog scheme, called A-DSGD, which exploits the additive nature of the wireless MAC for over-the-air gradient computation, and provide a convergence analysis for this approach. In A-DSGD, the devices first sparsify their gradient estimates, and then project them to a lower-dimensional space imposed by the available channel bandwidth. These projections are sent directly over the MAC without employing any digital code. Numerical results show that A-DSGD converges faster than D-DSGD thanks to its more efficient use of the limited bandwidth and the natural alignment of the gradient estimates over the channel. The improvement is particularly compelling in the low-power and low-bandwidth regimes. We also illustrate for a classification problem that A-DSGD is more robust to bias in the data distribution across devices, while D-DSGD significantly outperforms other digital schemes in the literature. We also observe that both D-DSGD and A-DSGD perform better as the number of devices increases, demonstrating their ability to harness the computation power of edge devices.
AU - Amiri,MM
AU - Gunduz,D
DO - 10.1109/TSP.2020.2981904
EP - 2169
PY - 2020///
SN - 1053-587X
SP - 2155
TI - Machine learning at the wireless edge: distributed stochastic gradient descent over-the-air
T2 - IEEE Transactions on Signal Processing
UR - http://dx.doi.org/10.1109/TSP.2020.2981904
UR - http://gateway.webofknowledge.com/gateway/Gateway.cgi?GWVersion=2&SrcApp=PARTNER_APP&SrcAuth=LinksAMR&KeyUT=WOS:000531398900003&DestLinkType=FullRecord&DestApp=ALL_WOS&UsrCustomerID=1ba7043ffcc86c417c072aa74d649202
UR - https://ieeexplore.ieee.org/document/9042352
UR - http://hdl.handle.net/10044/1/80615
VL - 68
ER -
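
The over-the-air aggregation idea at the core of A-DSGD is easy to illustrate. Below is a minimal NumPy sketch of a single A-DSGD communication round under strong simplifying assumptions: a noiseless MAC, top-k sparsification, a shared Gaussian projection, and a plain minimum-norm least-squares recovery standing in for the paper's sparse reconstruction step; the error-accumulation mechanism is omitted. All names and parameter values here are illustrative, not taken from the paper.

import numpy as np

rng = np.random.default_rng(0)

d = 1000         # gradient dimension
k = 50           # nonzero entries kept per device (top-k sparsification)
m = 200          # channel bandwidth: analog channel uses per round (m < d)
n_devices = 10

def sparsify(g, k):
    # Keep the k largest-magnitude entries of g; zero out the rest.
    out = np.zeros_like(g)
    idx = np.argsort(np.abs(g))[-k:]
    out[idx] = g[idx]
    return out

# Shared random projection mapping d-dim sparse gradients into the
# m-dim space imposed by the available channel bandwidth.
A = rng.standard_normal((m, d)) / np.sqrt(m)

# Local gradient estimates at each device (random placeholders here).
local_grads = [rng.standard_normal(d) for _ in range(n_devices)]

# Each device sparsifies its gradient, projects it, and transmits
# the resulting m-dim analog signal without any digital coding.
signals = [A @ sparsify(g, k) for g in local_grads]

# The additive MAC superposes the signals "over the air": the PS
# observes a single m-dim sum, not n_devices separate vectors.
received = np.sum(signals, axis=0)  # channel noise omitted

# The PS estimates the summed sparse gradient from the projection.
# Minimum-norm least squares stands in for sparse recovery here.
recovered, *_ = np.linalg.lstsq(A, received, rcond=None)
avg_grad_estimate = recovered / n_devices

The single superposed observation per round is what gives the analog scheme its bandwidth efficiency: the channel itself performs the summation that a digital scheme would have to reconstruct from n_devices individually decoded transmissions.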