Imperial College London

Professor Deniz Gunduz

Faculty of Engineering, Department of Electrical and Electronic Engineering

Professor in Information Processing

Contact

 

+44 (0)20 7594 6218, d.gunduz

 
 

Assistant

 

Ms Joan O'Brien +44 (0)20 7594 6316

 

Location

 

1016, Electrical Engineering, South Kensington Campus



Publications

Citation

BibTeX format

@article{Temesgene:2021:10.1109/tsusc.2020.3025139,
author = {Temesgene, DA and Miozzo, M and Gunduz, D and Dini, P},
doi = {10.1109/tsusc.2020.3025139},
journal = {IEEE Transactions on Sustainable Computing},
pages = {626--640},
title = {Distributed deep reinforcement learning for functional split control in energy harvesting virtualized small cells},
url = {http://dx.doi.org/10.1109/tsusc.2020.3025139},
volume = {6},
year = {2021}
}

RIS format (EndNote, RefMan)

TY  - JOUR
AB - To meet the growing quest for enhanced network capacity, mobile network operators (MNOs) are deploying dense infrastructures of small cells. This, in turn, increases the power consumption of mobile networks, thus impacting the environment. As a result, we have seen a recent trend of powering mobile networks with harvested ambient energy to achieve both environmental and cost benefits. In this paper, we consider a network of virtualized small cells (vSCs) powered by energy harvesters and equipped with rechargeable batteries, which can opportunistically offload baseband (BB) functions to a grid-connected edge server depending on their energy availability. We formulate the corresponding grid energy and traffic drop rate minimization problem, and propose a distributed deep reinforcement learning (DDRL) solution. Coordination among vSCs is enabled via the exchange of battery state information. The evaluation of the network performance in terms of grid energy consumption and traffic drop rate confirms that enabling coordination among the vSCs via knowledge exchange achieves a performance close to the optimal. Numerical results also confirm that the proposed DDRL solution provides higher network performance, better adaptation to the changing environment, and higher cost savings with respect to a tabular multi-agent reinforcement learning (MRL) solution used as a benchmark.
AU - Temesgene,DA
AU - Miozzo,M
AU - Gunduz,D
AU - Dini,P
DO - 10.1109/tsusc.2020.3025139
EP - 640
PY - 2021///
SN - 2377-3782
SP - 626
TI - Distributed deep reinforcement learning for functional split control in energy harvesting virtualized small cells
T2 - IEEE Transactions on Sustainable Computing
UR - http://dx.doi.org/10.1109/tsusc.2020.3025139
UR - https://ieeexplore.ieee.org/document/9200734
UR - http://hdl.handle.net/10044/1/86605
VL - 6
ER -
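The abstract describes vSC agents that each learn a functional-split policy while coordinating through exchanged battery-state information. The toy sketch below illustrates that coordination pattern only; it is not the paper's architecture. A single linear layer stands in for the deep Q-network, and the reward, action space, and battery dynamics are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

class VSCAgent:
    """One virtualized small cell: chooses a functional-split action from
    its own battery level plus the exchanged battery levels of its peers."""

    def __init__(self, n_peers, n_actions=3, lr=0.01):
        self.n_in = 1 + n_peers               # own battery + peer batteries
        self.n_actions = n_actions
        # toy stand-in for the deep Q-network: one linear layer
        self.W = rng.normal(0, 0.1, (n_actions, self.n_in))
        self.lr = lr

    def act(self, own_batt, peer_batts, eps=0.1):
        x = np.concatenate(([own_batt], peer_batts))
        if rng.random() < eps:                # epsilon-greedy exploration
            return int(rng.integers(self.n_actions)), x
        return int(np.argmax(self.W @ x)), x

    def update(self, x, action, reward, gamma=0.9):
        # one-step Q-learning update on the linear approximator
        q = self.W @ x
        target = reward + gamma * q.max()
        self.W[action] += self.lr * (target - q[action]) * x

# Toy rollout: 3 vSCs with randomly drifting batteries. The invented
# reward favours offloading (action 0) when the local battery is low.
agents = [VSCAgent(n_peers=2) for _ in range(3)]
batts = rng.random(3)
for _ in range(200):
    for i, ag in enumerate(agents):
        peers = np.delete(batts, i)           # battery-state exchange
        a, x = ag.act(batts[i], peers)
        reward = 1.0 if (batts[i] < 0.3) == (a == 0) else -0.1
        ag.update(x, a, reward)
    batts = np.clip(batts + rng.normal(0, 0.1, 3), 0.0, 1.0)
```

The point of the sketch is that each agent's input vector contains the peers' battery levels, so coordination emerges purely from shared state rather than a centralized controller, mirroring the knowledge-exchange idea in the abstract.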