Imperial College London

Professor Kin Leung

Faculty of Engineering, Department of Electrical and Electronic Engineering

Tanaka Chair in Internet Technology
 
 
 

Contact

 

+44 (0)20 7594 6238 | kin.leung | Website

 
 

Assistant

 

Miss Vanessa Rodriguez-Gonzalez +44 (0)20 7594 6267

 

Location

 

810a, Electrical Engineering, South Kensington Campus



Publications

Citation

BibTeX format

@inproceedings{Zhang:2019:10.1109/ICNP.2019.8888034,
author = {Zhang, Z and Ma, L and Poularakis, K and Leung, KK and Tucker, J and Swami, A},
booktitle = {2019 IEEE 27th International Conference on Network Protocols (ICNP)},
doi = {10.1109/ICNP.2019.8888034},
pages = {1--11},
publisher = {IEEE COMPUTER SOC},
title = {MACS: deep reinforcement learning based SDN controller synchronization policy design},
url = {http://dx.doi.org/10.1109/ICNP.2019.8888034},
year = {2019}
}

RIS format (EndNote, RefMan)

TY  - CPAPER
AB - In distributed software-defined networks (SDN), multiple physical SDN controllers, each managing a network domain, are implemented to balance centralised control, scalability, and reliability requirements. In such networking paradigms, controllers synchronize with each other in an attempt to maintain a logically centralised network view. Despite the presence of various design proposals for distributed SDN controller architectures, most existing works aim only at eliminating anomalies arising from inconsistencies in different controllers' network views. However, the performance aspect of controller synchronization designs with respect to given SDN applications is generally missing. To fill this gap, we formulate the controller synchronization problem as a Markov decision process (MDP) and apply reinforcement learning techniques combined with deep neural networks (DNNs) to train a smart, scalable, and fine-grained controller synchronization policy, called the Multi-Armed Cooperative Synchronization (MACS), whose goal is to maximise the performance enhancements brought by controller synchronizations. Evaluation results confirm the DNN's exceptional ability to abstract latent patterns in the distributed SDN environment, rendering significant superiority to the MACS-based synchronization policy, which achieves 56% and 30% performance improvements over ONOS and greedy SDN controller synchronization heuristics, respectively.
AU - Zhang,Z
AU - Ma,L
AU - Poularakis,K
AU - Leung,KK
AU - Tucker,J
AU - Swami,A
DO - 10.1109/ICNP.2019.8888034
EP - 11
PB - IEEE COMPUTER SOC
PY - 2019///
SN - 1092-1648
SP - 1
TI - MACS: deep reinforcement learning based SDN controller synchronization policy design
UR - http://dx.doi.org/10.1109/ICNP.2019.8888034
UR - http://gateway.webofknowledge.com/gateway/Gateway.cgi?GWVersion=2&SrcApp=PARTNER_APP&SrcAuth=LinksAMR&KeyUT=WOS:000556143800004&DestLinkType=FullRecord&DestApp=ALL_WOS&UsrCustomerID=1ba7043ffcc86c417c072aa74d649202
UR - https://ieeexplore.ieee.org/abstract/document/8888034
UR - http://hdl.handle.net/10044/1/85424
ER -
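The abstract's core idea, formulating controller synchronization as an MDP and learning a synchronization policy by reinforcement learning, can be illustrated with a toy sketch. Everything below is a hypothetical simplification, not the paper's method: the environment, the per-domain "staleness" state, and the reward are invented for illustration, and tabular Q-learning stands in for the paper's deep neural network so the example runs with the standard library alone.

```python
import random
from collections import defaultdict

# Hypothetical toy model: one controller picks which of N_DOMAINS peer
# domains to synchronize with at each step. The state is each domain's
# "staleness" (steps since its last sync, capped), and the reward for a
# sync is the staleness it removes.
N_DOMAINS = 3
MAX_STALENESS = 5

def step(state, action):
    """Sync domain `action`: earn its staleness as reward, reset it to 0,
    and age every other domain by one step (up to the cap)."""
    reward = state[action]
    nxt = tuple(0 if i == action else min(s + 1, MAX_STALENESS)
                for i, s in enumerate(state))
    return nxt, reward

# Tabular Q-learning in place of the paper's DNN-based policy.
Q = defaultdict(float)
ALPHA, GAMMA, EPS = 0.1, 0.5, 0.2
random.seed(0)

state = (0,) * N_DOMAINS
for _ in range(20000):
    if random.random() < EPS:                       # explore
        action = random.randrange(N_DOMAINS)
    else:                                           # exploit
        action = max(range(N_DOMAINS), key=lambda a: Q[(state, a)])
    nxt, reward = step(state, action)
    target = reward + GAMMA * max(Q[(nxt, a)] for a in range(N_DOMAINS))
    Q[(state, action)] += ALPHA * (target - Q[(state, action)])
    state = nxt

# Evaluate the learned greedy policy over a fresh rollout.
state, total = (0,) * N_DOMAINS, 0
for _ in range(1000):
    action = max(range(N_DOMAINS), key=lambda a: Q[(state, a)])
    state, reward = step(state, action)
    total += reward
print(f"average reward per step: {total / 1000:.2f}")
```

In this toy setup the best stationary behaviour is to always sync the stalest domain, which yields an average reward of 2 per step with three domains; the learned greedy policy should land near that value. The paper replaces the lookup table with a DNN precisely because the real state space (network views across domains) is far too large to enumerate.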