Imperial College London

Professor Kin Leung

Faculty of Engineering, Department of Electrical and Electronic Engineering

Tanaka Chair in Internet Technology
 
 
 

Contact

 

+44 (0)20 7594 6238 | kin.leung | Website

 
 

Assistant

 

Miss Vanessa Rodriguez-Gonzalez +44 (0)20 7594 6267

 

Location

 

810a, Electrical Engineering, South Kensington Campus



Publications


283 results found

Zhang Z, Ma L, Poularakis K, Leung K, Wu L et al., 2019, DQ Scheduler: deep reinforcement learning based controller synchronization in distributed SDN, IEEE ICC 2019, Publisher: Institute of Electrical and Electronics Engineers, ISSN: 0536-1486

In distributed software-defined networks (SDN), multiple physical SDN controllers, each managing a network domain, are implemented to balance centralized control, scalability and reliability requirements. In such a networking paradigm, controllers synchronize with each other to maintain a logically centralized network view. Despite various proposals of distributed SDN controller architectures, most existing works only assume that such a logically centralized network view can be achieved with some synchronization designs, but the question of how exactly controllers should synchronize with each other to maximize the benefits of synchronization under the eventual consistency assumptions is largely overlooked. To this end, we formulate the controller synchronization problem as a Markov Decision Process (MDP) and apply reinforcement learning techniques combined with a deep neural network to train a smart controller synchronization policy, which we call the Deep-Q (DQ) Scheduler. Evaluation results show that the DQ Scheduler outperforms the anti-entropy algorithm implemented in the ONOS controller by up to 95.2% for inter-domain routing tasks.

Conference paper
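As a rough illustration of the approach described in the abstract above, the sketch below trains a tiny Deep-Q network that repeatedly picks which domain's view to synchronize next. The state encoding (per-domain view staleness), the reward (a stand-in for routing-cost reduction), and all constants are illustrative assumptions, not the paper's actual design.

```python
# Illustrative sketch only: a minimal DQN that learns which domain's view
# to refresh next. State = per-domain view staleness (assumed encoding);
# reward = staleness removed (a stand-in for routing-cost reduction).
import random
import torch
import torch.nn as nn

N_DOMAINS = 4                                  # assumed number of peer domains
q_net = nn.Sequential(nn.Linear(N_DOMAINS, 32), nn.ReLU(),
                      nn.Linear(32, N_DOMAINS))
opt = torch.optim.Adam(q_net.parameters(), lr=1e-3)
gamma, eps = 0.9, 0.1

def sync_step(state, action):
    """Toy environment: all views age by one epoch; the synced one resets."""
    reward = float(state[action])              # refreshing stale views pays off
    nxt = state + 1.0
    nxt[action] = 0.0
    return nxt, reward

state = torch.zeros(N_DOMAINS)
for t in range(500):
    if random.random() < eps:                  # epsilon-greedy exploration
        action = random.randrange(N_DOMAINS)
    else:
        action = int(q_net(state).argmax())
    nxt, reward = sync_step(state, action)
    with torch.no_grad():
        target = reward + gamma * q_net(nxt).max()
    loss = (q_net(state)[action] - target) ** 2
    opt.zero_grad()
    loss.backward()
    opt.step()
    state = nxt
```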

Poularakis K, Qin Q, Ma L, Kompella S, Leung K, Tassiulas L et al., 2019, Learning the optimal synchronization rates in distributed SDN control architectures, IEEE Infocom, Publisher: Institute of Electrical and Electronics Engineers, Pages: 1099-1107, ISSN: 0743-166X

Since the early development of Software-Defined Network (SDN) technology, researchers have been concerned with the idea of physical distribution of the control plane to address scalability and reliability challenges of centralized designs. However, having multiple controllers managing the network while maintaining a “logically-centralized” network view brings additional challenges. One such challenge is how to coordinate the management decisions made by the controllers, which is usually achieved by disseminating synchronization messages in a peer-to-peer manner. While there exist many architectures and protocols to ensure synchronized network views and drive coordination among controllers, there is no systematic methodology for deciding the optimal frequency (or rate) of message dissemination. In this paper, we fill this gap by introducing the SDN synchronization problem: how often to synchronize the network views for each controller pair. We consider two different objectives; first, the maximization of the number of controller pairs that are synchronized, and second, the maximization of the performance of applications of interest which may be affected by the synchronization rate. Using techniques from knapsack optimization and learning theory, we derive algorithms with provable performance guarantees for each objective. Evaluation results demonstrate significant benefits over baseline schemes that synchronize all controller pairs at equal rate.

Conference paper
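To make the first objective concrete: if each controller pair needs a known minimum message rate to count as synchronized, maximizing the number of synchronized pairs under a total rate budget is a knapsack with unit profits, which a cheapest-first greedy solves exactly. The sketch below works under that assumption; the pair costs and budget are made up, and the second (application-driven) objective needs the learning machinery described in the abstract.

```python
def max_synchronized_pairs(rate_cost, budget):
    """Cheapest-first greedy: optimal when every pair counts equally."""
    chosen, used = [], 0.0
    for pair, cost in sorted(rate_cost.items(), key=lambda kv: kv[1]):
        if used + cost <= budget:
            chosen.append(pair)
            used += cost
    return chosen

# Assumed per-pair sync rates (messages/sec) and a total rate budget.
costs = {("C1", "C2"): 2.0, ("C1", "C3"): 5.0, ("C2", "C3"): 1.0,
         ("C2", "C4"): 4.0, ("C3", "C4"): 3.0}
print(max_synchronized_pairs(costs, budget=7.0))
# -> [('C2', 'C3'), ('C1', 'C2'), ('C3', 'C4')]
```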

Wang S, Urgaonkar R, Zafer M, He T, Chan K, Leung KK et al., 2019, Dynamic Service Migration in Mobile Edge Computing Based on Markov Decision Process, IEEE/ACM Transactions on Networking, Vol: 27, Pages: 1272-1288, ISSN: 1063-6692

Journal article

Wang S, Tuor T, Salonidis T, Leung KK, Makaya C, He T, Chan K et al., 2019, Adaptive federated learning in resource constrained edge computing systems, IEEE Journal on Selected Areas in Communications, Vol: 37, Pages: 1205-1221, ISSN: 0733-8716

Emerging technologies and applications including Internet of Things (IoT), social networking, and crowd-sourcing generate large amounts of data at the network edge. Machine learning models are often built from the collected data, to enable the detection, classification, and prediction of future events. Due to bandwidth, storage, and privacy concerns, it is often impractical to send all the data to a centralized location. In this paper, we consider the problem of learning model parameters from data distributed across multiple edge nodes, without sending raw data to a centralized place. Our focus is on a generic class of machine learning models that are trained using gradient-descent based approaches. We analyze the convergence bound of distributed gradient descent from a theoretical point of view, based on which we propose a control algorithm that determines the best trade-off between local update and global parameter aggregation to minimize the loss function under a given resource budget. The performance of the proposed algorithm is evaluated via extensive experiments with real datasets, both on a networked prototype system and in a larger-scale simulated environment. The experimentation results show that our proposed approach performs near to the optimum with various machine learning models and different data distributions.

Journal article
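The core control knob in the abstract above is the number of local gradient steps taken between global aggregations. The sketch below shows that loop on a toy two-node linear-regression problem with a fixed number of local steps; the data, learning rate, and fixed TAU are illustrative assumptions, whereas the paper's contribution is choosing that knob adaptively from a convergence bound and a resource budget.

```python
# Illustrative sketch: distributed gradient descent with TAU local steps per
# aggregation round. Data, learning rate, and the fixed TAU are assumed.
import numpy as np

rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])
nodes = []                                   # two edge nodes, local data only
for _ in range(2):
    X = rng.normal(size=(50, 2))
    y = X @ w_true + 0.1 * rng.normal(size=50)
    nodes.append((X, y))

def local_steps(w, X, y, tau, lr=0.05):
    """tau full-gradient steps on one node's mean-squared loss."""
    for _ in range(tau):
        w = w - lr * 2 * X.T @ (X @ w - y) / len(y)
    return w

w = np.zeros(2)
TAU = 5                                      # the trade-off the paper tunes
for rnd in range(20):                        # global aggregation rounds
    w = np.mean([local_steps(w, X, y, TAU) for X, y in nodes], axis=0)
print(w)                                     # approaches w_true = [2, -1]
```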

Zafari F, Gkelias A, Leung KK, 2019, A survey of indoor localization systems and technologies, Communications Surveys and Tutorials, Vol: 21, Pages: 2568-2599, ISSN: 1553-877X

Indoor localization has recently witnessed an increase in interest, due to the potential wide range of services it can provide by leveraging Internet of Things (IoT), and ubiquitous connectivity. Different techniques, wireless technologies and mechanisms have been proposed in the literature to provide indoor localization services in order to improve the services provided to the users. However, there is a lack of an up-to-date survey paper that incorporates some of the recently proposed accurate and reliable localization systems. In this paper, we aim to provide a detailed survey of different indoor localization techniques such as Angle of Arrival (AoA), Time of Flight (ToF), Return Time of Flight (RTOF), and Received Signal Strength (RSS); based on technologies such as WiFi, Radio Frequency Identification Device (RFID), Ultra Wideband (UWB), Bluetooth and systems that have been proposed in the literature. The paper primarily discusses localization and positioning of human users and their devices. We highlight the strengths of the existing systems proposed in the literature. In contrast with the existing surveys, we also evaluate different systems from the perspective of energy efficiency, availability, cost, reception range, latency, scalability and tracking accuracy. Rather than comparing the technologies or techniques, we compare the localization systems and summarize their working principle. We also discuss remaining challenges to accurate indoor localization.

Journal article
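As a small worked example of one technique family the survey above covers, the sketch below converts RSS readings to distances with the log-distance path-loss model and then trilaterates by linearized least squares. The anchor layout, reference power P0, and path-loss exponent are illustrative assumptions.

```python
# Illustrative sketch: RSS ranging + least-squares trilateration.
import numpy as np

P0, N_EXP = -40.0, 2.0             # RSS at 1 m (dBm) and path-loss exponent (assumed)

def rss_to_distance(rss_dbm):
    """Invert RSS(d) = P0 - 10 * n * log10(d)."""
    return 10 ** ((P0 - rss_dbm) / (10 * N_EXP))

def trilaterate(anchors, dists):
    """Linearize by subtracting the last anchor's circle equation."""
    A = 2 * (anchors[:-1] - anchors[-1])
    b = (dists[-1] ** 2 - dists[:-1] ** 2
         + np.sum(anchors[:-1] ** 2, axis=1) - np.sum(anchors[-1] ** 2))
    return np.linalg.lstsq(A, b, rcond=None)[0]

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
target = np.array([3.0, 4.0])      # unknown position to recover
rss = P0 - 10 * N_EXP * np.log10(np.linalg.norm(anchors - target, axis=1))
print(trilaterate(anchors, rss_to_distance(rss)))   # ~ [3, 4]
```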

Lee W-H, Ko BJ, Wang S, Liu C, Leung KK et al., 2019, Exact incremental and decremental learning for LS-SVM, 26th IEEE International Conference on Image Processing (ICIP), Publisher: IEEE, Pages: 2334-2338, ISSN: 1522-4880

Conference paper

Conway-Jones D, Tuor T, Wang S, Leung KK et al., 2019, Demonstration of Federated Learning in a Resource-Constrained Networked Environment, 5th IEEE International Conference on Smart Computing (SMARTCOMP), Publisher: IEEE, Pages: 484-486

Conference paper

Zafari F, Leung KK, Towsley D, Basu P, Swami A et al., 2019, A Game-Theoretic Framework for Resource Sharing in Clouds, Publisher: IEEE

Working paper

Panigrahy NK, Basu P, Nain P, Towsley D, Swami A, Chan KS, Leung KK et al., 2019, Resource Allocation in One-dimensional Distributed Service Networks, 2019 IEEE 27th International Symposium on Modeling, Analysis, and Simulation of Computer and Telecommunication Systems (MASCOTS 2019), Pages: 14-26, ISSN: 1526-7539

Conference paper

Poularakis K, Qin Q, Marcus KM, Chan KS, Leung KK, Tassiulas L et al., 2019, Hybrid SDN Control in Mobile Ad Hoc Networks, 5th IEEE International Conference on Smart Computing (SMARTCOMP), Publisher: IEEE, Pages: 110-114

Conference paper

Leung KK, Wang S, Tuor T, Salonidis T, Makaya C, He T, Chan K et al., 2018, When edge meets learning: adaptive control for resource-constrained distributed machine learning, IEEE Infocom 2018, Publisher: IEEE

Emerging technologies and applications including Internet of Things (IoT), social networking, and crowd-sourcing generate large amounts of data at the network edge. Machine learning models are often built from the collected data, to enable the detection, classification, and prediction of future events. Due to bandwidth, storage, and privacy concerns, it is often impractical to send all the data to a centralized location. In this paper, we consider the problem of learning model parameters from data distributed across multiple edge nodes, without sending raw data to a centralized place. Our focus is on a generic class of machine learning models that are trained using gradient-descent based approaches. We analyze the convergence rate of distributed gradient descent from a theoretical point of view, based on which we propose a control algorithm that determines the best trade-off between local update and global parameter aggregation to minimize the loss function under a given resource budget. The performance of the proposed algorithm is evaluated via extensive experiments with real datasets, both on a networked prototype system and in a larger-scale simulated environment. The experimentation results show that our proposed approach performs near to the optimum with various machine learning models and different data distributions.

Conference paper

Tuor T, Wang S, Salonidis T, Ko BJ, Leung KK et al., 2018, Demo Abstract: Distributed Machine Learning at Resource-Limited Edge Nodes, IEEE Conference on Computer Communications (IEEE INFOCOM), Publisher: IEEE, ISSN: 2159-4228

Conference paper

Ko BJ, Leung KK, Salonidis T, 2018, Machine learning for dynamic resource allocation at network edge, 9th Conference on Ground/Air Multisensor Interoperability, Integration, and Networking for Persistent ISR part of the SPIE Defense + Commercial Sensing Conference, Publisher: Proceedings of SPIE, ISSN: 0277-786X

With the proliferation of smart devices, it is increasingly important to exploit their computing, networking, and storage resources for executing various computing tasks at scale at mobile network edges, bringing many benefits such as better response time, network bandwidth savings, and improved data privacy and security. A key component in enabling such distributed edge computing is a mechanism that can flexibly and dynamically manage edge resources for running various military and commercial applications in a manner adaptive to the fluctuating demands and resource availability. We present methods and an architecture for the edge resource management based on machine learning techniques. A collaborative filtering approach combined with deep learning is proposed as a means to build the predictive model for applications’ performance on resources from previous observations, and an online resource allocation architecture utilizing the predictive model is presented. We also identify relevant research topics for further investigation.

Conference paper
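The abstract above pairs collaborative filtering with learned models to predict how an application would perform on a resource it has never run on. A minimal sketch of that collaborative-filtering idea follows, factorizing a partially observed (application x resource) performance matrix; the rank, data, and SGD settings are illustrative assumptions, and the paper combines this with deep learning rather than plain matrix factorization.

```python
# Illustrative sketch: predict unseen (application, resource) performance by
# low-rank factorization of the observed cells. All settings are assumed.
import numpy as np

rng = np.random.default_rng(1)
n_apps, n_res, rank = 6, 5, 2
truth = rng.uniform(size=(n_apps, rank)) @ rng.uniform(size=(rank, n_res))
mask = rng.uniform(size=truth.shape) < 0.6        # which cells were observed

U = rng.normal(0.1, 0.01, (n_apps, rank))         # latent app factors
V = rng.normal(0.1, 0.01, (n_res, rank))          # latent resource factors
lr, lam = 0.1, 0.01
for _ in range(5000):                             # SGD over observed cells
    i, j = rng.integers(n_apps), rng.integers(n_res)
    if not mask[i, j]:
        continue
    err = U[i] @ V[j] - truth[i, j]
    U[i], V[j] = (U[i] - lr * (err * V[j] + lam * U[i]),
                  V[j] - lr * (err * U[i] + lam * V[j]))

pred = U @ V.T
print(np.abs(pred - truth)[~mask].mean())         # error on never-seen pairs
```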

Ma L, Zhang Z, Ko B, Srivatsa M, Leung KK et al., 2018, Resource management in distributed SDN using reinforcement learning, 9th Conference on Ground/Air Multisensor Interoperability, Integration, and Networking for Persistent ISR part of the SPIE Defense + Commercial Sensing Conference, Publisher: Proceedings of SPIE, ISSN: 0277-786X

Distributed software-defined networking (SDN), which consists of multiple inter-connected network domains, each managed by one SDN controller, is an emerging networking architecture that offers balanced centralized/distributed control. Under such a networking paradigm, resource management among various domains (e.g., optimal resource allocation) can be extremely challenging. This is because many tasks posted to the network require resources (e.g., CPU, memory, bandwidth, etc.) from different domains, where cross-domain resources are correlated, e.g., their feasibility depends on the existence of a reliable communication channel connecting them. To address this issue, we employ the reinforcement learning framework, aiming to automate this resource management and allocation process by proactive learning and interactions. Specifically, we model this issue as an MDP (Markov Decision Process) problem with different types of reward functions, where our objective is to minimize the average task completion time. We investigate the scenario where the resource status among controllers is fully synchronized. In this scenario, the SDN controller has complete knowledge of the resource status of all domains, i.e., resource changes upon any policies are directly observable by controllers, for which a Q-learning-based strategy is proposed to approach the optimal solution.

Conference paper
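A toy version of the Q-learning strategy mentioned in the abstract above appears below: a fully synchronized controller routes each arriving task to one of several domains, learning from negative completion times. The state (per-domain queue lengths), the service model, and every constant are illustrative assumptions rather than the paper's exact MDP.

```python
# Illustrative sketch: tabular Q-learning for cross-domain task routing.
import random
from collections import defaultdict

N_DOMAINS, CAP = 3, 4
Q = defaultdict(float)                        # Q[(state, action)]
alpha, gamma, eps = 0.1, 0.95, 0.1

state = (0,) * N_DOMAINS                      # per-domain queue lengths
for t in range(50000):
    if random.random() < eps:
        a = random.randrange(N_DOMAINS)
    else:
        a = max(range(N_DOMAINS), key=lambda d: Q[(state, d)])
    reward = -(1.0 + state[a])                # completion time grows with load
    loads = list(state)
    loads[a] = min(loads[a] + 1, CAP)         # task joins chosen domain's queue
    # each domain finishes one queued task with probability 1/2 this epoch
    nxt = tuple(max(l - (random.random() < 0.5), 0) for l in loads)
    best_next = max(Q[(nxt, d)] for d in range(N_DOMAINS))
    Q[(state, a)] += alpha * (reward + gamma * best_next - Q[(state, a)])
    state = nxt

# Learned policy for an unbalanced state: expect the least-loaded domain (1).
print(max(range(N_DOMAINS), key=lambda d: Q[((2, 0, 1), d)]))
```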

Tuor T, Wang S, Leung KK, Ko BJ et al., 2018, Understanding information leakage of distributed inference with deep neural networks: Overview of information theoretic approach and initial results, 9th Conference on Ground/Air Multisensor Interoperability, Integration, and Networking for Persistent ISR part of the SPIE Defense + Commercial Sensing Conference, Publisher: Proceedings of SPIE, ISSN: 0277-786X

With the emergence of Internet of Things (IoT) and edge computing applications, data is often generated by sensors and end users at the network edge, and decisions are made using these collected data. Edge devices often require cloud services in order to perform intensive inference tasks. Consequently, the inference of a deep neural network (DNN) model is often partitioned between the edge and the cloud. In this case, the edge device performs inference up to an intermediate layer of the DNN, and offloads the output features to the cloud for the inference of the remainder of the network. Partitioning a DNN can help to improve energy efficiency but also raises some privacy concerns. The cloud platform can recover part of the raw data using intermediate results of the inference task. Recently, studies have also quantified an information theoretic trade-off between compression and prediction in DNNs. In this paper, we conduct a simple experiment to understand to what extent it is possible to reconstruct the raw data given the output of an intermediate layer, in other words, to what extent we leak private information when sending the output of an intermediate layer to the cloud. We also present an overview of mutual-information based studies of DNNs, to help understand information leakage and some potential ways to make distributed inference more secure.

Conference paper
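The partitioned-inference setting the abstract above studies is easy to picture in code. The sketch below cuts a small network after an intermediate activation: the edge runs the first block, only the feature tensor crosses the network, and the cloud finishes the forward pass. The architecture and split point are illustrative assumptions; the privacy question is how much of the input is recoverable from that feature tensor.

```python
# Illustrative sketch: split a DNN at an intermediate layer for edge/cloud inference.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 128), nn.ReLU(),   # layers 0-2: run on the edge device
    nn.Linear(128, 64), nn.ReLU(),        # layers 3-4: run in the cloud
    nn.Linear(64, 10),
)
SPLIT = 3                                  # cut after the first ReLU (assumed)
edge, cloud = model[:SPLIT], model[SPLIT:]

x = torch.randn(1, 1, 28, 28)              # one sensor image (random stand-in)
features = edge(x)                          # this is what leaves the device...
logits = cloud(features)                    # ...and what the cloud could probe
print(features.shape, logits.shape)
# The leakage question: how much of x is recoverable from `features` alone?
```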

Zhang Z, Ma L, Leung KK, Tassiulas L, Tucker J et al., 2018, Q-placement: Reinforcement-Learning-Based Service Placement in Software-Defined Networks, 38th IEEE International Conference on Distributed Computing Systems (ICDCS), Publisher: IEEE, Pages: 1527-1532, ISSN: 1063-6927

Conference paper

Tuor T, Wang S, Leung KK, Chan K et al., 2018, Distributed Machine Learning in Coalition Environments: Overview of Techniques, 21st International Conference on Information Fusion (FUSION), Publisher: IEEE, Pages: 814-821

Conference paper

Li J, Zafari F, Towsley D, Leung KK, Swami A et al., 2018, Joint Data Compression and Caching: Approaching Optimality with Guarantees, Proceedings of the 2018 ACM/SPEC International Conference on Performance Engineering (ICPE '18), Pages: 229-240

Conference paper

Zafari F, Li J, Leung KK, Towsley D, Swami A et al., 2018, A Game-Theoretic Approach to Multi-Objective Resource Sharing and Allocation in Mobile Edge Clouds, Technologies for the Wireless Edge Workshop (EdgeTech), Publisher: Association for Computing Machinery, Pages: 9-13

Conference paper

Rossi G, Leung KK, 2017, Optimal CSMA/CA protocol for safety messages in vehicular Ad-Hoc networks, IEEE Symposium on Computers and Communications (ISCC) 2017, Publisher: IEEE

Vehicular ad-hoc networks (VANETs) that enable communication among vehicles have recently attracted significant interest from researchers, due to the range of practical applications they can facilitate, particularly related to road safety. Despite the stringent performance requirements for such applications, the IEEE 802.11p standard still uses the carrier sensing medium access/collision avoidance (CSMA/CA) protocol. The latter, when used in broadcast fashion, employs a randomly selected backoff period from a fixed contention window (CW) range, which can cause performance degradation as a result of vehicular density changes. Concerns regarding the robustness and adaptiveness of protocols to support time-critical applications have been raised, which motivate this work. This paper investigates how the maximum CW size can be optimised to enhance performance based on vehicular density. A stochastic model is developed to obtain the optimal maximum CW that can be integrated in an amended CSMA/CA protocol to maximise the single-hop throughput among adjacent vehicles. Simulations confirm our optimised protocol can greatly improve the channel throughput and transmission delay performance, when compared to the standardised CSMA/CA, to support safety applications in VANETs.

Conference paper
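For intuition about the optimization in the paper above, the sketch below uses a textbook slotted contention model: each of n vehicles transmits in a slot with probability p of roughly 2/(CW+1), so single-hop throughput n*p*(1-p)^(n-1) peaks near p = 1/n, i.e. a maximum CW of about 2n-1 that grows with vehicular density. This model and its numbers are illustrative simplifications, not the paper's stochastic model.

```python
# Illustrative sketch: density-dependent optimal maximum contention window.

def throughput(n_vehicles, cw):
    p = min(2.0 / (cw + 1), 1.0)            # per-slot transmit probability
    return n_vehicles * p * (1 - p) ** (n_vehicles - 1)

def optimal_cw(n_vehicles, cw_range=range(3, 1024)):
    return max(cw_range, key=lambda cw: throughput(n_vehicles, cw))

for n in (5, 20, 80):                        # assumed vehicular densities
    cw = optimal_cw(n)
    print(f"n={n:3d}  CW*={cw:4d}  throughput={throughput(n, cw):.3f}")
```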

Liu CH, Zhang B, Su X, Ma J, Wang W, Leung KK et al., 2017, Energy-aware participant selection for smartphone-enabled mobile crowd sensing, IEEE Systems Journal, Vol: 11, Pages: 1435-1446, ISSN: 1932-8184

Mobile crowd sensing systems have been widely used in various domains but are currently facing new challenges. On one hand, the increasingly complex services need a large number of participants to satisfy their demand for sensory data with multidimensional high quality-of-information (QoI) requirements. On the other hand, the willingness of their participation is not always at a high level due to the energy consumption and its impacts on their regular activities. In this paper, we introduce a new metric, called “QoI satisfaction ratio,” to quantify how much collected sensory data can satisfy a multidimensional task's QoI requirements in terms of data granularity and quantity. Furthermore, we propose a participant sampling behavior model to quantify the relationship between the initial energy and the participation of participants. We then present a QoI-aware energy-efficient participant selection approach to provide a suboptimal solution to the defined optimization problem. Finally, we compare our proposed scheme with existing methods via extensive simulations based on the real movement traces of ordinary citizens in Beijing. The simulation results justify the effectiveness and robustness of our approach.

Journal article

Machen A, Wang S, Leung KK, Ko BJ, Salonidis T et al., 2017, Live Service Migration in Mobile Edge Clouds, IEEE Wireless Communications, Vol: 25, Pages: 140-147, ISSN: 1536-1284

Mobile edge clouds (MECs) bring the benefits of the cloud closer to the user, by installing small cloud infrastructures at the network edge. This enables a new breed of real-time applications, such as instantaneous object recognition and safety assistance in intelligent transportation systems, that require very low latency. One key issue that comes with proximity is how to ensure that users always receive good performance as they move across different locations. Migrating services between MECs is seen as the means to achieve this. This article presents a layered framework for migrating active service applications that are encapsulated either in virtual machines (VMs) or containers. This layering approach allows a substantial reduction in service downtime. The framework is easy to implement using readily available technologies, and one of its key advantages is that it supports containers, which is a promising emerging technology that offers tangible benefits over VMs. The migration performance of various real applications is evaluated by experiments under the presented framework. Insights drawn from the experimentation results are discussed.

Journal article

He T, Gkelias A, Ma L, Leung KK, Swami A, Towsley D et al., 2017, Robust and efficient monitor placement for network tomography in dynamic networks, IEEE/ACM Transactions on Networking, Vol: 25, Pages: 1732-1745, ISSN: 1063-6692

We consider the problem of placing the minimum number of monitors in a dynamic network to identify additive link metrics from path metrics measured along cycle-free paths between monitors. Our goal is robust monitor placement, i.e., the same set of monitors can maintain network identifiability under topology changes. Our main contribution is a set of monitor placement algorithms with different performance-complexity tradeoffs that can simultaneously identify multiple topologies occurring during the network lifetime. In particular, we show that the optimal monitor placement is the solution to a generalized hitting set problem, for which we provide a polynomial-time algorithm to construct the input and a greedy algorithm to select the monitors with logarithmic approximation. Although the optimal placement is NP-hard in general, we identify non-trivial special cases that can be solved efficiently. Our secondary contribution is a dynamic triconnected decomposition algorithm to compute the input needed by the monitor placement algorithms, which is the first such algorithm that can handle edge deletions. Our evaluations on mobility-induced dynamic topologies verify the efficiency and the robustness of the proposed algorithms.

Journal article
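The abstract above reduces optimal monitor placement to a generalized hitting set problem solved greedily with a logarithmic approximation. The sketch below shows just that greedy step over made-up requirement sets; in the paper, the sets are derived from the topologies that must remain identifiable, which is the part this toy omits.

```python
# Illustrative sketch: greedy hitting set with the classic log-factor guarantee.

def greedy_hitting_set(requirements):
    """Repeatedly pick the node contained in the most still-unhit sets."""
    universe = set().union(*requirements)
    unhit, monitors = [set(r) for r in requirements], set()
    while unhit:
        best = max(universe, key=lambda v: sum(v in r for r in unhit))
        monitors.add(best)
        unhit = [r for r in unhit if best not in r]
    return monitors

# Placeholder requirement sets (candidate monitor nodes per constraint).
reqs = [{"a", "b"}, {"b", "c"}, {"c", "d"}, {"a", "d"}, {"b", "d"}]
print(greedy_hitting_set(reqs))    # e.g. {'b', 'd'} hits all five sets
```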

Rossi G, Fan Z, Chin WH, Leung KK et al., 2017, Stable clustering for Ad-Hoc vehicle networking, IEEE Wireless Communications and Networking Conference (WCNC) 2017, ISSN: 1558-2612

Vehicular ad-hoc networks (VANETs) that enable communication among vehicles and between vehicles and unmanned aerial vehicles (UAVs) and cellular base stations have recently attracted significant interest from the research community, due to the wide range of practical applications they can facilitate (e.g. road safety, traffic management, pollution monitoring and rescue missions). Despite this increased research activity, the high vehicle mobility in a VANET raises concerns regarding the robustness and adaptiveness of such networks to support system applications. Instead of allowing direct communications between every vehicle and UAVs or base stations, clustering methods will potentially be efficient to overcome bandwidth, power consumption and other resource issues. Using the clustering technique, neighbouring vehicles are grouped into clusters, with a particular vehicle elected as the Cluster Head (CH) in each cluster. Each vehicle communicates with UAVs or base stations through the CH of the associated cluster. Despite the potential advantages, a major challenge for clustering techniques is to maintain cluster stability in light of vehicle mobility and radio fluctuation. In this paper, we propose a Stable Clustering Algorithm for vehicular ad hoc networks (SCalE). Two novel features are incorporated into the algorithm: knowledge of the vehicles' behaviour for efficient selection of CHs, and the employment of a backup CH to maintain the stability of cluster structures. By simulation methods, these are shown to increase stability and improve performance when compared to existing clustering algorithms.

Conference paper

Wang S, Urgaonkar R, He T, Chan K, Zafer M, Leung KK et al., 2017, Dynamic service placement for mobile micro-clouds with predicted future costs, IEEE Transactions on Parallel and Distributed Systems, Vol: 28, Pages: 1002-1016, ISSN: 1045-9219

Mobile micro-clouds are promising for enabling performance-critical cloud applications. However, one challenge therein is the dynamics at the network edge. In this paper, we study how to place service instances to cope with these dynamics, where multiple users and service instances coexist in the system. Our goal is to find the optimal placement (configuration) of instances to minimize the average cost over time, leveraging the ability of predicting future cost parameters with known accuracy. We first propose an offline algorithm that solves for the optimal configuration in a specific look-ahead time-window. Then, we propose an online approximation algorithm with polynomial time-complexity to find the placement in real-time whenever an instance arrives. We analytically show that the online algorithm is O(1)-competitive for a broad family of cost functions. Afterwards, the impact of prediction errors is considered and a method for finding the optimal look-ahead window size is proposed, which minimizes an upper bound of the average actual cost. The effectiveness of the proposed approach is evaluated by simulations with both synthetic and real-world (San Francisco taxi) user mobility traces. The theoretical methodology used in this paper can potentially be applied to a larger class of dynamic resource allocation problems.

Journal article

Rossi G, Leung KK, 2017, Optimised CSMA protocol to support efficient clustering for vehicular internetworking, IEEE Wireless Communications and Networking Conference (WCNC) 2017, Publisher: IEEE

Vehicular ad-hoc networks (VANETs) that support communication among vehicles can facilitate a wide range of road-safety applications. To deal with network fragmentation for low vehicular density, clusters of neighbouring vehicles can be formed. Clustering techniques also require timely communications among vehicles. Despite the stringent performance requirements for the safety and clustering applications, the IEEE 802.11p standard still employs the carrier sensing medium access/collision avoidance (CSMA/CA) protocol that has a fixed contention window (CW) range for backoff. This results in significant inefficiency as vehicular density changes. This work investigates how the maximum CW size can be optimised to enhance performance based on vehicular density by exploiting the equivalence between the CSMA/CA and Aloha performance models. Simulation shows a great reduction in transmission delay for the proposed protocol when compared with the standardised one. Thus, with the low latency, the new protocol is useful to the vehicle clustering and road-safety applications.

Conference paper

Wang S, Zafer M, Leung KK, 2017, Online placement of multi-component applications in edge computing environments, IEEE Access, Vol: 5, Pages: 2514-2533, ISSN: 2169-3536

Mobile edge computing is a new cloud computing paradigm which makes use of small-sized edge-clouds to provide real-time services to users. These mobile edge-clouds (MECs) are located in close proximity to users, thus enabling users to seamlessly access applications running on MECs. Due to the coexistence of the core (centralized) cloud, users, and one or multiple layers of MECs, an important problem is to decide where (on which computational entity) to place different components of an application. This problem, known as the application or workload placement problem, is notoriously hard, and therefore, heuristic algorithms without performance guarantees are generally employed in common practice, which may unknowingly suffer from poor performance as compared to the optimal solution. In this paper, we address the application placement problem and focus on developing algorithms with provable performance bounds. We model the user application as an application graph and the physical computing system as a physical graph, with resource demands/availabilities annotated on these graphs. We first consider the placement of a linear application graph and propose an algorithm for finding its optimal solution. Using this result, we then generalize the formulation and obtain online approximation algorithms with polynomial-logarithmic (poly-log) competitive ratio for tree application graph placement. We jointly consider node and link assignment, and incorporate multiple types of computational resources at nodes.

Journal article

Leung KK, Nazemi S, Swami A, 2016, QoI-aware Tradeoff Between Communication and Computation in Wireless Ad-hoc Networks, 27th Annual IEEE International Symposium on Personal, Indoor and Mobile Radio Communications, Publisher: IEEE, ISSN: 2166-9589

Data aggregation techniques exploit spatial and temporal correlations among data and aggregate data into a smaller volume as a means to optimize usage of limited network resources including energy. There is a trade-off between the Quality of Information (QoI) requirement and energy consumption for computation and communication. We formulate the energy-efficient data aggregation problem as a non-linear optimization problem to optimize the trade-off and control the degree of information reduction at each node subject to a given QoI requirement. Using the theory of duality optimization, we prove that under a set of reasonable cost assumptions, the optimal solution can be obtained despite non-convexity of the problem. Moreover, we propose a distributed, iterative algorithm that will converge to the optimal solution. Extensive numerical results are presented to confirm the validity of the proposed solution approach.

Conference paper
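The duality-based, distributed iterative algorithm mentioned in the abstract above can be caricatured with a subgradient loop. In the sketch below each node picks a data-retention fraction x_i in (0, 1] to minimize an assumed linear transmit-energy cost, while a product-form QoI constraint prod x_i >= Q couples the nodes through a single dual price; the cost model and all constants are illustrative assumptions, not the paper's formulation.

```python
# Illustrative sketch: dual subgradient iteration for a QoI-coupled
# energy-minimization problem (convex after a log change of variables).
import math

a = [1.0, 2.0, 4.0]                 # per-node energy coefficients (assumed)
Q = 0.2                             # required retained-information fraction
lam, step = 1.0, 0.05               # dual price and subgradient step size

for it in range(3000):
    # per-node primal step: min a_i*e^{y_i} - lam*y_i  =>  y_i = log(lam/a_i),
    # capped at 0 so that x_i = e^{y_i} stays within (0, 1]
    y = [min(math.log(lam / ai), 0.0) for ai in a]
    lam = max(lam + step * (math.log(Q) - sum(y)), 1e-6)   # dual update

x = [math.exp(yi) for yi in y]
print([round(v, 3) for v in x], round(math.prod(x), 3))    # product ~ Q
```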

Liu CH, Leung VCM, Zhang Y, Leung KK et al., 2016, Guest Editorial Special Issue on Software Defined Wireless Sensor Networks, IEEE Sensors Journal, Vol: 16, Pages: 7303-7303, ISSN: 1530-437X

Journal article

Nazemi S, Leung KK, Swami A, 2016, Optimization framework with reduced complexity for sensor networks with in-network processing, IEEE Wireless Communications and Networking Conference (WCNC), Publisher: IEEE, ISSN: 1525-3511

We propose a framework for optimizing in-network processing (INP) in wireless sensor networks. INP provides a platform for processing (e.g., fusing, aggregating or compressing) the data along the transmission routes in the sensor network. This can reduce the volume of transmitted data, therefore optimizing the utilization of energy and bandwidth. However, such data processing must ensure that the end result can meet given QoI requirements. We formulate the QoI-aware INP problem as a non-linear optimization problem to identify the optimal degree of data compression at each sensor node subject to satisfying a QoI requirement for the end-user. The formulation arranges all involved sensor nodes in a tree where data is transferred and processed from nodes to their parent nodes toward the root node of the tree. Under the assumption of uniform parameter settings, we show that the processing tree can be collapsed into a linear graph where the number of nodes represents the node levels of the original processing tree. This represents a significant reduction in the complexity of the problem. Numerical examples are provided to illustrate the performance of the proposed approach.

Conference paper

This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.
