Zafari F, Leung KK, Towsley D, et al., 2021, Let's share: a game-theoretic framework for resource sharing in mobile edge clouds, IEEE Transactions on Network and Service Management, Vol: 18, Pages: 2107-2122, ISSN: 1932-4537
Mobile edge computing seeks to provide resources to different delay-sensitive applications. This is a challenging problem as an edge cloud-service provider may not have sufficient resources to satisfy all resource requests. Furthermore, allocating available resources optimally to different applications is also challenging. Resource sharing among different edge cloud-service providers can address the aforementioned limitation as certain service providers may have resources available that can be “rented” by other service providers. However, edge cloud-service providers can have different objectives or utilities. Therefore, there is a need for an efficient and effective mechanism to share resources among service providers, while considering the different objectives of various providers. We model resource sharing as a multi-objective optimization problem and present a solution framework based on Cooperative Game Theory (CGT). We consider the strategy where each service provider allocates resources to its native applications first and shares the remaining resources with applications from other service providers. We prove that for a monotonic, non-decreasing utility function, the game is canonical and convex. Hence, the core is not empty and the grand coalition is stable. We propose two algorithms, Game-theoretic Pareto optimal allocation (GPOA) and Polyandrous-Polygamous Matching based Pareto Optimal Allocation (PPMPOA), that provide allocations from the core. Hence, the obtained allocations are Pareto optimal and the grand coalition of all the service providers is stable. Experimental results confirm that our proposed resource sharing framework improves utilities of edge cloud-service providers and application request satisfaction.
Zhang Z, Ma L, Leung KK, et al., 2021, More is not always better: an analytical study of controller synchronizations in distributed SDN, IEEE/ACM Transactions on Networking, Pages: 1-11, ISSN: 1063-6692
Distributed software-defined networks (SDN), consisting of multiple inter-connected network domains, each managed by one SDN controller, is an emerging networking architecture that offers balanced centralized control and distributed operations. In such a networking paradigm, most existing works focus on designing sophisticated controller-synchronization strategies to improve joint controller-decision-making for inter-domain routing. However, there is still a lack of fundamental understanding of how the performance of distributed SDN is related to network attributes, thus making it impossible to justify the necessity of complicated strategies. In this regard, we analyse and quantify how the performance enhancement of distributed SDN architectures is influenced by inter-domain synchronization levels, in terms of the resulting number of abstracted routing clusters, and by network structural properties. Based on a generic network model incorporating link preference for path constructions, we establish analytical lower bounds for quantifying the routing performance under any arbitrarily given network synchronization status. The significance of these performance bounds is that they can be used to quantify the contribution of controller synchronization levels to improving the network performance under different network parameters, which therefore serves as fundamental guidance for future SDN performance analysis and protocol designs.
Liu CH, Dai Z, Zhao Y, et al., 2021, Distributed and Energy-Efficient Mobile Crowdsensing with Charging Stations by Deep Reinforcement Learning, IEEE TRANSACTIONS ON MOBILE COMPUTING, Vol: 20, Pages: 130-146, ISSN: 1536-1233
Pritz PJ, Perez D, Leung KK, 2020, Fast-fourier-forecasting resource utilisation in distributed systems, 29th International Conference on Computer Communications and Networks (ICCCN), Publisher: IEEE, Pages: 1-9, ISSN: 1095-2055
Distributed computing systems often consist of hundreds of nodes (machines), executing tasks with different resource requirements. Efficient resource provisioning and task scheduling in such systems are non-trivial and require close monitoring and accurate forecasting of the state of the system, specifically resource utilisation at its constituent machines. Two challenges present themselves towards these objectives. First, collecting monitoring data entails substantial communication overhead. This overhead can be prohibitively high, especially in networks where bandwidth is limited. Second, forecasting models to predict resource utilisation should be accurate and also need to exhibit high inference speed. Mission-critical scheduling and resource allocation algorithms use these predictions and rely on their immediate availability. To address the first challenge, we present a communication-efficient data collection mechanism. Resource utilisation data is collected at the individual machines in the system and transmitted to a central controller in batches. Each batch is processed by an adaptive data-reduction algorithm based on Fourier transforms and truncation in the frequency domain. We show that the proposed mechanism leads to a significant reduction in communication overhead while incurring only minimal error and adhering to accuracy guarantees. To address the second challenge, we propose a deep learning architecture using complex Gated Recurrent Units to forecast resource utilisation. This architecture is directly integrated with the above data collection mechanism to improve inference speed of the presented forecasting model. Using two real-world datasets, we demonstrate the effectiveness of our approach, both in terms of forecasting accuracy and inference speed. Our approach resolves several challenges encountered in resource provisioning frameworks and can also be generically applied to other forecasting problems.
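The frequency-domain truncation idea described in this abstract can be sketched in a few lines (a minimal illustration only: the keep-the-k-strongest-components rule and the toy signal below are assumptions, whereas the paper's adaptive algorithm chooses the reduction level to meet accuracy guarantees):

```python
import numpy as np

def compress_batch(batch, keep_fraction=0.1):
    """Truncate a batch in the frequency domain: keep only the
    strongest components of its real FFT, zero out the rest."""
    spectrum = np.fft.rfft(batch)
    k = max(1, int(len(spectrum) * keep_fraction))
    weakest = np.argsort(np.abs(spectrum))[:-k]  # indices of all but the k strongest
    spectrum[weakest] = 0.0
    return spectrum

def decompress_batch(spectrum, n):
    """Invert the truncated spectrum back to a length-n time series."""
    return np.fft.irfft(spectrum, n=n)

# Toy batch of utilisation samples: a slow oscillation plus small noise.
rng = np.random.default_rng(0)
n = 256
t = np.arange(n)
batch = 0.5 + 0.3 * np.sin(2 * np.pi * t / 64) + 0.02 * rng.standard_normal(n)
recovered = decompress_batch(compress_batch(batch, keep_fraction=0.1), n)
max_error = np.max(np.abs(batch - recovered))
```

Only the non-zero spectrum entries (and their indices) need to be transmitted, trading a small reconstruction error for a large reduction in communication volume.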
Panigrahy NK, Basu P, Nain P, et al., 2020, Resource allocation in one-dimensional distributed service networks with applications, Performance Evaluation, Vol: 142, Pages: 1-25, ISSN: 0166-5316
We consider assignment policies that allocate resources to users, where both resources and users are located on a one-dimensional line [0, ∞). First, we consider unidirectional assignment policies that allocate resources only to users located to their left. We propose the Move to Right (MTR) policy, which scans from left to right assigning the nearest rightmost available resource to a user, and contrast it with the Unidirectional Gale-Shapley (UGS) matching policy. While both policies, among all unidirectional policies, minimize the expected distance traveled by a request (request distance), MTR is fairer. Moreover, we show that when user and resource locations are modeled by statistical point processes, and resources are allowed to satisfy more than one user, the spatial system under unidirectional policies can be mapped into bulk service queueing systems, thus allowing the application of many queueing theory results that yield closed form expressions. As we consider a case where different resources can satisfy different numbers of users, we also generate new results for bulk service queues. We also consider bidirectional policies where there are no directional restrictions on resource allocation and develop an algorithm for computing the optimal assignment which is more efficient than known algorithms in the literature when there are more resources than users. Finally, numerical evaluation of performance of unidirectional and bidirectional allocation schemes yields design guidelines beneficial for resource placement.
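The MTR scan admits a compact sketch (a simplified illustration assuming discrete point locations and unit-capacity resources; the paper's analysis covers statistical point processes and multi-user resources):

```python
import bisect

def move_to_right(users, resources):
    """MTR policy sketch: scan users left to right, assigning each the
    nearest available resource located at or to its right
    (each resource serves only users to its left)."""
    free = sorted(resources)
    assignment = {}
    for u in sorted(users):
        i = bisect.bisect_left(free, u)  # first free resource at or right of u
        if i < len(free):
            assignment[u] = free.pop(i)
    return assignment

# Users at positions 1, 4, 6; resources at 2, 5, 9 on the line.
matching = move_to_right([1, 4, 6], [2, 5, 9])
```

A user with no resource remaining to its right goes unserved, reflecting the unidirectional restriction.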
Han P, Wang S, Leung KK, 2020, Capacity analysis of distributed computing systems with multiple resource types, IEEE Wireless Communications and Networking Conference (IEEE WCNC), Publisher: IEEE, Pages: 1-6, ISSN: 1525-3511
In cloud and edge computing systems, computation, communication, and memory resources are distributed across different physical machines and can be used to execute computational tasks requested by different users. It is challenging to characterize the capacity of such a distributed system, because there exist multiple types of resources and the amount of resources required by different tasks is random. In this paper, we define the capacity as the number of tasks that the system can support with a given overload/outage probability. We derive theoretical formulas for the capacity of distributed systems with multiple resource types, where we consider the power of d choices as the task scheduling strategy in the analysis. Our analytical results describe the capacity of distributed computing systems, which can be used for planning purposes or assisting the scheduling and admission decisions of tasks to various resources in the system. Simulation results using both synthetic and real-world data are also presented to validate the capacity bounds.
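The "power of d choices" scheduling strategy used in the analysis can be sketched as follows (a minimal single-resource-type illustration; the machine count, unit demands, and d=2 default are assumptions for the example, not the paper's setup):

```python
import random

def power_of_d_choices(loads, demand, d=2, rng=random):
    """Sample d distinct machines uniformly at random and place the
    task on the least-loaded of them."""
    candidates = rng.sample(range(len(loads)), d)
    target = min(candidates, key=lambda m: loads[m])
    loads[target] += demand
    return target

# Place 1000 unit-demand tasks on 10 machines.
rng = random.Random(42)
loads = [0.0] * 10
for _ in range(1000):
    power_of_d_choices(loads, 1.0, d=2, rng=rng)
# Loads concentrate tightly around the mean of 100 tasks per machine.
```

Sampling just two machines per task already keeps the load imbalance far smaller than purely random placement, which is what makes the strategy attractive for capacity analysis.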
Zafari F, Li J, Leung KK, et al., 2020, Optimal energy consumption for communication, computation, caching and quality guarantee, IEEE Transactions on Control of Network Systems, Vol: 7, Pages: 151-162, ISSN: 2325-5870
Energy efficiency is a fundamental requirement of modern data-communication systems, and its importance is reflected in much recent work on performance analysis of system energy consumption. However, most work has only focused on communication and computation costs without accounting for data caching costs. Given the increasing interest in cache networks, this is a serious deficiency. In this paper, we consider the problem of energy consumption in data communication, computation and caching (C3) with a quality-of-information (QoI) guarantee in a communication network. Our goal is to identify the optimal data compression rates and cache placement over the network that minimizes the overall energy consumption in the network. We formulate the problem as a mixed integer nonlinear programming (MINLP) problem with nonconvex functions, which is non-deterministic polynomial-time hard (NP-hard) in general. We propose a variant of the spatial branch-and-bound algorithm (V-SBB) that can provide an ϵ-global optimal solution to the problem. By extensive numerical experiments, we show that the C3 optimization framework improves the energy efficiency by up to 88% compared to any optimization that only considers either communication and caching or communication and computation. Furthermore, the V-SBB technique provides comparatively better solutions than some other MINLP solvers at the cost of additional computation time.
Liu CH, Zhao Y, Dai Z, et al., 2020, Curiosity-Driven Energy-Efficient Worker Scheduling in Vehicular Crowdsourcing: A Deep Reinforcement Learning Approach, IEEE 36th International Conference on Data Engineering (ICDE), Publisher: IEEE COMPUTER SOC, Pages: 25-36, ISSN: 1084-4627
Qin Q, Poularakis K, Leung KK, et al., 2020, Line-Speed and Scalable Intrusion Detection at the Network Edge via Federated Learning, 19th IFIP Networking Conference (Networking), Publisher: IEEE, Pages: 352-360
Leung K, Nazemi S, Swami A, 2019, Distributed optimization framework for in-network data processing, IEEE ACM Transactions on Networking, Vol: 27, Pages: 2432-2443, ISSN: 1063-6692
In-Network Processing (INP) is an effective way to aggregate and process data from different sources and forward the aggregated data to other nodes for further processing until it reaches the end user. There is a trade-off between energy consumption for processing data and communication energy spent on transferring the data. Specifically, aggressive data aggregation consumes much energy for processing, but results in less data for transmission, thus using less energy for communications, and vice versa. An essential requirement in the INP process is to ensure that the user expectation of quality of information (QoI) is delivered during the process. Using wireless sensor networks for illustration and with the aim of minimising the total energy consumption of the system, we study and formulate the trade-off problem as a nonlinear optimisation problem where the goal is to determine the optimal data reduction rate, while satisfying the QoI required by the user. The formulated problem is a Signomial Programming (SP) problem, which is a non-convex optimisation problem and very hard to solve directly. We propose two solution frameworks. First, we introduce an equivalent problem which is still SP and non-convex like the original one, but we prove that the strong duality property holds, and propose an efficient distributed algorithm to obtain the optimal data reduction rates, while delivering the required QoI. The second framework applies to systems with identical nodes and parameter settings. In such cases, we prove that the complexity of the problem can be reduced logarithmically. We evaluate our proposed frameworks under different parameter settings and illustrate the validity and performance of the proposed techniques through extensive simulation.
Shanmukhappa T, Ho IW-H, Tse CK, et al., 2019, Recent development in public transport network analysis from the complex network perspective, IEEE Circuits and Systems Magazine, Vol: 19, Pages: 39-65, ISSN: 1049-3654
A graph, comprising a set of nodes connected by edges, is one of the simplest yet remarkably useful mathematical structures for the analysis of real-world complex systems. Network theory, being an application-based extension of graph theory, has been applied to a wide variety of real-world systems involving complex interconnection of subsystems. The application of network theory has permitted in-depth understanding of connectivity, topologies, and operations of many practical networked systems as well as the roles that various parameters play in determining the performance of such systems. In the field of transportation networks, however, the use of graph theory has been relatively much less explored, and this motivates us to bring together the recent development in the field of public transport analysis from a graph theoretic perspective. In this paper, we focus on ground transportation, and in particular the bus transport network (BTN) and metro transport network (MTN), since the two types of networks are widely used by the public and their performances have significant impact to people's life. In the course of our analysis, various network parameters are introduced to probe into the impact of topologies and their relative merits and demerits in transportation. The various local and global properties evaluated as part of the topological analysis provide a common platform to comprehend and decipher the inherent network features that are partly encoded in their topological properties. Overall, this paper gives a detailed exposition of recent development in the use of graph theory in public transport network analysis, and summarizes the key results that offer important insights for government agencies and public transport system operators to plan, design, and optimize future public transport networks in order to achieve more efficient and robust services.
Zhang Z, Ma L, Poularakis K, et al., 2019, MACS: deep reinforcement learning based SDN controller synchronization policy design, 27th IEEE International Conference on Network Protocols (IEEE ICNP), Publisher: IEEE COMPUTER SOC, Pages: 1-11, ISSN: 1092-1648
In distributed software-defined networks (SDN), multiple physical SDN controllers, each managing a network domain, are implemented to balance centralised control, scalability, and reliability requirements. In such networking paradigms, controllers synchronize with each other, in attempts to maintain a logically centralised network view. Despite the presence of various design proposals for distributed SDN controller architectures, most existing works only aim at eliminating anomalies arising from the inconsistencies in different controllers' network views. However, the performance aspect of controller synchronization designs with respect to given SDN applications is generally missing. To fill this gap, we formulate the controller synchronization problem as a Markov decision process (MDP) and apply reinforcement learning techniques combined with deep neural networks (DNNs) to train a smart, scalable, and fine-grained controller synchronization policy, called the Multi-Armed Cooperative Synchronization (MACS), whose goal is to maximise the performance enhancements brought by controller synchronizations. Evaluation results confirm the DNN's exceptional ability to abstract latent patterns in the distributed SDN environment, rendering significant superiority to the MACS-based synchronization policy: 56% and 30% performance improvements over the ONOS and greedy SDN controller synchronization heuristics, respectively.
Tuor T, Wang S, Leung KK, et al., 2019, Online collection and forecasting of resource utilization in large-scale distributed systems, 39th IEEE International Conference on Distributed Computing Systems (ICDCS), Publisher: IEEE COMPUTER SOC, Pages: 133-143, ISSN: 1063-6927
Large-scale distributed computing systems often contain thousands of distributed nodes (machines). Monitoring the conditions of these nodes is important for system management purposes, which, however, can be extremely resource demanding as this requires collecting local measurements of each individual node and constantly sending those measurements to a central controller. Meanwhile, it is often useful to forecast the future system conditions for various purposes such as resource planning/allocation and anomaly detection, but it is usually too resource-consuming to have one forecasting model running for each node, which may also neglect correlations in observed metrics across different nodes. In this paper, we propose a mechanism for collecting and forecasting the resource utilization of machines in a distributed computing system in a scalable manner. We present an algorithm that allows each local node to decide when to transmit its most recent measurement to the central node, so that the transmission frequency is kept below a given constraint value. Based on the measurements received from local nodes, the central node summarizes the received data into a small number of clusters. Since the cluster partitioning can change over time, we also present a method to capture the evolution of clusters and their centroids. As an effective way to reduce the amount of computation, time-series forecasting models are trained on the time-varying centroids of each cluster, to forecast the future resource utilizations of a group of local nodes. The effectiveness of our proposed approach is confirmed by extensive experiments using multiple real-world datasets.
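The local transmission decision can be illustrated with a simplified send-on-change rule (an assumption for illustration: the paper's algorithm additionally adapts its decision so the transmission frequency stays below a given constraint):

```python
def filter_transmissions(measurements, threshold):
    """A node transmits a measurement only when it deviates from the
    last transmitted value by more than `threshold`; the central node
    assumes the last transmitted value in between."""
    sent = []
    last = None
    for m in measurements:
        if last is None or abs(m - last) > threshold:
            sent.append(m)
            last = m
    return sent

# CPU utilisation samples; only significant changes are transmitted.
samples = [0.50, 0.52, 0.75, 0.76, 0.40]
transmitted = filter_transmissions(samples, threshold=0.10)
```

Here only three of the five samples leave the node, cutting monitoring traffic while bounding the central node's view error by the threshold.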
Zhang Z, Ma L, Leung KK, et al., 2019, How advantageous Is It? An analytical study of controller-assisted path construction in distributed SDN, IEEE ACM Transactions on Networking, Vol: 27, Pages: 1643-1656, ISSN: 1063-6692
Distributed software-defined networks (SDN), consisting of multiple inter-connected network domains, each managed by one SDN controller, is an emerging networking architecture that offers balanced centralized control and distributed operations. Under such a networking paradigm, most existing works focus on designing sophisticated controller-synchronization strategies to improve joint controller-decision-making for inter-domain routing. However, there is still a lack of fundamental understanding of how the performance of distributed SDN is related to network attributes, thus it is impossible to justify the necessity of complicated strategies. In this regard, we analyze and quantify the performance enhancement of distributed SDN architectures, which is influenced by intra-/inter-domain synchronization levels and network structural properties. Based on a generic network model, we establish analytical methods for performance estimation under four canonical inter-domain synchronization scenarios. Specifically, we first derive an asymptotic expression to quantify how dominating structural and synchronization-related parameters affect the performance metric. We then provide performance analytics for an important family of networks, where all links are of equal preference for path constructions. Finally, we establish fine-grained performance metric expressions for networks with dynamically adjusted link preferences. Our theoretical results reveal how network performance is related to synchronization levels and intra-/inter-domain connections, the accuracy of which is confirmed by simulations based on both real and synthetic networks. To the best of our knowledge, this is the first work quantifying the performance of distributed SDN in terms of network structural properties and synchronization levels.
Zhang Z, Ma L, Poularakis K, et al., 2019, DQ Scheduler: deep reinforcement learning based controller synchronization in distributed SDN, IEEE ICC 2019, Publisher: Institute of Electrical and Electronics Engineers, ISSN: 0536-1486
In distributed software-defined networks (SDN), multiple physical SDN controllers, each managing a network domain, are implemented to balance centralized control, scalability and reliability requirements. In such a networking paradigm, controllers synchronize with each other to maintain a logically centralized network view. Despite various proposals of distributed SDN controller architectures, most existing works only assume that such a logically centralized network view can be achieved with some synchronization designs, but the question of how exactly controllers should synchronize with each other to maximize the benefits of synchronization under the eventual consistency assumptions is largely overlooked. To this end, we formulate the controller synchronization problem as a Markov Decision Process (MDP) and apply reinforcement learning techniques combined with deep neural networks to train a smart controller synchronization policy, which we call the Deep-Q (DQ) Scheduler. Evaluation results show that DQ Scheduler outperforms the anti-entropy algorithm implemented in the ONOS controller by up to 95.2% for inter-domain routing tasks.
Poularakis K, Qin Q, Ma L, et al., 2019, Learning the optimal synchronization rates in distributed SDN control architectures, IEEE Infocom, Publisher: Institute of Electrical and Electronics Engineers, Pages: 1099-1107, ISSN: 0743-166X
Since the early development of Software-Defined Network (SDN) technology, researchers have been concerned with the idea of physical distribution of the control plane to address scalability and reliability challenges of centralized designs. However, having multiple controllers managing the network while maintaining a “logically-centralized” network view brings additional challenges. One such challenge is how to coordinate the management decisions made by the controllers, which is usually achieved by disseminating synchronization messages in a peer-to-peer manner. While there exist many architectures and protocols to ensure synchronized network views and drive coordination among controllers, there is no systematic methodology for deciding the optimal frequency (or rate) of message dissemination. In this paper, we fill this gap by introducing the SDN synchronization problem: how often to synchronize the network views for each controller pair. We consider two different objectives; first, the maximization of the number of controller pairs that are synchronized, and second, the maximization of the performance of applications of interest which may be affected by the synchronization rate. Using techniques from knapsack optimization and learning theory, we derive algorithms with provable performance guarantees for each objective. Evaluation results demonstrate significant benefits over baseline schemes that synchronize all controller pairs at equal rate.
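The first objective, maximizing the number of synchronized controller pairs under a rate budget, has the flavour of a knapsack problem; a simple greedy heuristic sketch (illustrative only: the paper derives algorithms with provable guarantees, which need not coincide with this heuristic, and the pair names and values below are made up):

```python
def allocate_sync_budget(pairs, budget):
    """Greedy knapsack heuristic: grant synchronization rate to the
    controller pairs with the highest benefit per unit of rate cost,
    until the total rate budget is exhausted.

    pairs: list of (pair_name, benefit, rate_cost) tuples."""
    chosen = []
    for name, benefit, cost in sorted(pairs, key=lambda p: p[1] / p[2], reverse=True):
        if cost <= budget:
            chosen.append(name)
            budget -= cost
    return chosen

# Three controller pairs competing for a total synchronization rate budget of 3.
selected = allocate_sync_budget(
    [("A-B", 10.0, 2.0), ("B-C", 6.0, 1.0), ("A-C", 4.0, 4.0)], budget=3.0)
```

Pairs are taken in order of benefit density, so the cheap high-value pairs are synchronized first.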
Wang S, Urgaonkar R, Zafer M, et al., 2019, Dynamic Service Migration in Mobile Edge Computing Based on Markov Decision Process, IEEE-ACM TRANSACTIONS ON NETWORKING, Vol: 27, Pages: 1272-1288, ISSN: 1063-6692
Wang S, Tuor T, Salonidis T, et al., 2019, Adaptive federated learning in resource constrained edge computing systems, IEEE Journal on Selected Areas in Communications, Vol: 37, Pages: 1205-1221, ISSN: 0733-8716
Emerging technologies and applications including Internet of Things (IoT), social networking, and crowd-sourcing generate large amounts of data at the network edge. Machine learning models are often built from the collected data, to enable the detection, classification, and prediction of future events. Due to bandwidth, storage, and privacy concerns, it is often impractical to send all the data to a centralized location. In this paper, we consider the problem of learning model parameters from data distributed across multiple edge nodes, without sending raw data to a centralized place. Our focus is on a generic class of machine learning models that are trained using gradientdescent based approaches. We analyze the convergence bound of distributed gradient descent from a theoretical point of view, based on which we propose a control algorithm that determines the best trade-off between local update and global parameter aggregation to minimize the loss function under a given resource budget. The performance of the proposed algorithm is evaluated via extensive experiments with real datasets, both on a networked prototype system and in a larger-scale simulated environment. The experimentation results show that our proposed approach performs near to the optimum with various machine learning models and different data distributions.
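The local-update/global-aggregation trade-off at the heart of this work can be illustrated with a minimal federated-averaging-style sketch on a least-squares problem (the node data, step size, and the tau local steps per round are illustrative assumptions, not the paper's control algorithm or experimental setup):

```python
import numpy as np

def local_update(w, X, y, lr=0.1, tau=5):
    """Run tau local gradient-descent steps on one node's least-squares data."""
    for _ in range(tau):
        grad = X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def federated_round(w_global, node_data, lr=0.1, tau=5):
    """Each node starts from the global model, updates locally,
    and the server averages the results (global aggregation)."""
    local_models = [local_update(w_global.copy(), X, y, lr, tau)
                    for X, y in node_data]
    return np.mean(local_models, axis=0)

# Three edge nodes, each holding data generated from the same true model.
rng = np.random.default_rng(1)
w_true = np.array([2.0, -1.0])
node_data = []
for _ in range(3):
    X = rng.standard_normal((50, 2))
    node_data.append((X, X @ w_true))

w = np.zeros(2)
for _ in range(30):  # 30 rounds of (tau local steps + one aggregation)
    w = federated_round(w, node_data)
```

Raising tau reduces aggregation (communication) cost per gradient step; the paper's contribution is choosing this trade-off adaptively under a resource budget.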
Zafari F, Gkelias A, Leung KK, 2019, A survey of indoor localization systems and technologies, Communications Surveys and Tutorials, Vol: 21, Pages: 2568-2599, ISSN: 1553-877X
Indoor localization has recently witnessed an increase in interest, due to the potential wide range of services it can provide by leveraging Internet of Things (IoT), and ubiquitous connectivity. Different techniques, wireless technologies and mechanisms have been proposed in the literature to provide indoor localization services in order to improve the services provided to the users. However, there is a lack of an up-to-date survey paper that incorporates some of the recently proposed accurate and reliable localization systems. In this paper, we aim to provide a detailed survey of different indoor localization techniques such as Angle of Arrival (AoA), Time of Flight (ToF), Return Time of Flight (RTOF), and Received Signal Strength (RSS); based on technologies such as WiFi, Radio Frequency Identification Device (RFID), Ultra Wideband (UWB), Bluetooth and systems that have been proposed in the literature. The paper primarily discusses localization and positioning of human users and their devices. We highlight the strengths of the existing systems proposed in the literature. In contrast with the existing surveys, we also evaluate different systems from the perspective of energy efficiency, availability, cost, reception range, latency, scalability and tracking accuracy. Rather than comparing the technologies or techniques, we compare the localization systems and summarize their working principle. We also discuss remaining challenges to accurate indoor localization.
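As a small illustration of the RSS-based ranging technique surveyed above, the common log-distance path-loss model converts a received signal strength into a distance estimate (the reference power and path-loss exponent below are assumed values; real deployments calibrate both per environment):

```python
def rss_to_distance(rss_dbm, p0_dbm=-40.0, path_loss_exponent=2.0):
    """Log-distance path-loss model: estimate the transmitter distance
    in metres from a received signal strength (RSS). p0_dbm is the
    assumed RSS at the 1-metre reference distance; the exponent 2.0
    corresponds to free-space propagation."""
    return 10.0 ** ((p0_dbm - rss_dbm) / (10.0 * path_loss_exponent))

# A 20 dB drop below the 1 m reference implies roughly 10 m in free space.
d = rss_to_distance(-60.0)
```

Distance estimates from three or more anchors can then be combined by trilateration, though RSS ranging is notoriously sensitive to multipath and shadowing, which is why the survey compares it against AoA and time-of-flight techniques.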
Conway-Jones D, Tuor T, Wang S, et al., 2019, Demonstration of Federated Learning in a Resource-Constrained Networked Environment, 5th IEEE International Conference on Smart Computing (SMARTCOMP), Publisher: IEEE, Pages: 484-486
Lee W-H, Ko BJ, Wang S, et al., 2019, EXACT INCREMENTAL AND DECREMENTAL LEARNING FOR LS-SVM, 26th IEEE International Conference on Image Processing (ICIP), Publisher: IEEE, Pages: 2334-2338, ISSN: 1522-4880
Leung KK, Wang S, Tuor T, et al., 2018, When edge meets learning: adaptive control for resource-constrained distributed machine learning, IEEE Infocom 2018, Publisher: IEEE
Emerging technologies and applications includingInternet of Things (IoT), social networking, and crowd-sourcinggenerate large amounts of data at the network edge. Machinelearning models are often built from the collected data, to enablethe detection, classification, and prediction of future events.Due to bandwidth, storage, and privacy concerns, it is oftenimpractical to send all the data to a centralized location. In thispaper, we consider the problem of learning model parametersfrom data distributed across multiple edge nodes, without sendingraw data to a centralized place. Our focus is on a generic classof machine learning models that are trained using gradient-descent based approaches. We analyze the convergence rate ofdistributed gradient descent from a theoretical point of view,based on which we propose a control algorithm that determinesthe best trade-off between local update and global parameteraggregation to minimize the loss function under a given resourcebudget. The performance of the proposed algorithm is evaluatedvia extensive experiments with real datasets, both on a networkedprototype system and in a larger-scale simulated environment.The experimentation results show that our proposed approachperforms near to the optimum with various machine learningmodels and different data distributions.
Tuor T, Wang S, Salonidis T, et al., 2018, Demo Abstract: Distributed Machine Learning at Resource-Limited Edge Nodes, IEEE Conference on Computer Communications (IEEE INFOCOM), Publisher: IEEE, ISSN: 2159-4228
Ko BJ, Leung KK, Salonidis T, 2018, Machine learning for dynamic resource allocation at network edge, 9th Conference on Ground/Air Multisensor Interoperability, Integration, and Networking for Persistent ISR part of the SPIE Defense + Commercial Sensing Conference, Publisher: Proceedings of SPIE, ISSN: 0277-786X
With the proliferation of smart devices, it is increasingly important to exploit their computing, networking, and storage resources for executing various computing tasks at scale at mobile network edges, bringing many benefits such as better response time, network bandwidth savings, and improved data privacy and security. A key component in enabling such distributed edge computing is a mechanism that can flexibly and dynamically manage edge resources for running various military and commercial applications in a manner adaptive to the fluctuating demands and resource availability. We present methods and an architecture for the edge resource management based on machine learning techniques. A collaborative filtering approach combined with deep learning is proposed as a means to build the predictive model for applications’ performance on resources from previous observations, and an online resource allocation architecture utilizing the predictive model is presented. We also identify relevant research topics for further investigation.
Ma L, Zhang Z, Ko B, et al., 2018, Resource management in distributed SDN using reinforcement learning, 9th Conference on Ground/Air Multisensor Interoperability, Integration, and Networking for Persistent ISR part of the SPIE Defense + Commercial Sensing Conference, Publisher: Proceedings of SPIE, ISSN: 0277-786X
Distributed software-defined networking (SDN), which consists of multiple inter-connected network domains, each managed by one SDN controller, is an emerging networking architecture that offers balanced centralized/distributed control. Under such a networking paradigm, resource management among various domains (e.g., optimal resource allocation) can be extremely challenging. This is because many tasks posted to the network require resources (e.g., CPU, memory, bandwidth, etc.) from different domains, where cross-domain resources are correlated, e.g., their feasibility depends on the existence of a reliable communication channel connecting them. To address this issue, we employ the reinforcement learning framework, aiming to automate the resource management and allocation process by proactive learning and interactions. Specifically, we model this issue as an MDP (Markov Decision Process) problem with different types of reward functions, where our objective is to minimize the average task completion time. We investigate the scenario where the resource status among controllers is fully synchronized. Under this scenario, the SDN controller has complete knowledge of the resource status of all domains, i.e., resource changes upon any policies are directly observable by controllers, for which a Q-learning-based strategy is proposed to approach the optimal solution.
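The Q-learning strategy mentioned above rests on the standard tabular update rule; a minimal sketch (the two-state MDP, action names, and reward here are illustrative, not the paper's task model):

```python
def q_learning_step(Q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """Tabular Q-learning update: move Q(s, a) a step of size alpha
    toward the bootstrapped target reward + gamma * max_a' Q(s', a')."""
    best_next = max(Q[next_state].values())
    Q[state][action] += alpha * (reward + gamma * best_next - Q[state][action])

# Tiny two-state example: allocating a task to a remote domain earns reward 1.
actions = ("allocate_local", "allocate_remote")
Q = {s: {a: 0.0 for a in actions} for s in (0, 1)}
q_learning_step(Q, state=0, action="allocate_remote", reward=1.0, next_state=1)
```

Repeating such updates along observed (state, action, reward, next state) transitions drives the table toward the optimal action values, from which the greedy allocation policy is read off.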
Tuor T, Wang S, Leung KK, et al., 2018, Understanding information leakage of distributed inference with deep neural networks: Overview of information theoretic approach and initial results, 9th Conference on Ground/Air Multisensor Interoperability, Integration, and Networking for Persistent ISR part of the SPIE Defense + Commercial Sensing Conference, Publisher: Proceedings of SPIE, ISSN: 0277-786X
With the emergence of Internet of Things (IoT) and edge computing applications, data is often generated by sensors and end users at the network edge, and decisions are made using these collected data. Edge devices often require cloud services in order to perform intensive inference tasks. Consequently, the inference of a deep neural network (DNN) model is often partitioned between the edge and the cloud. In this case, the edge device performs inference up to an intermediate layer of the DNN, and offloads the output features to the cloud for the inference of the remainder of the network. Partitioning a DNN can help to improve energy efficiency but also raises some privacy concerns. The cloud platform can recover part of the raw data using intermediate results of the inference task. Recently, studies have also quantified an information theoretic trade-off between compression and prediction in DNNs. In this paper, we conduct a simple experiment to understand to what extent it is possible to reconstruct the raw data given the output of an intermediate layer, in other words, to what extent we leak private information when sending the output of an intermediate layer to the cloud. We also present an overview of mutual-information based studies of DNNs, to help understand information leakage and some potential ways to make distributed inference more secure.
Tuor T, Wang S, Leung KK, et al., 2018, Distributed Machine Learning in Coalition Environments: Overview of Techniques, 21st International Conference on Information Fusion (FUSION), Publisher: IEEE, Pages: 814-821
Zhang Z, Ma L, Leung KK, et al., 2018, Q-placement: Reinforcement-Learning-Based Service Placement in Software-Defined Networks, 38th IEEE International Conference on Distributed Computing Systems (ICDCS), Publisher: IEEE, Pages: 1527-1532, ISSN: 1063-6927
Zafari F, Li J, Leung KK, et al., 2018, A Game-Theoretic Approach to Multi-Objective Resource Sharing and Allocation in Mobile Edge Clouds, Technologies for the Wireless Edge Workshop (EdgeTech), Publisher: ASSOC COMPUTING MACHINERY, Pages: 9-13
This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.