Pesu T, Knottenbelt WJ, 2015, Dynamic Subtask Dispersion Reduction in Heterogeneous Parallel Queueing Systems, Electronic Notes in Theoretical Computer Science, Vol: 318, Pages: 129-142, ISSN: 1571-0661
Fork-join and split-merge queueing systems are mathematical abstractions of parallel task processing systems in which entering tasks are split into N subtasks which are served by a set of heterogeneous servers. The original task is considered completed once all the subtasks associated with it have been serviced. The performance of split-merge and fork-join systems is often quantified with respect to two metrics: task response time and subtask dispersion. Recent research has focused on ways to reduce subtask dispersion, or the product of task response time and subtask dispersion, by applying delays to selected subtasks. Such delays may be pre-computed statically, or varied dynamically. Dynamic in our context refers to the ability to vary the delay applied to a subtask according to the state of the system, at any time before the service of that subtask has begun. We assume that subtasks in service cannot be preempted. A key dynamic optimisation that benefits both metrics of interest is to remove delays on any subtask with a sibling that has already completed service. This paper incorporates such a policy into existing methods for computing optimal subtask delays in split-merge and fork-join systems. In the context of two case studies, we show that doing so affects the optimal delays computed, and leads to improved subtask dispersion values when compared with existing techniques. Indeed, in some cases, it turns out to be beneficial to initially postpone the processing of non-bottleneck subtasks until the bottleneck subtask has completed service.
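The delay-cancellation policy described above (cancel any not-yet-started delay as soon as a sibling subtask completes) can be sketched with a small Monte Carlo model. This is an illustrative simplification of a single split-merge round, not the paper's method for computing optimal delays; the rates and delay vector are hypothetical:

```python
import random

def mean_dispersion(rates, delays, dynamic, n_runs=20000, seed=1):
    """Monte Carlo estimate of expected subtask dispersion for one task in a
    split-merge round; subtask i starts after delays[i]. With dynamic=True,
    delays that have not yet elapsed are cancelled as soon as any sibling
    subtask completes (simplified sketch of the paper's dynamic policy)."""
    random.seed(seed)
    total = 0.0
    for _ in range(n_runs):
        s = [random.expovariate(r) for r in rates]   # subtask service times
        if dynamic:
            # Fixed point: the first completion lets later-scheduled siblings
            # start immediately, which may move the first completion earlier.
            first = min(d + x for d, x in zip(delays, s))
            while True:
                new_first = min(min(d, first) + x for d, x in zip(delays, s))
                if new_first >= first - 1e-12:
                    break
                first = new_first
            finish = [min(d, first) + x for d, x in zip(delays, s)]
        else:
            finish = [d + x for d, x in zip(delays, s)]
        total += max(finish) - min(finish)
    return total / n_runs
```

In this toy configuration (one large static delay on the second subtask), the dynamic variant yields a strictly smaller mean dispersion than the purely static schedule, mirroring the paper's qualitative claim.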
Chen X, Rupprecht L, Osman R, et al., 2015, CloudScope: diagnosing and managing performance interference in multi-tenant clouds, 23rd International Symposium on Modeling, Analysis and Simulation of Computer and Telecommunication Systems (MASCOTS), Publisher: IEEE, Pages: 164-173, ISSN: 1526-7539
Virtual machine consolidation is attractive in cloud computing platforms for several reasons including reduced infrastructure costs, lower energy consumption and ease of management. However, the interference between co-resident workloads caused by virtualization can violate the service level objectives (SLOs) that the cloud platform guarantees. Existing solutions to minimize interference between virtual machines (VMs) are mostly based on comprehensive micro-benchmarks or online training which makes them computationally intensive. In this paper, we present CloudScope, a system for diagnosing interference for multi-tenant cloud systems in a lightweight way. CloudScope employs a discrete-time Markov Chain model for the online prediction of performance interference of co-resident VMs. It uses the results to optimally (re)assign VMs to physical machines and to optimize the hypervisor configuration, e.g. the CPU share it can use, for different workloads. We have implemented CloudScope on top of the Xen hypervisor and conducted experiments using a set of CPU, disk, and network intensive workloads and a real system (MapReduce). Our results show that CloudScope interference prediction achieves an average error of 9%. The interference-aware scheduler improves VM performance by up to 10% compared to the default scheduler. In addition, the hypervisor reconfiguration can improve network throughput by up to 30%.
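The discrete-time Markov chain idea can be illustrated in miniature: model contention as a small DTMC, compute its stationary distribution, and predict slowdown as a stationary-weighted average of per-state degradation factors. The three-state space, transition matrix and slowdown factors below are invented for illustration; CloudScope's actual model is parameterised from measurements:

```python
# Hypothetical 3-state interference DTMC (low / medium / high contention).
P = [[0.7, 0.2, 0.1],
     [0.3, 0.5, 0.2],
     [0.1, 0.3, 0.6]]
slowdown = [1.0, 1.3, 1.8]   # per-state degradation factor vs. running alone

def stationary(P, iters=1000):
    """Stationary distribution of an ergodic DTMC by power iteration."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

pi = stationary(P)
predicted_slowdown = sum(p * s for p, s in zip(pi, slowdown))
```

A scheduler in this style would compare `predicted_slowdown` across candidate placements and pick the host minimising the predicted interference.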
Parson O, Fisher G, Hersey A, et al., 2015, Dataport and NILMTK: A Building Data Set Designed for Non-intrusive Load Monitoring, 3rd IEEE Global Conference on Signal and Information Processing (GlobalSIP), Publisher: IEEE, Pages: 210-214
Chen X, Knottenbelt WJ, 2015, A performance tree-based monitoring platform for clouds, Pages: 97-98
Cloud-based software systems are expected to deliver reliable performance under dynamic workload while efficiently managing resources. Conventional monitoring frameworks provide limited support for flexible and intuitive performance queries. In this paper, we present a prototype monitoring and control platform for clouds that is a better fit to the characteristics of cloud computing (e.g. extensible, user-defined, scalable). Service Level Objectives (SLOs) are expressed graphically as Performance Trees, while violated SLOs trigger mitigating control actions.
Bradley J, Knottenbelt W, Thomas N, 2015, Preface, Electronic Notes in Theoretical Computer Science, Vol: 310, Pages: 1-3, ISSN: 1571-0661
Kelly J, Knottenbelt WJ, 2015, Neural NILM: Deep Neural Networks Applied to Energy Disaggregation., Publisher: ACM, Pages: 55-64
Nika M, Wilding T, Fiems D, et al., 2015, Going Multi-viral: Synthedemic Modelling of Internet-based Spreading Phenomena., EAI Endorsed Trans. Ambient Syst., Vol: 2, Pages: e4-e4
2015, Computer Performance Engineering - 12th European Workshop, EPEW 2015, Madrid, Spain, August 31 - September 1, 2015, Proceedings, Publisher: Springer
Kelly J, Knottenbelt WJ, 2015, Neural NILM: Deep Neural Networks Applied to Energy Disaggregation., CoRR, Vol: abs/1507.06594
2015, 8th International Conference on Performance Evaluation Methodologies and Tools, VALUETOOLS 2014, Bratislava, Slovakia, December 9-11, 2014, Publisher: ICST
Wu H, Knottenbelt WJ, Wolter K, 2015, Analysis of the Energy-Response Time Tradeoff for Mobile Cloud Offloading Using Combined Metrics., Publisher: IEEE, Pages: 134-142
Huang W-C, Knottenbelt W, 2014, Self-Adaptive Containers: Interoperability Extensions and Cloud Integration, 2014 IEEE 11th Intl Conf on Ubiquitous Intelligence & Computing and 2014 IEEE 11th Intl Conf on Autonomic & Trusted Computing and 2014 IEEE 14th Intl Conf on Scalable Computing and Communications and Its Associated Workshops (UIC-ATC-ScalCom), Publisher: IEEE
Kelly J, Knottenbelt W, 2014, Metadata for Energy Disaggregation, 2014 IEEE 38th International Computer Software and Applications Conference Workshops (COMPSACW), Publisher: IEEE
Batra N, Kelly J, Parson O, et al., 2014, NILMTK: an open source toolkit for non-intrusive load monitoring., e-Energy: Future energy systems, Publisher: ACM, Pages: 265-276
Tsimashenka I, Knottenbelt WJ, Harrison PG, 2014, Controlling variability in split-merge systems and its impact on performance, Annals of Operations Research, Vol: 239, Pages: 569-588, ISSN: 1572-9338
We consider split–merge systems with heterogeneous subtask service times and limited output buffer space in which to hold completed but as yet unmerged subtasks. An important practical problem in such systems is to limit utilisation of the output buffer. This can be achieved by judiciously delaying the processing of subtasks in order to cluster subtask completion times. In this paper we present a methodology to find those deterministic subtask processing delays which minimise any given percentile of the difference in times of appearance of the first and the last subtasks in the output buffer. Technically this is achieved in three main steps: firstly, we define an expression for the distribution of the range of samples drawn from n independent heterogeneous service time distributions. This is a generalisation of the well-known order statistic result for the distribution of the range of n samples taken from the same distribution. Secondly, we extend our model to incorporate deterministic delays applied to the processing of subtasks. Finally, we present an optimisation scheme to find that vector of delays which minimises a given percentile of the range of arrival times of subtasks in the output buffer. We show the impact of applying the optimal delays on system stability and task response time. Two case studies illustrate the applicability of our approach.
Chen X, Ho CP, Osman R, et al., 2014, Understanding, Modelling and Improving the Performance of Web Applications in Multi-core Virtualised Environments, 5th ACM/SPEC International Conference on Performance Engineering (ICPE 2014), Publisher: ACM Digital Library
Nika M, Fiems D, De Turck K, et al., 2014, Modelling Interacting Epidemics in Overlapping Populations, Analytical and Stochastic Modelling Techniques and Applications, Vol: 8499, Pages: 33-45, ISSN: 0302-9743
NoSQL databases have emerged as a backend to support Big Data applications. NoSQL databases are characterized by horizontal scalability, schema-free data models, and easy cloud deployment. To avoid overprovisioning, it is essential to be able to identify the correct number of nodes required for a specific system before deployment. This paper benchmarks and compares three of the most common NoSQL databases: Cassandra, MongoDB and HBase. We deploy them on the Amazon EC2 cloud platform using different types of virtual machines and cluster sizes to study the effect of different configurations. We then compare the behavior of these systems to high-level queueing network models. Our results show that the models are able to capture the main performance characteristics of the studied databases and form the basis for a capacity planning tool for service providers and service users.
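High-level queueing network models of this kind can be evaluated with standard exact Mean Value Analysis (MVA). The sketch below is generic single-class MVA, not the paper's calibrated database models, and the service demands are placeholders:

```python
def mva(demands, n_customers):
    """Exact single-class MVA for a closed network of queueing stations.
    demands[k] is the total service demand at station k; n_customers >= 1.
    Returns (network response time, throughput) at the given population."""
    q = [0.0] * len(demands)                  # mean queue length per station
    for n in range(1, n_customers + 1):
        r = [d * (1.0 + qk) for d, qk in zip(demands, q)]  # residence times
        resp = sum(r)                          # network response time
        thr = n / resp                         # throughput (no think time)
        q = [thr * rk for rk in r]             # updated queue lengths
    return resp, thr
```

A capacity planner in this spirit would sweep cluster sizes (which change the per-node demands) and pick the smallest configuration whose predicted response time meets the target.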
The modern world features a plethora of social, technological and biological epidemic phenomena. These epidemics now spread at unprecedented rates thanks to advances in industrialisation, transport and telecommunications. Effective real-time decision making and management of modern epidemic outbreaks depends on two factors: the ability to determine epidemic parameters as the epidemic unfolds, and the ability to characterise rigorously the uncertainties inherent in these parameters. This paper presents a generic maximum-likelihood-based methodology for online epidemic fitting of SIR models from a single trace which yields confidence intervals on parameter values. The method is fully automated and avoids the laborious manual efforts traditionally deployed in the modelling of biological epidemics. We present case studies based on both synthetic and real data.
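As a much-simplified stand-in for the paper's maximum-likelihood procedure, the sketch below fits a discrete-time SIR model to a single infected-count trace by grid search over (β, γ) with a squared-error objective; the population size, grids and objective are illustrative assumptions, and no confidence intervals are computed:

```python
def sir(beta, gamma, s0, i0, steps):
    """Discrete-time (Euler) SIR trajectory of the infected count."""
    s, i = s0, i0
    trace = [i]
    for _ in range(steps):
        new_inf = beta * s * i    # new infections this step
        new_rec = gamma * i       # new recoveries this step
        s, i = s - new_inf, i + new_inf - new_rec
        trace.append(i)
    return trace

def fit(trace, s0, i0):
    """Coarse grid search for (beta, gamma) minimising squared error against
    an observed infected-count trace."""
    best = None
    for bi in range(1, 21):
        for gi in range(1, 11):
            beta, gamma = bi * 0.00005, gi * 0.05
            model = sir(beta, gamma, s0, i0, len(trace) - 1)
            err = sum((m - y) ** 2 for m, y in zip(model, trace))
            if best is None or err < best[0]:
                best = (err, beta, gamma)
    return best[1], best[2]
```

On a noiseless synthetic trace the grid search recovers the generating parameters; the paper's method instead maximises a likelihood online and quantifies parameter uncertainty.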
Nika M, Wilding T, Fiems D, et al., 2014, Going multi-viral: Synthedemic modelling of internet-based spreading phenomena, Pages: 50-57
Epidemics of a biological and technological nature pervade modern life. For centuries, scientific research focused on biological epidemics, with simple compartmental epidemiological models emerging as the dominant explanatory paradigm. Yet there has been limited translation of this effort to explain internet-based spreading phenomena. Indeed, single-epidemic models are inadequate to explain the multimodal nature of complex phenomena. In this paper we propose a novel paradigm for modelling internet-based spreading phenomena based on the composition of multiple compartmental epidemiological models. Our approach is inspired by Fourier analysis, but rather than trigonometric wave forms, our components are compartmental epidemiological models. We show results on simulated multiple epidemic data, swine flu data and BitTorrent downloads of a popular music artist. Our technique can characterise these multimodal data sets utilising a parsimonious number of sub-epidemic models.
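The Fourier-like composition can be illustrated by decomposing a two-wave trace onto a small dictionary of sub-epidemic components via least squares. Here the components are logistic "new infections" pulses with invented parameters, and the weights come from the 2×2 normal equations; the paper fits real swine-flu and BitTorrent traces with compartmental components:

```python
import math

def logistic_wave(t, rate, mid):
    """New-infections pulse of one logistic epidemic component
    (derivative of the logistic curve, peaking at t = mid)."""
    z = math.exp(-rate * (t - mid))
    return rate * z / (1.0 + z) ** 2

ts = range(100)
c1 = [logistic_wave(t, 0.30, 25) for t in ts]   # first sub-epidemic shape
c2 = [logistic_wave(t, 0.25, 70) for t in ts]   # second sub-epidemic shape
# Hypothetical bimodal trace: a weighted superposition of the two components.
data = [500.0 * a + 200.0 * b for a, b in zip(c1, c2)]

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

# Least-squares weights for the two components via the 2x2 normal equations.
a11, a12, a22 = dot(c1, c1), dot(c1, c2), dot(c2, c2)
b1, b2 = dot(c1, data), dot(c2, data)
det = a11 * a22 - a12 * a12
w1 = (b1 * a22 - b2 * a12) / det
w2 = (b2 * a11 - b1 * a12) / det
```

Because the trace is an exact superposition, the recovered weights match the generating amplitudes; with real data the residual would drive how many sub-epidemic components are added.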
Kelly J, Knottenbelt WJ, 2014, Metadata for Energy Disaggregation., CoRR, Vol: abs/1403.5946
Kelly J, Batra N, Parson O, et al., 2014, NILMTK v0.2: A Non-intrusive Load Monitoring Toolkit for Large Scale Data Sets., CoRR, Vol: abs/1409.5908
Kelly J, Batra N, Parson O, et al., 2014, NILMTK v0.2: a non-intrusive load monitoring toolkit for large scale data sets: demo abstract., Publisher: ACM, Pages: 182-183
Kelly J, Knottenbelt WJ, 2014, 'UK-DALE': A dataset recording UK Domestic Appliance-Level Electricity demand and whole-house demand., CoRR, Vol: abs/1404.0284
Tsimashenka I, Knottenbelt WJ, 2013, Trading off subtask dispersion and response time in split-merge systems, Pages: 431-442, ISSN: 0302-9743
In many real-world systems incoming tasks split into subtasks which are processed by a set of parallel servers. In such systems two metrics are of potential interest: response time and subtask dispersion. Previous research has focused on the minimisation of one, but not both, of these metrics. In particular, in our previous work, we showed how the processing of selected subtasks can be delayed in order to minimise expected subtask dispersion and percentiles of subtask dispersion in the context of split-merge systems. However, the introduction of subtask delays obviously impacts adversely on task response time and maximum sustainable system throughput. In the present work, we describe a methodology for managing the trade-off between subtask dispersion and task response time. The objective function of the minimisation is based on the product of expected subtask dispersion and expected task response time. Compared with our previous methodology, we show how our new technique can achieve comparable subtask dispersion with substantial improvements in expected task response time.
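The product objective can be explored numerically: estimate E[dispersion] and E[response time] by simulation with common random numbers, then minimise their product over a grid of candidate delays. The two-subtask configuration, rates, grid and sample size below are illustrative, not the paper's optimisation procedure:

```python
import random

def metrics(delays, rates, n_runs=40000, seed=7):
    """Monte Carlo (E[subtask dispersion], E[task response time]) for one
    split-merge task where subtask i only starts after delays[i]."""
    random.seed(seed)        # common random numbers across delay candidates
    disp = resp = 0.0
    for _ in range(n_runs):
        finish = [d + random.expovariate(r) for d, r in zip(delays, rates)]
        disp += max(finish) - min(finish)
        resp += max(finish)
    return disp / n_runs, resp / n_runs

rates = [3.0, 1.0]                            # second subtask is the bottleneck
candidates = [i * 0.05 for i in range(21)]    # delay applied to the fast subtask

def product_objective(d):
    disp, resp = metrics([d, 0.0], rates)
    return disp * resp

best = min(candidates, key=product_objective)
```

Delaying the fast subtask a little clusters completions (lower dispersion) at a modest response-time cost, so the product objective is minimised at a small positive delay rather than at zero or at the dispersion-only optimum.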
Osman R, Coulden D, Knottenbelt WJ, 2013, Performance modelling of concurrency control schemes for relational databases, Pages: 337-351, ISSN: 0302-9743
The performance of relational database systems is influenced by complex interdependent factors, which makes developing accurate models to evaluate their performance a challenging task. This paper presents a novel case study in which we develop a simple queueing Petri net model of a relational database system. The performance of the database system is evaluated for three different concurrency control schemes and compared to the results predicted by a queueing Petri net model. The results demonstrate the potential of our modelling approach in modelling database systems using relatively simple models that require minimal parameterization. Our models gave accurate approximations of the mean response times for shared and exclusive transactions with average prediction errors of 10% for high contention scenarios.
Tsimashenka I, Knottenbelt WJ, 2013, Reduction of subtask dispersion in fork-join systems, Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Vol: 8168 LNCS, Pages: 325-336, ISSN: 0302-9743
Fork-join and split-merge queueing systems are well-known abstractions of parallel systems in which each incoming task splits into subtasks that are processed by a set of parallel servers. A task exits the system when all of its subtasks have completed service. Two key metrics of interest in such systems are task response time and subtask dispersion. This paper presents a technique applicable to a class of fork-join systems with heterogeneous exponentially distributed service times that is able to reduce subtask dispersion with only a marginal increase in task response time. Achieving this is challenging since the unsynchronised operation of fork-join systems naturally militates against low subtask dispersion. Our approach builds on our earlier research examining subtask dispersion and response time in split-merge systems, and involves the frequent application and updating of delays to the subtasks at the head of the parallel service queues. Numerical results show the ability to reduce dispersion in fork-join systems to levels comparable with or below that observed in all varieties of split-merge systems while retaining the response time and throughput benefits of a fork-join system.
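The response-time advantage of fork-join over split-merge that the technique aims to retain can be seen in a toy paired simulation (no subtask delays; invented arrival and service rates; both modes consume the same random draws, so the comparison is per-sample-path):

```python
import random

def mean_response(mode, n_tasks=5000, lam=0.3, rates=(1.0, 2.0), seed=3):
    """Mean task response time under fork-join vs. split-merge queueing
    (toy model without the paper's delay policy, for illustration only)."""
    random.seed(seed)
    clock = 0.0
    free = [0.0] * len(rates)   # fork-join: when each parallel server frees up
    barrier = 0.0               # split-merge: when the previous task fully merged
    total = 0.0
    for _ in range(n_tasks):
        clock += random.expovariate(lam)      # Poisson task arrivals
        finishes = []
        for k, r in enumerate(rates):
            if mode == "fork-join":
                start = max(clock, free[k])   # subtask queues at its own server
            else:
                start = max(clock, barrier)   # all subtasks wait for full merge
            f = start + random.expovariate(r)
            free[k] = f
            finishes.append(f)
        barrier = max(finishes)
        total += max(finishes) - clock
    return total / n_tasks
```

Split-merge forces all servers to idle until the previous task has fully merged, so its mean response time dominates fork-join's on every sample path in this model.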
Harrison PG, Hayden RA, Knottenbelt WJ, 2013, Product-forms in batch networks: Approximation and asymptotics, Performance Evaluation, Vol: 70, Pages: 822-840, ISSN: 0166-5316
Bar P, Benfredj R, Marks J, et al., 2013, Towards a monitoring feedback loop for cloud applications, Pages: 43-44
Performance monitoring is fundamental to track cloud application health and service-level agreement compliance, but with the emergence of multi-cloud deployments, it may become increasingly important also to create a feedback loop between runtime operation in multi-clouds and design-time reasoning. This is because the developer needs to acquire more information on the specific performance features of a cloud platform to better exploit its specific characteristics. To support this goal, we have developed a set of open source components that extract quality-of-service (QoS) data from a target Java application using JMX, aggregate it in a time-series database, and finally deliver it in a prototype Java dashboard that may be integrated in a development environment, such as Eclipse, to display either live or historical QoS data. The architecture is not only limited to collection, aggregation, and display of QoS data, but it also allows the evaluation of hierarchical queries expressed using the Performance Trees graphical language. It is our intention that this will provide a cloud-independent uniform interface for developers to specify monitoring queries. Initial evaluation suggests that Cube on MongoDB provides appropriate scalability for this application.
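A hierarchical, Performance-Tree-style query can be pictured as a small expression tree evaluated over collected QoS samples. The node types, metric names and thresholds below are hypothetical and far simpler than the actual Performance Trees language:

```python
from statistics import mean

# Toy in-memory QoS store; the real pipeline collects these via JMX and
# aggregates them in a time-series database.
samples = {
    "latency_ms": [12, 48, 35, 90, 22],
    "error_rate": [0.0, 0.01, 0.0, 0.02, 0.0],
}

def evaluate(node):
    """Recursively evaluate a hierarchical query node against the samples."""
    op = node["op"]
    if op == "metric-mean":
        return mean(samples[node["name"]])
    if op == "<=":
        return evaluate(node["left"]) <= node["threshold"]
    if op == "and":
        return all(evaluate(child) for child in node["children"])
    raise ValueError(f"unknown node type: {op}")

# SLO: mean latency <= 100 ms AND mean error rate <= 0.5%.
slo = {"op": "and", "children": [
    {"op": "<=", "left": {"op": "metric-mean", "name": "latency_ms"},
     "threshold": 100},
    {"op": "<=", "left": {"op": "metric-mean", "name": "error_rate"},
     "threshold": 0.005},
]}
```

Evaluating `slo` over this trace flags a violation (the error-rate branch fails), which is the kind of signal a feedback loop would surface back to the developer's dashboard.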
This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.