Imperial College London

Professor William Knottenbelt

Faculty of Engineering, Department of Computing

Professor of Applied Quantitative Analysis
 
 
 

Contact

 

+44 (0)20 7594 8331
w.knottenbelt
Website

 
 

Location

 

E363, ACE Extension, South Kensington Campus



 

Publications


240 results found

Malony AD, Karatza H, Knottenbelt W, McKee S et al., 2012, Topic 2: Performance prediction and evaluation, Pages: 52-53, ISSN: 0302-9743

In recent years, a range of novel methodologies and tools have been developed for the purpose of evaluation, design, and model reduction of existing and emerging parallel and distributed systems. At the same time, the coverage of the term 'performance' has constantly broadened to include reliability, robustness, energy consumption, and scalability in addition to classical performance-oriented evaluations of system functionalities. Indeed, the increasing diversification of parallel systems, from cloud computing to exascale, being fueled by technological advances, is placing greater emphasis on the methods and tools to address more comprehensive concerns. The aim of the Performance Prediction and Evaluation topic is to bring together system designers and researchers involved with the qualitative and quantitative evaluation and modeling of large-scale parallel and distributed applications and systems to focus on current critical areas of performance prediction and evaluation theory and practice. © 2012 Springer-Verlag.

Conference paper

Osman R, Knottenbelt WJ, 2012, Database system performance evaluation models: A survey, PERFORMANCE EVALUATION, Vol: 69, Pages: 471-493, ISSN: 0166-5316

Journal article

Tsimashenka I, Knottenbelt W, Harrison P, 2012, Controlling variability in split-merge systems, Pages: 165-177, ISSN: 0302-9743

We consider split-merge systems with heterogeneous subtask service times and limited output buffer space in which to hold completed but as yet unmerged subtasks. An important practical problem in such systems is to limit utilisation of the output buffer. This can be achieved by judiciously delaying the processing of subtasks in order to cluster subtask completion times. In this paper we present a methodology to find those deterministic subtask processing delays which minimise any given percentile of the difference in times of appearance of the first and the last subtasks in the output buffer. Technically this is achieved in three main steps: firstly, we define an expression for the distribution of the range of samples drawn from n independent heterogeneous service time distributions. This is a generalisation of the well-known order statistic result for the distribution of the range of n samples taken from the same distribution. Secondly, we extend our model to incorporate deterministic delays applied to the processing of subtasks. Finally, we present an optimisation scheme to find that vector of delays which minimises a given percentile of the range of arrival times of subtasks in the output buffer. A case study illustrates the applicability of our proposed approach. © 2012 Springer-Verlag.

Conference paper
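The delay-setting idea in the abstract above lends itself to a quick Monte Carlo illustration. The sketch below is not the paper's optimisation scheme: the exponential service rates and the delay vector are made-up assumptions, chosen only to show how delaying a fast subtask clusters completion times and shrinks a percentile of the range.

```python
import random

def range_percentile(rates, delays, p, n_samples=20000, seed=42):
    """Monte Carlo estimate of the p-th percentile of the range
    (last-minus-first completion time) in a split-merge system with
    exponential subtask service times and deterministic initial delays."""
    rng = random.Random(seed)
    ranges = []
    for _ in range(n_samples):
        # Completion time of subtask i = its delay + its service time.
        times = [d + rng.expovariate(r) for r, d in zip(rates, delays)]
        ranges.append(max(times) - min(times))
    ranges.sort()
    return ranges[int(p * (len(ranges) - 1))]

# Heterogeneous subtasks: one fast (rate 5) and two slow (rate 1).
# Delaying the fast subtask clusters completions in the output buffer,
# reducing the 90th percentile of the range.
no_delay = range_percentile([5.0, 1.0, 1.0], [0.0, 0.0, 0.0], 0.9)
delayed = range_percentile([5.0, 1.0, 1.0], [0.8, 0.0, 0.0], 0.9)
```

In the paper the delay vector is found by optimisation over the generalised order-statistic result; here it is simply fixed by hand to make the effect visible.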

Leung TLY, Knottenbelt WJ, 2012, Comparative Evaluation of Independent Private Values Distributions on Internet Auction Performance, International Journal of E-Entrepreneurship and Innovation, Vol: 3, Pages: 59-71, ISSN: 1947-8585

The Independent Private Values (IPV) model is foundational for the analysis of Internet auction performance and is widely used in the study of auction behaviour. The characteristics of this model include the assumptions of privacy and independence, whereby the value of the commodity in question is private to the individual buyers, and different buyers do not know the values other buyers attach to the commodity. In addition, these values are drawn from a common distribution which is known to the buyers. In probabilistic terms, this essentially amounts to a series of values which are independent and identically distributed. The features and characteristics of the IPV distribution have a significant impact on auction behaviour, and since a general stochastic analysis of their impact is analytically intractable, here auction performance is studied using an auction process simulator. Both hard close and soft close Internet auctions are studied. In addition, Vickrey auctions and auction mechanisms with multiple bid acceptance are compared and evaluated. From experimental findings, the paper establishes quantitative relationships between the different auction process parameters, deploys suitable IPV distributions to model the characteristics of different communities of bidders, provides suggestions for optimising auction performance, and recommends strategies for efficient auction design.

Journal article
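Under the IPV model described above, a minimal auction simulator is easy to sketch. The example below is an illustrative assumption, not the paper's simulator: it models only a sealed-bid Vickrey auction (where truthful bidding is dominant and the winner pays the second-highest value), with two hypothetical bidder communities whose private values share the same mean but differ in spread.

```python
import random

def vickrey_revenue(draw_value, n_bidders, n_auctions=10000, seed=1):
    """Average seller revenue in a sealed-bid Vickrey (second-price)
    auction under the IPV model: each bidder's private value is an
    i.i.d. draw from a common distribution, and the winner pays the
    second-highest valuation."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_auctions):
        values = sorted(draw_value(rng) for _ in range(n_bidders))
        total += values[-2]  # second-highest valuation sets the price
    return total / n_auctions

# Two hypothetical bidder communities with the same mean valuation (0.5):
# Uniform(0, 1) spreads values widely; Beta(5, 5) clusters them near 0.5,
# compressing the second-order statistic and hence the expected revenue.
uniform_rev = vickrey_revenue(lambda r: r.uniform(0, 1), n_bidders=5)
beta_rev = vickrey_revenue(lambda r: r.betavariate(5, 5), n_bidders=5)
```

For Uniform(0, 1) values the classical result E[revenue] = (n-1)/(n+1) gives 4/6 ≈ 0.667 with five bidders, which the simulation recovers; the more concentrated Beta community yields less.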

Knottenbelt W, 2012, Message from the general chair, Proceedings - 2012 9th International Conference on Quantitative Evaluation of Systems, QEST 2012

Journal article

Zhang X, Ai Z, Yarlagadda P, Kim YH et al., 2011, Preface, ISSN: 1022-6680

Conference paper

Tsimashenka I, Knottenbelt W, 2011, Reduction of variability in split-merge systems, Pages: 101-107

We consider an optimisation problem applicable to systems that can be represented as split-merge queueing networks with a limited buffer space for processed subtasks. We assume Poisson arrivals and generally distributed service times. The proposition is to reduce variability in terms of the difference in the times of arrival of the first and last subtasks in systems where the release times of the subtasks can be controlled. This stands in contrast to the overwhelming majority of research which is focused on reduction of mean response time or percentiles of response time. We formally define our notion of variability in split-merge systems and construct an associated cost function and optimisation problem. For two case studies we use simulation to explore the optimisation landscape and to solve the associated optimisation problem.

Conference paper

Anastasiou N, Knottenbelt W, Marin A, 2011, Automatic synchronisation detection in Petri Net performance models derived from location tracking data, Pages: 29-41, ISSN: 0302-9743

The inference of performance models from low-level location tracking traces provides a means to gain high-level insight into customer and/or resource flow in complex systems. In this context our earlier work presented a methodology for automatically constructing Petri Net performance models from location tracking data. However, the capturing of synchronisation between service centres - the natural expression of which is one of the most fundamental advantages of Petri nets as a modelling formalism - was not explicitly supported. In this paper, we introduce mechanisms for automatically detecting and incorporating synchronisation into our existing methodology. We present a case study based on synthetic location tracking data where the derived synchronisation detection mechanism is applied. © 2011 Springer-Verlag.

Conference paper

Tsirimpas P, Knottenbelt WJ, 2011, SPORTSBET: A tool for the quantitative evaluation and execution of betting exchange trading strategies, Pages: 155-156

Betting exchange markets, which offer person-to-person betting, have attracted increasing interest due to their similarities with financial markets and their growing economic importance. This paper presents SPORTSBET, an event-driven tool for the quantitative evaluation of betting exchange trading strategies. It was developed to specify, execute and back-test parameterised betting strategies for a wide range of sports. SPORTSBET allows the definition of betting strategies in an extended version of the UrbiScript language as sets of concurrent processes which make use of event-calculus-like operators. Strategy performance is quantified by synchronizing multiple real time or historical data streams with a dynamic market reconstruction. © 2011 IEEE.

Conference paper

Leung TLY, Knottenbelt WJ, 2011, Stochastic Modelling and Optimisation of Internet Auction Processes, Electronic Notes in Theoretical Computer Science, Vol: 275, Pages: 105-121, ISSN: 1571-0661

Internet auctions are an attractive mechanism for the exchange of goods at a non-fixed price point. The operation of these auctions can be run under a variety of parameters. In this paper, we provide a theoretical analysis of fixed time forward auctions in cases where a single bid or multiple bids are accepted in a single auction. A comparison of the economic benefits and the corresponding buyer and seller surpluses between the auctions where a single bid is accepted and the auctions where multiple bids are accepted is made. These models are verified through systematic simulation experiments, based on a series of operational assumptions, which characterise the arrival rate of bids, as well as the distribution from which the private values of buyers are sampled. Decision rules for optimising surplus under different auction fee structures are also given. © 2011 Elsevier B.V.

Journal article

Horng TC, Anastasiou N, Field T, Knottenbelt W et al., 2011, LocTrackJINQS: An extensible location-aware simulation tool for multiclass queueing networks, Electronic Notes in Theoretical Computer Science, Vol: 275, Pages: 93-104, ISSN: 1571-0661

This paper presents LocTrackJINQS, a flexible and extensible spatio-temporal simulation tool for systems that involve the flow and processing of customers at multiple service centres. Developed based on the multi-class queueing network simulation package JINQS, LocTrackJINQS retains the abstract model specification power of JINQS while providing additional low-level models of entity movement. Besides traditional performance metrics, LocTrackJINQS produces as output a trace of each entity's location in the system over time. It can thus be used to generate synthetic location tracking data for location-based research or applications. © 2011 Elsevier B.V.

Journal article

Leung T, Knottenbelt W, 2011, The effect of private valuation on e-auction revenues, Pages: 229-242, ISSN: 1865-0929

In recent years, Internet auctions have become increasingly widespread. The Independent Private Values (IPV) model is widely used in the study of auction behaviour and plays a fundamental role in many analyses of Internet auction performance. This model assumes privacy and independence, meaning that the private values of buyers are drawn from a common distribution; in probabilistic terms, the series of values is independent and identically distributed. The IPV distribution has a significant impact on auction behaviour, but since a general stochastic analysis of that impact is intractable, in this paper we study auction performance using an auction process simulator, considering both the hard close and soft close types of Internet auctions. From our experimental findings, we are able to establish quantitative relationships between the different auction process parameters, and also to deploy suitable IPV distributions in modelling the characteristics of different communities of bidders. © 2011 Springer-Verlag.

Conference paper

Franciosi F, Knottenbelt W, 2011, Data allocation strategies for the management of Quality of Service in Virtualised Storage Systems, ISSN: 2160-1968

The amount of data managed by organisations continues to grow relentlessly. Driven by the high costs of maintaining multiple local storage systems, there is a well established trend towards storage consolidation using multi-tier Virtualised Storage Systems (VSSs). At the same time, storage infrastructures are increasingly subject to stringent Quality of Service (QoS) demands. Within a VSS, it is challenging to match desired QoS with delivered QoS, considering the latter can vary dramatically both across and within tiers. Manual efforts to achieve this match require extensive and ongoing human intervention. This paper presents our work on the design and implementation of data allocation strategies in an enhanced version of the popular Linux Extended 3 Filesystem. This enhanced filesystem features support for the specification of QoS metadata while maintaining compatibility with stock kernels. We present new inode and datablock allocation strategies which seek to match the QoS attributes set by users and/or applications on files and directories with the QoS actually delivered by each of the filesystem's block groups. To create realistic test filesystems we have modified the Impressions benchmarking framework to support QoS metadata. The effectiveness of the resulting data allocation in terms of QoS matching is evaluated using a special kernel module that is capable of inspecting detailed filesystem allocation data on-the-fly. We show that our implementations of the proposed inode and datablock allocation strategies are capable of dramatically improving data placement with respect to QoS requirements when compared to the default allocators. © 2011 IEEE.

Conference paper

Leung TLY, Knottenbelt WJ, 2011, Consumer-to-Consumer Internet Auction Models, International Journal of Online Marketing, Vol: 1, Pages: 17-28, ISSN: 2156-1753

Internet auctions have become an increasingly common method for exchanging goods and services across the world, both among consumers themselves and between businesses and consumers. Internet auction mechanisms can incorporate procedures of much greater complexity and variety than conventional auctions, and they exhibit characteristics and properties that are quite distinct from them. In this paper, the authors provide an experimental study of the performance characteristics and operational behaviour of a number of online auction models, including fixed time forward auctions, Vickrey auctions, and models with soft close variable auction times. These online auction models are studied through systematic simulation experiments, based on a series of operational assumptions which characterise the arrival rate of bids, as well as the distribution from which the private values of buyers are sampled. Suggestions for efficient online auction design and procedures for improving auction performance are given, and the behaviour of the average auction income and average auction duration is quantified and compared.

Journal article

Lebrecht AS, Dingle NJ, Knottenbelt WJ, 2011, Analytical and Simulation Modelling of Zoned RAID Systems, COMPUTER JOURNAL, Vol: 54, Pages: 691-707, ISSN: 0010-4620

Journal article

Knottenbelt WJ, Lebrecht AS, Dingle NJ, 2011, Performance Models of Zoned Disk Arrays, Editors: Ivanyi, Topping, Publisher: Saxe-Coburg Publications, Pages: 105-134

Book chapter

Guenther MC, Dingle NJ, Bradley JT, Knottenbelt WJ et al., 2011, Passage-time computation and aggregation strategies for large semi-Markov processes, PERFORMANCE EVALUATION, Vol: 68, Pages: 221-236, ISSN: 0166-5316

Journal article

Anastasiou N, Horng TC, Knottenbelt W, 2011, Deriving generalised stochastic Petri Net performance models from high-precision location tracking data, Pages: 91-100

Stochastic performance models have been widely used to analyse the performance and reliability of systems that involve the flow and processing of customers and/or resources with multiple service centres. However, the quality of performance analysis delivered by a model depends critically on the degree to which the model accurately represents the operations of the real system. This paper presents an automated technique which takes as input high-precision location tracking data - potentially collected from a real life system -and constructs a hierarchical Generalised Stochastic Petri Net performance model of the underlying system. We examine our method's effectiveness and accuracy through two case studies based on synthetic location tracking data. Copyright © 2011 ICST.

Conference paper

Al-Begain K, Fiems D, Knottenbelt W, 2010, Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics): Preface, Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Vol: 6148 LNCS, ISSN: 0302-9743

Journal article

Thomas N, Bradley J, Knottenbelt W, Dingle N, Harder U et al., 2010, Preface, Pages: 1-4, ISSN: 1571-0661

The special issue of Electronic Notes in Theoretical Computer Science contains papers presented at the Fourth International Workshop on the Practical Application of Stochastic Modelling (PASM), held at Imperial College London, in September 2009. The papers in the issue cover a broad range of research in the area of applied stochastic modeling and involve both applications and the theory to enable practical application of modeling techniques. Wolter and Reinecke use a phase-type queuing model to investigate the performance of service-oriented systems. Harrison and Massink investigate the use of stochastic process algebra to analyze a wide range of properties of ubiquitous systems. Ciocchetta and Hillston use the Bio-PEPA process algebra to investigate a class of epidemiological model. Piazolla and Gribaudo exploit recently developed results in mean field analysis to explore the highly novel application area of television and cinema production.

Conference paper

Mason AM, Dingle NJ, Knottenbelt WJ, Bell D, Buchanan W, Thuemmler C et al., 2010, Modelling infection spread using location tracking, International Journal of Healthcare Technology and Management, Vol: 11, Pages: 442-461, ISSN: 1368-2156

The precision of location tracking technology has improved greatly over the last few decades. We aim to show that by tracking the locations of individuals in a closed environment, it is now possible to record the nature and frequency of interactions between them. Further, that it is possible to use such data to predict the way in which an infection will spread throughout such a population, given parameters such as transmission and recovery rates. We accordingly present a software package that is capable of recording and then replaying location data provided by a high-precision location tracking system. The software then employs a combination of SIR modelling and the epidemiological technique of contact tracing in order to predict the spread of an infection. We use this software to conduct a number of experiments using a sample data set, and compare the SIR graphs generated from these to similar graphs generated using the traditional SIR differential equations. Copyright © 2010 Inderscience Enterprises Ltd.

Journal article
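The SIR differential equations mentioned in the abstract above can be integrated numerically in a few lines. This is a generic textbook sketch, not the paper's contact-tracing model: the forward-Euler scheme and the parameter values (a normalised population with 1% initially infected and R0 = beta/gamma = 3) are illustrative assumptions.

```python
def simulate_sir(beta, gamma, s0, i0, r0, dt=0.01, steps=10000):
    """Forward-Euler integration of the classical SIR equations:
    dS/dt = -beta*S*I,  dI/dt = beta*S*I - gamma*I,  dR/dt = gamma*I."""
    s, i, r = s0, i0, r0
    history = [(s, i, r)]
    for _ in range(steps):
        new_infections = beta * s * i * dt
        recoveries = gamma * i * dt
        s -= new_infections
        i += new_infections - recoveries
        r += recoveries
        history.append((s, i, r))
    return history

# Normalised population: 1% initially infected, R0 = beta/gamma = 3,
# integrated out to t = 100, by which point the epidemic has peaked.
trajectory = simulate_sir(beta=0.3, gamma=0.1, s0=0.99, i0=0.01, r0=0.0)
```

Two sanity checks fall out directly: the three compartments always sum to the (constant) population size, and the infected fraction rises to a peak and then declines as susceptibles are depleted.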

2010, Analytical and Stochastic Modeling Techniques and Applications, 17th International Conference, ASMTA 2010, Cardiff, UK, June 14-16, 2010. Proceedings, Publisher: Springer

Conference paper

Kulatunga H, Knottenbelt WJ, Kadirkamanathan V, 2010, Adaptive Planning of Staffing Levels in Health Care Organisations, 2nd Congress on Electronic Healthcare for the 21st Century, Publisher: SPRINGER-VERLAG BERLIN, Pages: 88+, ISSN: 1867-8211

Conference paper

Conrad J, Knottenbelt W, 2009, Message from the programme committee chairs, Proceedings - IEEE Computer Society's Annual International Symposium on Modeling, Analysis, and Simulation of Computer and Telecommunications Systems, MASCOTS, ISSN: 1526-7539

Journal article

Argent-Katwala A, Bradley J, Dingle N, Harder U, Knottenbelt W et al., 2009, Performance Engineering, IET SOFTWARE, Vol: 3, Pages: 443-444, ISSN: 1751-8806

Journal article

2009, Proceedings of MASCOTS 2009, 17th Annual Meeting of the IEEE/ACM International Symposium on Modelling, Analysis and Simulation of Computer and Telecommunication Systems, MASCOTS 2009, International Symposium on Modelling, Analysis and Simulation of Computer and Telecommunication Systems, Imperial College London, Publisher: IEEE Computer Society Press

Message from the Programme Committee Chairs

On behalf of the Organising and Programme Committee, it is our pleasure to present to you the proceedings of MASCOTS 2009, the IEEE Computer Society's 17th International Symposium on Modeling, Analysis and Simulation of Computer and Telecommunications Systems, which will be held in London. Our society is ever more dependent on the reliable and high-performance operation of complex combinations of computer and communication technologies. As existing technologies evolve and new ones emerge, it remains critical to understand, predict and enhance system reliability and performance using stochastic models, simulation and analytical methods. Experimental studies are also needed to parameterise, calibrate and validate models against real-world observations. These are precisely the themes of the MASCOTS conference series. We are very pleased that this year's conference attracted 162 submissions from all over the world, many of which were of the highest quality. Such a large number of submissions implied a correspondingly high reviewing load, and we are very grateful to the Programme Committee members and many external reviewers who provided between three and six reviews for each submission.

Based on the reviews and on discussions in the Programme Committee, we accepted 32 extended papers of the highest quality, 22 high-quality regular papers and 21 posters. The accepted submissions were from 25 countries spanning five continents, included submissions with industrial co-authors from 7 different companies, and covered a diverse set of research areas (e.g. workload modelling, load management and scheduling, performance optimisation and reliability/availability modelling) and diverse application contexts (e.g. parallel and multicore systems, wireless networks and storage systems). The conference programme has been organised broadly to reflect these themes and includes invited keynote talks.

Conference paper

2009, Practical Applications of Stochastic Modelling 2009, PASM 2009, Practical Applications of Stochastic Modelling, Imperial College London

Welcome to the Fourth International Workshop on Practical Applications of Stochastic Modelling, held in conjunction with MASCOTS. PASM is organised as a friendly and inclusive event at which to share interesting and novel applications and developments in modelling theory. We strive to maintain an air of informality at the workshop which we hope fosters discussion and potential collaboration. This volume is the 'on-the-day' proceedings for delegates only. In common with previous PASM workshops, the official proceedings will be published later in Elsevier's Electronic Notes in Theoretical Computer Science. If you wish to cite a paper from the workshop, please use the ENTCS citation, which we will place on the PASM website when it is available. The workshop organisers would like to take this opportunity to extend their thanks to Michael Mislove, executive editor of ENTCS, for his continued support for PASM.

This is the fourth time PASM has been held; previous events were held in London in 2004, Newcastle in 2005 and Palma de Mallorca in 2008. On each occasion PASM has been collocated with a different host conference. This year we are delighted to be associated with MASCOTS for the first time. We would like to thank Tony Field for inviting us and for his help in organising PASM. Jeremy, Will and Nigel would like to thank Nick for doing all the practical work. The organisation of an event like PASM is a team effort and we've learnt to choose our team carefully. Particular thanks must go to the programme committee, who thoroughly refereed all the submissions and made some difficult choices easier by their detailed comments.

Finally we would like to thank you, the authors and delegates. Without you there would be no papers, no presentations, no questions, no discussion and consequently no PASM. This is your workshop; we hope you enjoy it!

Jeremy Bradley, Nick Dingle, William Knottenbelt and Nigel Thomas
PASM Co-chairs

Conference paper

Haggarty O, Knottenbelt WJ, Bradley JT, 2009, Distributed Response Time Analysis of GSPN Models with MapReduce, SIMULATION, Vol: 85, Pages: 497-509

Generalised Stochastic Petri nets (GSPNs) are widely used in the performance analysis of computer and communications systems. Response time densities and quantiles are often key outputs of such analysis. These can be extracted from a GSPN's underlying semi-Markov process using a method based on numerical Laplace transform inversion. This method typically requires the solution of thousands of systems of complex linear equations, each of rank n, where n is the number of states in the model. For large models substantial processing power is needed and the computation must therefore be distributed.

This paper describes the implementation of a Response Time Analysis module for the Platform Independent Petri net Editor (PIPE2) which interfaces with Hadoop, an open source implementation of Google's MapReduce distributed programming environment, to provide distributed calculation of response time densities in GSPN models. The software is validated with analytically calculated results as well as simulated ones for larger models. Excellent scalability is shown.

Journal article

Guenther MC, Bradley JT, Dingle NJ, Knottenbelt WJ et al., 2009, Truncation of Passage-time Calculations in Semi-Markov Models, UKPEW'09, 25th UK Performance Engineering Workshop, Pages: 17-28

Calculation of passage time distributions in large semi-Markov models can be accomplished by means of a previously-presented iterative algorithm, the core of which is repeated sparse matrix-vector multiplication. The algorithm's performance is therefore highly dependent on the number of multiplications of matrix and vector elements that must be performed during each iteration. At the same time, the products of matrix and vector elements that are very small contribute little to the overall result of the multiplication. In this paper, we investigate the effect of ignoring these values on both the performance and accuracy of the iterative passage time algorithm. We show that in the models we analyse here this truncation significantly reduces the number of multiplications which must be performed, and hence significantly reduces the running time of the algorithm, with little effect on the accuracy of the final result.

Conference paper
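The truncation idea above is straightforward to demonstrate: during sparse matrix-vector multiplication, skip element products whose magnitude falls below a threshold, trading a small loss of accuracy for fewer multiplications. The sketch below uses toy illustrative data and a plain list-of-lists sparse format; it is not the iterative passage-time algorithm itself.

```python
def sparse_matvec(rows, x, threshold=0.0):
    """Sparse matrix-vector product y = A x, where rows[i] is a list of
    (column, value) pairs for row i. Element products whose magnitude
    falls below `threshold` are skipped rather than accumulated."""
    y = []
    skipped = 0
    for row in rows:
        acc = 0.0
        for col, val in row:
            term = val * x[col]
            if abs(term) < threshold:
                skipped += 1  # contribution deemed negligible
                continue
            acc += term
        y.append(acc)
    return y, skipped

# A 3x3 sparse matrix with two tiny entries that truncation discards.
rows = [[(0, 0.5), (2, 1e-9)], [(1, 0.25)], [(0, 1e-9), (2, 0.75)]]
x = [1.0, 1.0, 1.0]
exact, _ = sparse_matvec(rows, x)
approx, skipped = sparse_matvec(rows, x, threshold=1e-6)
```

Here two of the six stored products are skipped while the result changes only at the 1e-9 level; in the iterative algorithm this saving is compounded over every matrix-vector multiplication per iteration.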

Guenther MC, Dingle NJ, Bradley JT, Knottenbelt WJ et al., 2009, Aggregation strategies for large semi-Markov processes, 3rd International Symposium on Semi-Markov Models: Theory and Applications

High-level semi-Markov modelling paradigms such as semi-Markov stochastic Petri nets and process algebras are used to capture realistic performance models of computer and communication systems but often have the drawback of generating huge underlying semi-Markov processes. Extraction of performance measures such as steady-state probabilities and passage-time distributions therefore relies on sparse matrix-vector operations involving very large transition matrices. The computational complexity of such operations depends on the number of rows and non-zeros in the matrix.

Previous studies have shown that exact state-by-state aggregation of semi-Markov processes can be applied to reduce the number of states. However, it is important to select the order in which states are aggregated judiciously so as to avoid a dramatic increase in matrix density caused by the creation of additional transitions between remaining states. Our paper addresses this issue by presenting the concept of state space partitioning for aggregation. Partitioning the state space entails dividing it into a number of non-intersecting subsets. In contrast to previous algorithms that perform state-by-state aggregation on the global unpartitioned graph, our new algorithm works on a local partition-by-partition basis which allows more space-efficient aggregation. We introduce different partitioning methods for this purpose. Furthermore we discuss partition sorting methods that determine the order in which partitions should be aggregated. This order has a significant impact on the connectivity of the aggregate state space, and thus the density of the transition matrix.

Aggregation of partitions can be done in one of two ways. The first way is to use exact state-by-state aggregation to aggregate each individual state within a partition. For this purpose, we present a new state ordering algorithm, which takes into account the exact number of new transitions that are created when aggregating a particular state.

Conference paper

This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.
