Imperial College London

Professor William Knottenbelt

Faculty of Engineering, Department of Computing

Professor of Applied Quantitative Analysis
 
 
 

Contact

 

+44 (0)20 7594 8331 | w.knottenbelt | Website

 
 

Location

 

E363, ACE Extension, South Kensington Campus



Publications


246 results found

Forshaw M, Knottenbelt W, Thomas N, Wolter K et al., 2018, Preface, Electronic Notes in Theoretical Computer Science, Vol: 337, Pages: 1-3, ISSN: 1571-0661

Journal article

Zamyatin A, Stifter N, Schindler P, Weippl E, Knottenbelt W et al., 2018, Flux: revisiting near blocks for proof-of-work blockchains, Cryptology ePrint Archive: Report 2018/415

The term near or weak blocks describes Bitcoin blocks whose PoW does not meet the required target difficulty to be considered valid under the regular consensus rules of the protocol. Near blocks are generally associated with protocol improvement proposals striving towards shorter transaction confirmation times. Existing proposals assume miners will act rationally based solely on intrinsic incentives arising from the adoption of these changes, such as earlier detection of blockchain forks. In this paper we present Flux, a protocol extension for proof-of-work blockchains that leverages near blocks, a new block reward distribution mechanism, and an improved branch selection policy to incentivize honest participation of miners. Our protocol reduces mining variance, improves the responsiveness of the underlying blockchain in terms of transaction processing, and can be deployed without conflicting modifications to the underlying base protocol as a velvet fork. We perform an initial analysis of selfish mining which suggests Flux not only provides security guarantees similar to pure Nakamoto consensus, but potentially renders selfish mining strategies less profitable.

Journal article

Wolter K, Knottenbelt W, 2018, Proceedings of the 2018 ACM/SPEC International Conference on Performance Engineering, ICPE 2018, Berlin, Germany, April 09-13, 2018, 2018 ACM/SPEC International Conference on Performance Engineering, ICPE, Publisher: ACM

Conference paper

Wolter K, Knottenbelt W, 2018, Companion of the 2018 ACM/SPEC International Conference on Performance Engineering, ICPE 2018, Berlin, Germany, April 09-13, 2018, 2018 ACM/SPEC International Conference on Performance Engineering, ICPE, Publisher: ACM

Conference paper

Zamyatin A, Stifter N, Judmayer A, Schindler P, Weippl E, Knottenbelt W et al., 2018, (Short Paper) A Wild Velvet Fork Appears! Inclusive Blockchain Protocol Changes in Practice, 5th Workshop on Bitcoin and Blockchain Research at Financial Cryptography and Data Security 2018

The loosely defined terms hard fork and soft fork have established themselves as descriptors of different classes of upgrade mechanisms for the underlying consensus rules of (proof-of-work) blockchains. Recently, a novel approach termed velvet fork, which expands upon the concept of a soft fork, was outlined. Specifically, velvet forks intend to avoid the possibility of disagreement over a change of rules by rendering modifications to the protocol backward compatible and inclusive to legacy blocks. We present an overview and definitions of these different upgrade mechanisms and outline their relationships. In doing so, we expose examples where velvet forks or similar constructions are already actively employed in Bitcoin and other cryptocurrencies. Furthermore, we expand upon the concept of velvet forks by proposing possible applications and discuss potentially arising security implications.

Conference paper

Zamyatin A, Harz D, Knottenbelt WJ, 2018, Issue, Trade, Redeem: Crossing Systems Bounds with Cryptocurrency-Backed Tokens., IACR Cryptology ePrint Archive, Vol: 2018, Pages: 643-643

Journal article

2018, Proceedings of the Ninth International Workshop on the Practical Application of Stochastic Modelling, PASM 2017, Berlin, Germany, September 9, 2017, Publisher: Elsevier

Conference paper

Pesu T, Kettunen J, Knottenbelt WJ, Wolter K et al., 2017, Three-way optimisation of response time, subtask dispersion and energy consumption in split-merge systems, VALUETOOLS 2017: 11th EAI International Conference on Performance Evaluation Methodologies and Tools, Publisher: ACM

This paper investigates various ways in which the three-way trade-off between task response time, subtask dispersion and energy consumption can be improved in split-merge queueing systems. Four ideas, namely dynamic subtask dispersion reduction, state-dependent service times, multiple redundant subtask service servers and restarting subtask service, are examined in the paper. It transpires that all four techniques can be used to improve the trade-off, while combinations of the techniques are not necessarily beneficial.

Conference paper

Zamyatin A, Wolter K, Werner S, Mulligan CEA, Harrison PG, Knottenbelt WJ et al., 2017, Swimming with fishes and sharks: beneath the surface of queue-based Ethereum mining pools, 25th Annual Meeting of the IEEE International Symposium on Modelling, Analysis and Simulation of Computer and Telecommunication Systems, Publisher: IEEE

Cryptocurrency mining can be said to be the modern alchemy, involving as it does the transmutation of electricity into digital gold. The goal of mining is to guess the solution to a cryptographic puzzle, the difficulty of which is determined by the network, and thence to win the block reward and transaction fees. Because the return on solo mining has a very high variance, miners band together to create so-called mining pools. These aggregate the power of several individual miners, and, by distributing the accumulated rewards according to some scheme, ensure a more predictable return for participants. In this paper we formulate a model of the dynamics of a queue-based reward distribution scheme in a popular Ethereum mining pool and develop a corresponding simulation. We show that the underlying mechanism disadvantages miners with above-average hash rates. We then consider two-miner scenarios and show how large miners may perform attacks to increase their profits at the expense of other participants of the mining pool. The outcomes of our analysis show the queue-based reward scheme is vulnerable to manipulation in its current implementation.

Conference paper
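The credit/queue-based reward dynamics analysed in the paper can be mimicked with a toy Monte Carlo simulation. The sketch below is only a rough illustration under assumed rules (one credit per submitted share; the full block reward goes to the miner with the most accumulated credits, whose credits are then reset); the hash rates, reward and share counts are invented parameters and the real pool mechanism may differ in detail.

    # Toy simulation of a credit/queue-based pool reward scheme (assumed rules:
    # one credit per share; block reward goes to the miner with most credits,
    # whose credits are then reset). All parameters are illustrative only.
    import random
    from collections import defaultdict

    HASH_RATES = {"small": 1.0, "medium": 5.0, "large": 20.0}  # relative share rates (made up)
    BLOCK_REWARD = 3.0        # reward per pool block (illustrative)
    SHARES_PER_BLOCK = 200    # pool shares submitted between blocks (illustrative)

    def simulate(n_blocks=2000, seed=42):
        rng = random.Random(seed)
        miners, weights = list(HASH_RATES), list(HASH_RATES.values())
        credits, earned = defaultdict(float), defaultdict(float)
        for _ in range(n_blocks):
            # Shares between two pool blocks, drawn in proportion to hash rate.
            for _ in range(SHARES_PER_BLOCK):
                credits[rng.choices(miners, weights)[0]] += 1
            # Entire reward goes to the miner at the "front of the queue" (most credits).
            winner = max(miners, key=lambda m: credits[m])
            earned[winner] += BLOCK_REWARD
            credits[winner] = 0.0
        total_rate = sum(weights)
        for m in miners:
            fair = n_blocks * BLOCK_REWARD * HASH_RATES[m] / total_rate
            print(f"{m:>6}: earned {earned[m]:9.1f} vs proportional share {fair:9.1f}")

    if __name__ == "__main__":
        simulate()

Comparing each miner's simulated earnings against its proportional (hash-rate-weighted) share gives a quick, informal view of whether the assumed scheme favours small or large miners.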

Mora SV, Knottenbelt WJ, 2017, Deep learning for domain-specific action recognition in tennis, 30th IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Publisher: IEEE, Pages: 170-178, ISSN: 2160-7508

Recent progress in sports analytics has been driven by the availability of spatio-temporal and high-level data. Video-based action recognition in sports can significantly contribute to these advances. Good progress has been made in the field of action recognition, but its application to sports mainly focuses on detecting which sport is being played. For action recognition to be useful in sports analytics, a finer-grained action classification is needed. For this reason we focus on fine-grained action recognition in tennis and explore the capabilities of deep neural networks for this task. In our model, videos are represented as sequences of features, extracted using the well-known Inception neural network, trained on an independent dataset. A 3-layered LSTM network is then trained for the classification. Our main contribution is the proposed neural network architecture, which achieves competitive results on the challenging THETIS dataset, comprising videos of tennis actions.

Conference paper
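As a rough illustration of the kind of architecture described in the abstract above, the Keras sketch below stacks three LSTM layers over pre-extracted per-frame feature vectors (such as Inception pooling outputs). The sequence length, feature dimensionality, layer widths and class count are placeholder values, not the paper's actual configuration.

    # Sketch of a 3-layer LSTM classifier over per-frame CNN features
    # (illustrative hyperparameters; not the configuration used in the paper).
    import numpy as np
    from tensorflow.keras import layers, models

    SEQ_LEN, FEAT_DIM, N_CLASSES = 100, 2048, 12   # placeholder values

    model = models.Sequential([
        layers.Input(shape=(SEQ_LEN, FEAT_DIM)),       # sequence of Inception-style features
        layers.LSTM(256, return_sequences=True),
        layers.LSTM(256, return_sequences=True),
        layers.LSTM(256),                               # final layer summarises the sequence
        layers.Dense(N_CLASSES, activation="softmax"),  # one tennis action class per video
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

    # Dummy data with the right shapes, just to show the training call.
    x = np.random.rand(8, SEQ_LEN, FEAT_DIM).astype("float32")
    y = np.random.randint(0, N_CLASSES, size=(8,))
    model.fit(x, y, epochs=1, batch_size=4, verbose=0)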

Pesu T, Knottenbelt WJ, 2017, Optimising hidden stochastic PERT networks, 10th EAI International Conference on Performance Evaluation Methodologies and Tools, Publisher: EAI, Pages: 133-136

This paper introduces a technique for minimising subtask dispersion in hidden stochastic PERT networks. The technique improves on existing research in two ways. Firstly, it enables subtask dispersion reduction in DAG structures, whereas previous techniques have only been applicable to single-layer split-merge or fork-join systems. Secondly, the exact distributions of subtask processing times do not need to be known, so long as there is some means of generating samples. The technique is further extended to use a metric which trades off subtask dispersion and task response time.

Conference paper
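The sampling idea in the abstract above can be made concrete with a small Monte Carlo sketch: given only black-box samplers for each subtask's processing time, estimate the expected subtask dispersion (latest minus earliest subtask completion) for a candidate vector of start delays. The distributions and delay values below are arbitrary placeholders, not examples from the paper.

    # Monte Carlo estimate of expected subtask dispersion for a split-merge-style
    # system, given only samplers for subtask processing times (placeholders).
    import numpy as np

    rng = np.random.default_rng(0)

    # Black-box samplers for three parallel subtasks (arbitrary distributions).
    samplers = [
        lambda n: rng.exponential(1.0, n),
        lambda n: rng.gamma(2.0, 1.5, n),
        lambda n: rng.uniform(0.5, 4.0, n),
    ]

    def expected_dispersion(delays, n_samples=100_000):
        """Mean of (latest - earliest) subtask completion time under given start delays."""
        finish = np.column_stack([d + s(n_samples) for d, s in zip(delays, samplers)])
        return (finish.max(axis=1) - finish.min(axis=1)).mean()

    print("no delays      :", round(expected_dispersion([0.0, 0.0, 0.0]), 3))
    print("delay fast task:", round(expected_dispersion([1.5, 0.0, 0.0]), 3))

Wrapping expected_dispersion (or a weighted combination with estimated response time) in a numerical optimiser is the natural next step when searching for good delay vectors.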

Haverkort B, Knottenbelt W, Remke A, Thomas N et al., 2016, Preface, Electronic Notes in Theoretical Computer Science, Vol: 327, Pages: 1-3, ISSN: 1571-0661

Journal article

Haughian G, Osman R, Knottenbelt WJ, 2016, Benchmarking replication in Cassandra and MongoDB NoSQL datastores, 27th International Conference, DEXA 2016, Publisher: Springer, Pages: 152-166, ISSN: 0302-9743

The proliferation in Web 2.0 applications has increased the volume, velocity, and variety of data sources which have exceeded the limitations and expected use cases of traditional relational DBMSs. Cloud serving NoSQL data stores address these concerns and provide replication mechanisms to ensure fault tolerance, high availability, and improved scalability. In this paper, we empirically explore the impact of replication on the performance of Cassandra and MongoDB NoSQL datastores. We evaluate the impact of replication in comparison to non-replicated clusters of equal size hosted on a private cloud environment. Our benchmarking experiments are conducted for read and write heavy workloads subject to different access distributions and tunable consistency levels. Our results demonstrate that replication must be taken into consideration in empirical and modelling studies in order to achieve an accurate evaluation of the performance of these datastores.

Conference paper

Wu H, Knottenbelt W, Wolter K, Sun Y et al., 2016, An optimal offloading partitioning algorithm in mobile cloud computing, 13th International Conference, QEST 2016, Publisher: Springer International Publishing AG, Pages: 311-328, ISSN: 0302-9743

Application partitioning splits an application's execution into local and remote parts. Through optimal partitioning, the device can obtain the most benefit from computation offloading. Due to unstable resources in the wireless network (bandwidth fluctuation, network latency, etc.) and at the service nodes (different speeds of the mobile device and cloud server, memory, etc.), the static partitioning solutions of previous work, which assume fixed bandwidth and speed, are unsuitable for mobile offloading systems. In this paper, we study how to effectively and dynamically partition a given application into local and remote parts while keeping the total cost as small as possible. We propose a novel min-cost offloading partitioning (MCOP) algorithm that aims at finding the optimal partitioning plan (determining which portions of the application to run on mobile devices and which on cloud servers) under different cost models and mobile environments. The simulation results show that the proposed algorithm provides a stable method with low time complexity which can significantly reduce execution time and energy consumption by optimally distributing tasks between mobile devices and cloud servers, while adapting well to environmental changes such as network perturbation.

Conference paper
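A common way to cast such partitioning problems is as a minimum s-t cut on a graph whose vertices are application components, with two terminals representing "run locally" and "run in the cloud" and edge capacities encoding execution and communication costs. The sketch below illustrates that generic textbook construction with networkx and made-up costs; it is not the paper's MCOP algorithm itself.

    # Generic min-cut formulation of application partitioning (illustrative costs;
    # a textbook construction, not the MCOP algorithm from the paper).
    import networkx as nx

    components = ["ui", "parser", "solver", "render"]
    local_cost  = {"ui": 1, "parser": 4, "solver": 20, "render": 3}   # cost if run on the device
    remote_cost = {"ui": 50, "parser": 2, "solver": 3, "render": 2}   # cost if offloaded ("ui" pinned local)
    comm_cost   = {("ui", "parser"): 2, ("parser", "solver"): 1, ("solver", "render"): 5}

    G = nx.DiGraph()
    for c in components:
        G.add_edge("LOCAL", c, capacity=remote_cost[c])   # cut => c runs remotely, pay remote cost
        G.add_edge(c, "CLOUD", capacity=local_cost[c])    # cut => c runs locally, pay local cost
    for (a, b), w in comm_cost.items():
        G.add_edge(a, b, capacity=w)                      # paid only if a and b end up
        G.add_edge(b, a, capacity=w)                      # on different sides of the cut

    cut_value, (local_side, cloud_side) = nx.minimum_cut(G, "LOCAL", "CLOUD")
    print("total cost  :", cut_value)
    print("run locally :", sorted(local_side - {"LOCAL"}))
    print("run in cloud:", sorted(cloud_side - {"CLOUD"}))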

McGinn D, Birch DA, Akroyd D, Molina-Solana M, Guo Y, Knottenbelt W et al., 2016, Visualizing Dynamic Bitcoin Transaction Patterns, Big Data, Vol: 4, Pages: 109-119, ISSN: 2167-647X

This work presents a systemic top-down visualization of Bitcoin transaction activity to explore dynamically generated patterns of algorithmic behavior. Bitcoin dominates the cryptocurrency markets and presents researchers with a rich source of real-time transactional data. The pseudonymous yet public nature of the data presents opportunities for the discovery of human and algorithmic behavioral patterns of interest to many parties such as financial regulators, protocol designers, and security analysts. However, retaining visual fidelity to the underlying data, and thereby a fuller understanding of activity within the network, remains challenging, particularly in real time. We expose an effective force-directed graph visualization employed in our large-scale data observation facility to accelerate this data exploration and derive useful insight among domain experts and the general public alike. The high-fidelity visualizations demonstrated in this article allowed for collaborative discovery of unexpected high frequency transaction patterns, including automated laundering operations, and the evolution of multiple distinct algorithmic denial of service attacks on the Bitcoin network.

Journal article
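For a static flavour of the force-directed approach described above, the snippet below lays out a tiny, hand-made transaction graph with networkx's spring (force-directed) layout; the system described in the article operates on live Bitcoin data at a far larger scale.

    # Minimal force-directed layout of a toy transaction graph (hand-made edges;
    # the actual system renders live Bitcoin transactions at far larger scale).
    import matplotlib.pyplot as plt
    import networkx as nx

    G = nx.DiGraph()
    # Edges point from spending addresses to receiving addresses via a transaction node.
    edges = [("addr_A", "tx_1"), ("tx_1", "addr_B"), ("tx_1", "addr_C"),
             ("addr_B", "tx_2"), ("addr_C", "tx_2"), ("tx_2", "addr_D")]
    G.add_edges_from(edges)

    pos = nx.spring_layout(G, seed=7)          # force-directed placement
    node_colors = ["tab:orange" if n.startswith("tx") else "tab:blue" for n in G]
    nx.draw(G, pos, with_labels=True, node_color=node_colors, node_size=900,
            font_size=8, arrows=True)
    plt.title("Toy transaction graph, force-directed layout")
    plt.show()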

Harrison PG, Patel NM, Knottenbelt WJ, 2016, Energy–performance trade-offs via the EP-queue, ACM Transactions on Modeling and Performance Evaluation of Computing Systems, Vol: 1, ISSN: 2376-3647

We introduce the EP queue -- a significant generalization of the MB/G/1 queue that has state-dependent service time probability distributions and incorporates power-up for first arrivals and power-down for idle periods. We derive exact results for the busy-time and response-time distributions. From these, we derive power consumption metrics during nonidle periods and overall response time metrics, which together provide a single measure of the trade-off between energy and performance. We illustrate these trade-offs for some policies and show how numerical results can provide insights into system behavior. The EP queue has application to storage systems, especially hard disks, and other data-center components such as compute servers, networking, and even hyperconverged infrastructure.

Journal article

Kelly J, Knottenbelt WJ, 2016, Does disaggregated electricity feedback reduce domestic electricity consumption? A systematic review of the literature, CoRR, Vol: abs/1605.00962

We examine 12 studies on the efficacy of disaggregated energy feedback. The average electricity reduction across these studies is 4.5%. However, 4.5% may be a positively-biased estimate of the savings achievable across the entire population because all 12 studies are likely to be prone to opt-in bias hence none test the effect of disaggregated feedback on the general population. Disaggregation may not be required to achieve these savings: Aggregate feedback alone drives 3% reductions; and the 4 studies which directly compared aggregate feedback against disaggregated feedback found that aggregate feedback is at least as effective as disaggregated feedback, possibly because web apps are viewed less often than in-home-displays (in the short-term, at least) and because some users do not trust fine-grained disaggregation (although this may be an issue with the specific user interface studied). Disaggregated electricity feedback may help a motivated sub-group of the population to save more energy but fine-grained disaggregation may not be necessary to achieve these energy savings. Disaggregation has many uses beyond those discussed in this paper but, on the specific question of promoting energy reduction in the general population, there is no robust evidence that current forms of disaggregated energy feedback are more effective than aggregate energy feedback. The effectiveness of disaggregated feedback may increase if the general population become more energy-conscious (e.g. if energy prices rise or concern about climate change deepens); or if users' trust in fine-grained disaggregation improves; or if innovative new approaches or alternative disaggregation strategies (e.g. disaggregating by behaviour rather than by appliance) out-perform existing feedback. We also discuss opportunities for new research into the effectiveness of disaggregated feedback.

Journal article

Parson O, Fisher G, Hersey A, Batra N, Kelly J, Singh A, Knottenbelt W, Rogers A et al., 2016, Dataport and NILMTK: A building data set designed for non-intrusive load monitoring, 3rd IEEE Global Conference on Signal and Information Processing (GlobalSIP), Publisher: IEEE, Pages: 210-214

Non-intrusive load monitoring (NILM), or energy disaggregation, is the process of using signal processing and machine learning to separate the energy consumption of a building into individual appliances. In recent years, a number of data sets have been released in order to evaluate such approaches, which contain both building-level and appliance-level energy data. However, these data sets typically cover less than 10 households due to the financial cost of such deployments, and are not released in a format which allows the data sets to be easily used by energy disaggregation researchers. To this end, the Dataport database was created by Pecan Street Inc, which contains 1 minute circuit-level and building-level electricity data from 722 households. Furthermore, the non-intrusive load monitoring toolkit (NILMTK) was released in 2014, which provides software infrastructure to support energy disaggregation research, such as data set parsers, benchmark disaggregation algorithms and accuracy metrics. This paper describes the release of a subset of the Dataport database in NILMTK format, containing one month of electricity data from 669 households. Through the release of this Dataport data in NILMTK format, we pose a challenge to the signal processing community to produce energy disaggregation algorithms which are both accurate and scalable.

Conference paper
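Assuming the released subset has been converted to NILMTK's HDF5 format, loading it follows the usual NILMTK tutorial pattern sketched below; the file path is a placeholder, and building numbering and available meters depend on the converted dataset.

    # Sketch of loading a NILMTK-format dataset (placeholder path; building and
    # appliance availability depend on the actual converted Dataport subset).
    from nilmtk import DataSet

    dataset = DataSet("/data/dataport.h5")        # hypothetical path to the converted HDF5 file
    building = dataset.buildings[1]               # first building in the dataset
    elec = building.elec                          # MeterGroup of mains + submeters

    mains_df = next(elec.mains().load())          # whole-building power readings as a DataFrame
    print(mains_df.head())

    # Per-appliance energy, useful as disaggregation ground truth.
    print(elec.submeters().energy_per_meter())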

2016, VALUETOOLS'15: Proceedings of the 9th EAI International Conference on Performance Evaluation Methodologies and Tools, Berlin, Germany, December 14-16, 2015, Publisher: ACM

Conference paper

2016, 8th International Workshop on Practical Application of Stochastic Modeling, PASM 2016, Münster, Germany, April 2016, Publisher: Elsevier

Conference paper

Pesu T, Knottenbelt WJ, 2015, Dynamic Subtask Dispersion Reduction in Heterogeneous Parallel Queueing Systems, Electronic Notes in Theoretical Computer Science, Vol: 318, Pages: 129-142, ISSN: 1571-0661

Fork-join and split-merge queueing systems are mathematical abstractions of parallel task processing systems in which entering tasks are split into N subtasks which are served by a set of heterogeneous servers. The original task is considered completed once all the subtasks associated with it have been serviced. Performance of split-merge and fork-join systems are often quantified with respect to two metrics: task response time and subtask dispersion. Recent research effort has been focused on ways to reduce subtask dispersion, or the product of task response time and subtask dispersion, by applying delays to selected subtasks. Such delays may be pre-computed statically, or varied dynamically. Dynamic in our context refers to the ability to vary the delay applied to a subtask according to the state of the system, at any time before the service of that subtask has begun. We assume that subtasks in service cannot be preempted. A key dynamic optimisation that benefits both metrics of interest is to remove delays on any subtask with a sibling that has already completed service. This paper incorporates such a policy into existing methods for computing optimal subtask delays in split-merge and fork-join systems. In the context of two case studies, we show that doing so affects the optimal delays computed, and leads to improved subtask dispersion values when compared with existing techniques. Indeed, in some cases, it turns out to be beneficial to initially postpone the processing of non-bottleneck subtasks until the bottleneck subtask has completed service.

Journal article

Kelly J, Knottenbelt WJ, 2015, Neural NILM: deep neural networks applied to energy disaggregation, BuildSys 2015, Publisher: ACM, Pages: 55-64

Energy disaggregation estimates appliance-by-appliance electricity consumption from a single meter that measures the whole home's electricity demand. Recently, deep neural networks have driven remarkable improvements in classification performance in neighbouring machine learning fields such as image classification and automatic speech recognition. In this paper, we adapt three deep neural network architectures to energy disaggregation: 1) a form of recurrent neural network called 'long short-term memory' (LSTM); 2) denoising autoencoders; and 3) a network which regresses the start time, end time and average power demand of each appliance activation. We use seven metrics to test the performance of these algorithms on real aggregate power data from five appliances. Tests are performed against a house not seen during training and against houses seen during training. We find that all three neural nets achieve better F1 scores (averaged over all five appliances) than either combinatorial optimisation or factorial hidden Markov models and that our neural net algorithms generalise well to an unseen house.

Conference paper
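As a loose illustration of the second architecture mentioned in the abstract (a denoising autoencoder over windows of aggregate power), the sketch below builds a small convolutional/dense network in Keras that maps an aggregate window to an estimated appliance signal; window length, layer sizes and the synthetic data are placeholders, not the paper's configuration.

    # Sketch of a denoising-autoencoder-style NILM network: aggregate power window in,
    # estimated appliance power for the same window out (placeholder sizes/data).
    import numpy as np
    from tensorflow.keras import layers, models

    WINDOW = 512  # samples of aggregate power per training window (placeholder)

    model = models.Sequential([
        layers.Input(shape=(WINDOW, 1)),
        layers.Conv1D(8, 4, padding="same", activation="relu"),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(WINDOW),                 # reconstructed appliance signal
        layers.Reshape((WINDOW, 1)),
    ])
    model.compile(optimizer="adam", loss="mse")

    # Synthetic stand-in data: aggregate = appliance + noise from other loads.
    appliance = np.random.rand(32, WINDOW, 1).astype("float32")
    aggregate = appliance + 0.5 * np.random.rand(32, WINDOW, 1).astype("float32")
    model.fit(aggregate, appliance, epochs=1, batch_size=8, verbose=0)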

Huang W-C, Knottenbelt W, 2015, Self-adaptive containers: interoperability extensions and cloud integration, 2014 IEEE 11th Intl Conf on Ubiquitous Intelligence & Computing and 2014 IEEE 11th Intl Conf on Autonomic & Trusted Computing and 2014 IEEE 14th Intl Conf on Scalable Computing and Communications and Its Associated Workshops (UIC-ATC-ScalCom), Publisher: IEEE

Driven by an ever-increasing diversity of application contexts, execution environments and scalability requirements, modern software is faced with the challenge of frequent code refactoring. To address this, we have proposed an STL-like self-adaptive container library, which dynamically changes its data structures and resource usage to meet programmer-specified Service Level Objectives relating to performance, reliability and primary memory use. A prototype of this library has been implemented and utilised in two case studies to prove its viability. In the present work, we explore a low-cost means to extend our library to satisfy wider classes of Service Level Objectives. This is achieved through the integration of third-party container frameworks, which exploit parallelism to boost performance and disk-based data offloading to reduce primary memory consumption, and the integration of cloud storage services, which offer cost-effective location-free storage. We demonstrate our library's application in a state-space exploration case study. With very low programmer overhead, experimental results show that our library can improve performance with a 76% reduction in insertion time and an 86% reduction in search time, and can also exploit out-of-core storage, including cloud storage.

Conference paper

Chen X, Rupprecht L, Osman R, Pietzuch P, Franciosi F, Knottenbelt W et al., 2015, CloudScope: diagnosing and managing performance interference in multi-tenant clouds, 23rd International Symposium on Modeling, Analysis and Simulation of Computer and Telecommunication Systems (MASCOTS), Publisher: IEEE, Pages: 164-173, ISSN: 1526-7539

Virtual machine consolidation is attractive in cloud computing platforms for several reasons including reduced infrastructure costs, lower energy consumption and ease of management. However, the interference between co-resident workloads caused by virtualization can violate the service level objectives (SLOs) that the cloud platform guarantees. Existing solutions to minimize interference between virtual machines (VMs) are mostly based on comprehensive micro-benchmarks or online training which makes them computationally intensive. In this paper, we present CloudScope, a system for diagnosing interference for multi-tenant cloud systems in a lightweight way. CloudScope employs a discrete-time Markov Chain model for the online prediction of performance interference of co-resident VMs. It uses the results to optimally (re)assign VMs to physical machines and to optimize the hypervisor configuration, e.g. the CPU share it can use, for different workloads. We have implemented CloudScope on top of the Xen hypervisor and conducted experiments using a set of CPU, disk, and network intensive workloads and a real system (MapReduce). Our results show that CloudScope interference prediction achieves an average error of 9%. The interference-aware scheduler improves VM performance by up to 10% compared to the default scheduler. In addition, the hypervisor reconfiguration can improve network throughput by up to 30%.

Conference paper
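The discrete-time Markov chain idea in the abstract can be illustrated in miniature: given a transition matrix over coarse interference states, the stationary distribution gives the long-run fraction of time a co-resident VM spends in each state. The three states and transition probabilities below are invented for illustration and are not taken from the paper.

    # Stationary distribution of a small discrete-time Markov chain over
    # interference states (transition probabilities invented for illustration).
    import numpy as np

    states = ["low-interference", "medium-interference", "high-interference"]
    P = np.array([
        [0.80, 0.15, 0.05],
        [0.30, 0.50, 0.20],
        [0.10, 0.40, 0.50],
    ])  # rows sum to 1: P[i, j] = Pr(next state j | current state i)

    # Solve pi P = pi with sum(pi) = 1 via the eigenvector for eigenvalue 1 of P^T.
    eigvals, eigvecs = np.linalg.eig(P.T)
    pi = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1.0))])
    pi = pi / pi.sum()

    for s, p in zip(states, pi):
        print(f"{s:22s} long-run share: {p:.3f}")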

Wu H, Knottenbelt W, Wolter K, 2015, Analysis of the Energy-Response Time Tradeoff for Mobile Cloud Offloading Using Combined Metrics, Pages: 134-142

Mobile offloading migrates heavy computation from mobile devices to cloud servers using one or more communication network channels. Communication interfaces vary in speed, energy consumption and degree of availability. We assume two interfaces: WiFi, which is fast with low energy demand but not always present, and cellular, which is slightly slower, has higher energy consumption, but is present at all times. We study two different communication strategies: one that selects the best available interface for each transmitted packet, and one that multiplexes data across the available communication channels. Since the latter may experience interruptions in the WiFi connection, packets can be delayed. We call it the interrupted strategy, as opposed to the uninterrupted strategy, which transmits packets only over currently available networks. Two key concerns of mobile offloading are the energy use of the mobile terminal and the response time experienced by the user of the mobile device. In this context, we investigate three different metrics that express the energy-performance tradeoff: the known Energy-Response time Weighted Sum (EWRS), the Energy-Response time Product (ERP) and the Energy-Response time Weighted Product (ERWP) metric. We apply the metrics to the two offloading strategies and find that the conclusions drawn from the analysis depend on the considered metric. In particular, an additive metric is not normalised, which implies that the term using the smaller scale is always favoured, whereas the ERWP metric, which is new in this paper, allows one to assign importance to both aspects without being misled by different scales; it combines the advantages of an additive metric and a product. The interrupted strategy can save energy, especially if the focus in the tradeoff metric lies on the energy aspect. In general one can say that the uninterrupted strategy is faster, while the interrupted strategy uses less energy. A fast connection improves the response time much more than the f

Conference paper
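One plausible reading of the three trade-off metrics named above, with E the mean energy consumption, T the mean response time and w a weight in [0, 1], is sketched below; the exact definitions and normalisations used in the paper may differ, and the strategy numbers are made up.

    # Hedged sketch of the three energy-response time trade-off metrics named in
    # the abstract (one plausible formalisation; the paper's definitions may differ).
    def ewrs(energy, time, w=0.5):
        """Energy-Response time Weighted Sum: w*E + (1-w)*T (scale-sensitive)."""
        return w * energy + (1 - w) * time

    def erp(energy, time):
        """Energy-Response time Product: E * T."""
        return energy * time

    def erwp(energy, time, w=0.5):
        """Energy-Response time Weighted Product: E**w * T**(1-w)."""
        return energy ** w * time ** (1 - w)

    # Comparing two hypothetical offloading strategies (made-up numbers).
    strategies = {"uninterrupted": (12.0, 3.0), "interrupted": (9.0, 4.5)}  # (energy, time)
    for name, (e, t) in strategies.items():
        print(f"{name:13s}  EWRS={ewrs(e, t):6.2f}  ERP={erp(e, t):6.2f}  "
              f"ERWP={erwp(e, t, w=0.7):6.2f}")

Because the weighted product is dimensionally balanced, changing the units of energy or time rescales all strategies equally, which is the property the abstract highlights over an additive metric.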

Kelly J, Knottenbelt W, 2015, The UK-DALE dataset, domestic appliance-level electricity demand and whole-house demand from five UK homes, Scientific Data, Vol: 2

Many countries are rolling out smart electricity meters. These measure a home's total power demand. However, research into consumer behaviour suggests that consumers are best able to improve their energy efficiency when provided with itemised, appliance-by-appliance consumption information. Energy disaggregation is a computational technique for estimating appliance-by-appliance energy consumption from a whole-house meter signal. To conduct research on disaggregation algorithms, researchers require data describing not just the aggregate demand per building but also the 'ground truth' demand of individual appliances. In this context, we present UK-DALE: an open-access dataset from the UK recording Domestic Appliance-Level Electricity at a sample rate of 16 kHz for the whole-house and at 1/6 Hz for individual appliances. This is the first open access UK dataset at this temporal resolution. We recorded from five houses, one of which was recorded for 655 days, the longest duration we are aware of for any energy dataset at this sample rate. We also describe the low-cost, open-source, wireless system we built for collecting our dataset.

Journal article

Nika M, Wilding T, Fiems D, De Turck K, Knottenbelt W et al., 2015, Going multi-viral: synthedemic modelling of internet-based spreading phenomena, 8th International Conference on Performance Evaluation Methodologies and Tools, Publisher: ICST, Pages: 50-57

Epidemics of a biological and technological nature pervade modern life. For centuries, scientific research focused on biological epidemics, with simple compartmental epidemiological models emerging as the dominant explanatory paradigm. Yet there has been limited translation of this effort to explain internet-based spreading phenomena. Indeed, single-epidemic models are inadequate to explain the multimodal nature of complex phenomena. In this paper we propose a novel paradigm for modelling internet-based spreading phenomena based on the composition of multiple compartmental epidemiological models. Our approach is inspired by Fourier analysis, but rather than trigonometric wave forms, our components are compartmental epidemiological models. We show results on simulated multiple epidemic data, swine flu data and BitTorrent downloads of a popular music artist. Our technique can characterise these multimodal data sets utilising a parsimonious number of subepidemic models.

Conference paper
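The compositional idea can be sketched numerically: simulate several independent SIR-style sub-epidemics with different parameters and onset times, and treat the observed curve as the sum of their infection counts. The parameters below are arbitrary; the paper additionally fits such compositions to real data rather than just generating them.

    # Sketch of composing two SIR sub-epidemics into one multimodal curve
    # (arbitrary parameters; the paper fits such compositions to observed data).
    import numpy as np

    def sir_infected(beta, gamma, n, i0, t_start, t_max, dt=0.1):
        """Euler-integrated SIR model; returns infected counts on a shared time grid."""
        steps = int(t_max / dt)
        s, i, r = n - i0, float(i0), 0.0
        curve = np.zeros(steps)
        for k in range(steps):
            if k * dt >= t_start:                  # epidemic only active after its onset
                new_inf = beta * s * i / n * dt
                new_rec = gamma * i * dt
                s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
                curve[k] = i
        return curve

    t_max = 120.0
    epidemic_1 = sir_infected(beta=0.40, gamma=0.10, n=10000, i0=5, t_start=0.0,  t_max=t_max)
    epidemic_2 = sir_infected(beta=0.55, gamma=0.20, n=6000,  i0=3, t_start=60.0, t_max=t_max)
    combined = epidemic_1 + epidemic_2             # the observed multimodal signal

    print("peak of sub-epidemic 1 at t ~=", round(np.argmax(epidemic_1) * 0.1, 1))
    print("peak of sub-epidemic 2 at t ~=", round(np.argmax(epidemic_2) * 0.1, 1))
    print("combined peak infected  ~=", int(combined.max()))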

Beltrán M, Knottenbelt W, 2015, Preface, Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Vol: 9272, ISSN: 0302-9743

Journal article

Beltrán M, Knottenbelt W, Bradley J, 2015, Computer Performance Engineering: 12th European Workshop, EPEW 2015, Madrid, Spain, August 31 - September 1, 2015, Proceedings, ISSN: 0302-9743

Conference paper

