Harz D, Gudgeon L, Gervais A, et al., Balance: dynamic adjustment of cryptocurrency deposits, 2019 ACM SIGSAC Conference on Computer & Communications Security (CCS '19), Publisher: ACM
In cryptoeconomic protocols, financial deposits are fundamental to their security. Protocol designers and their agents face a trade-off when choosing the deposit size. While substantial deposits might increase the protocol security, for example by minimising the impact of adversarial behaviour or risks of currency fluctuations, locked-up capital incurs opportunity costs for agents. Moreover, some protocols require over-collateralization in anticipation of future events and malicious intentions of agents. We present Balance, an application-agnostic system that reduces over-collateralization without compromising protocol security. In Balance, malicious agents receive no additional utility for cheating once their deposits are reduced. At the same time, honest and rational agents increase their utilities for behaving honestly as their opportunity costs for the locked-up deposits are reduced. Balance is a round-based mechanism in which agents need to continuously perform desired actions. Rather than treating agents' incentives and behaviour as ancillary, we explicitly model agents' utility, proving the conditions for incentive compatibility. Balance improves social welfare given a distribution of honest, rational, and malicious agents. Further, we integrate Balance with a cross-chain interoperability protocol, XCLAIM, reducing deposits by 10% while maintaining the same utility for behaving honestly. Our implementation allows any number of agents to be maintained for at most 55,287 gas (≈ USD 0.07) to update the agents' scores, and at a cost of 54,948 gas (≈ USD 0.07) to update the assignment of agents to layers.
Koutsouri A, Poli F, Alfieri E, et al., Balancing cryptoassets and gold: a weighted-risk-contribution index for the alternative asset space, 1st International Conference on Mathematical Research for Blockchain Economy, Publisher: Springer Verlag, ISSN: 0302-9743
Bitcoin is foremost amongst the emerging asset class known as cryptoassets. Two noteworthy characteristics of the returns of non-stablecoin cryptoassets are their high volatility, which brings with it a high level of risk, and their high intraclass correlation, which limits the benefits that can be had by diversifying across multiple cryptoassets. Yet cryptoassets exhibit no correlation with gold, a highly-liquid yet scarce asset which has proved to function as a safe haven during crises affecting traditional financial systems. As exemplified by Shannon's Demon, a lack of correlation between assets opens the door to principled risk control through so-called volatility harvesting involving periodic rebalancing. In this paper we propose an index which combines a basket of five cryptoassets with an investment in gold in a way that aims to improve the risk profile of the resulting portfolio while preserving its independence from mainstream financial asset classes such as stocks, bonds and fiat currencies. We generalise the theory of Equal Risk Contribution to allow for weighting according to a desired level of contribution to volatility. We find a crypto–gold weighting based on Weighted Risk Contribution to be historically more effective in terms of Sharpe Ratio than several alternative asset allocation strategies including Shannon's Demon. Within the crypto-basket, whose constituents are selected and rebalanced monthly, we find an Equal Weighting scheme to be more effective in terms of the same metric than a market capitalisation weighting.
Seakhoa-King S, Balaji P, Alvarez NT, et al., 2019, Revenue-Driven Scheduling in Drone Delivery Networks with Time-sensitive Service Level Agreements, 12th EAI International Conference on Performance Evaluation Methodologies and Tools (VALUETOOLS), Publisher: ASSOC COMPUTING MACHINERY, Pages: 183-186
Wu H, Knottenbelt W, Wolter K, 2019, An efficient application partitioning algorithm in mobile environments, IEEE Transactions on Parallel and Distributed Systems, ISSN: 1045-9219
Application partitioning, which splits executions into local and remote parts, plays a critical role in high-performance mobile offloading systems. Mobile devices can obtain the most benefit from Mobile Cloud Computing (MCC) or Mobile Edge Computing (MEC) through optimal partitioning. Due to unstable resources at the wireless network (network disconnection, bandwidth fluctuation, network latency, etc.) and at the service nodes (different speeds of mobile devices and cloud/edge servers, memory, etc.), static partitioning solutions with fixed bandwidth and speed assumptions are unsuitable for offloading systems. In this paper, we study how to dynamically partition a given application into local and remote parts effectively, while keeping the total cost as small as possible. For general tasks (i.e., arbitrary topological consumption graphs), we propose a Min-Cost Offloading Partitioning (MCOP) algorithm that aims at finding the optimal partitioning plan (determining which portions of the application to run on mobile devices and which portions on cloud/edge servers) under different cost models and mobile environments. Simulation results show that the MCOP algorithm provides a stable method with low time complexity which significantly reduces execution time and energy consumption by optimally distributing tasks between mobile devices and servers. Moreover, it adapts well to changes in the mobile environment.
Werner S, Pritz P, Zamyatin A, et al., Uncle traps: harvesting rewards in a queue-based Ethereum mining pool, 12th EAI International Conference on Performance Evaluation Methodologies and Tools, Publisher: ACM
Mining pools in Proof-of-Work cryptocurrencies allow miners to pool their computational resources as a means of reducing payout variance. In Ethereum, uncle blocks are valid Proof-of-Work solutions which do not become the head of the blockchain, yet yield rewards if later referenced by main chain blocks. Mining pool operators are faced with the non-trivial task of fairly distributing rewards for both block types among pool participants. Inspired by empirical observations, we formally reconstruct a Sybil attack exploiting the uncle block distribution policy in a queue-based mining pool. To ensure fairness of the queue-based payout scheme, we propose a mitigation. We examine the effectiveness of the attack strategy under the current and the proposed policy via a discrete-event simulation. Our findings show that the observed attack can indeed be obviated by altering the current reward scheme.
Zamyatin A, Harz D, Lind J, et al., 2018, XCLAIM: decentralized, interoperable, cryptocurrency-backed assets, 40th IEEE Symposium on Security and Privacy (IEEE S&P 2019), Publisher: IEEE
Building trustless cross-blockchain trading protocols is challenging. Centralized exchanges thus remain the preferred route to execute transfers across blockchains. However, these services require trust and therefore undermine the very nature of the blockchains on which they operate. To overcome this, several decentralized exchanges have recently emerged which offer support for atomic cross-chain swaps (ACCS). ACCS enable the trustless exchange of cryptocurrencies across blockchains, and are the only known mechanism to do so. However, ACCS suffer significant limitations; they are slow, inefficient and costly, meaning that they are rarely used in practice. We present XCLAIM: the first generic framework for achieving trustless and efficient cross-chain exchanges using cryptocurrency-backed assets (CBAs). XCLAIM offers protocols for issuing, transferring, swapping and redeeming CBAs securely in a non-interactive manner on existing blockchains. We instantiate XCLAIM between Bitcoin and Ethereum and evaluate our implementation; it costs less than USD 0.50 to issue an arbitrary amount of Bitcoin-backed tokens on Ethereum. We show XCLAIM is not only faster, but also significantly cheaper than atomic cross-chain swaps. Finally, XCLAIM is compatible with the majority of existing blockchains without modification, and enables several novel cryptocurrency applications, such as cross-chain payment channels and efficient multi-party swaps.
Stewart I, Ilie D, Zamyatin A, et al., Committing to Quantum Resistance: A Slow Defence for Bitcoin against a Fast Quantum Computing Attack, Royal Society Open Science, ISSN: 2054-5703
Quantum computers are expected to have a dramatic impact on numerous fields, due to their anticipated ability to solve classes of mathematical problems much more efficiently than their classical counterparts. This particularly applies to domains involving integer factorisation and discrete logarithms, such as public key cryptography. In this paper we consider the threats a quantum-capable adversary could impose on Bitcoin, which currently uses the Elliptic Curve Digital Signature Algorithm (ECDSA) to sign transactions. We then propose a simple but slow commit-delay-reveal protocol, which allows users to securely move their funds from old (non-quantum-resistant) outputs to those adhering to a quantum-resistant digital signature scheme. The transition protocol functions even if ECDSA has already been compromised. While our scheme requires modifications to the Bitcoin protocol, these can be implemented as a soft fork.
Zamyatin A, Stifter N, Schindler P, et al., 2018, Flux: revisiting near blocks for proof-of-work blockchains, Cryptology ePrint Archive: Report 2018/415
The term near or weak blocks describes Bitcoin blocks whose PoW does not meet the required target difficulty to be considered valid under the regular consensus rules of the protocol. Near blocks are generally associated with protocol improvement proposals striving towards shorter transaction confirmation times. Existing proposals assume miners will act rationally based solely on intrinsic incentives arising from the adoption of these changes, such as earlier detection of blockchain forks. In this paper we present Flux, a protocol extension for proof-of-work blockchains that leverages near blocks, a new block reward distribution mechanism, and an improved branch selection policy to incentivize honest participation of miners. Our protocol reduces mining variance, improves the responsiveness of the underlying blockchain in terms of transaction processing, and can be deployed without conflicting modifications to the underlying base protocol as a velvet fork. We perform an initial analysis of selfish mining which suggests Flux not only provides security guarantees similar to pure Nakamoto consensus, but potentially renders selfish mining strategies less profitable.
Wolter K, Knottenbelt W, 2018, Proceedings of the 2018 ACM/SPEC International Conference on Performance Engineering, ICPE 2018, Berlin, Germany, April 09-13, 2018, 2018 ACM/SPEC International Conference on Performance Engineering, ICPE, Publisher: ACM
Wolter K, Knottenbelt W, 2018, Companion of the 2018 ACM/SPEC International Conference on Performance Engineering, ICPE 2018, Berlin, Germany, April 09-13, 2018, 2018 ACM/SPEC International Conference on Performance Engineering, ICPE, Publisher: ACM
Zamyatin A, Stifter N, Judmayer A, et al., (Short Paper) A Wild Velvet Fork Appears! Inclusive Blockchain Protocol Changes in Practice, 5th Workshop on Bitcoin and Blockchain Research at Financial Cryptography and Data Security 2018
The loosely defined terms hard fork and soft fork have established themselves as descriptors of different classes of upgrade mechanisms for the underlying consensus rules of (proof-of-work) blockchains. Recently, a novel approach termed velvet fork, which expands upon the concept of a soft fork, was outlined. Specifically, velvet forks intend to avoid the possibility of disagreement by a change of rules through rendering modifications to the protocol backward compatible and inclusive to legacy blocks. We present an overview and definitions of these different upgrade mechanisms and outline their relationships. Hereby, we expose examples where velvet forks or similar constructions are already actively employed in Bitcoin and other cryptocurrencies. Furthermore, we expand upon the concept of velvet forks by proposing possible applications and discuss potentially arising security implications.
Harz D, Knottenbelt W, 2018, Towards Safer Smart Contracts: A Survey of Languages and Verification Methods
With a market capitalisation of over USD 205 billion in just under ten years, public distributed ledgers have experienced significant adoption. Apart from novel consensus mechanisms, their success is also accountable to smart contracts. These programs allow distrusting parties to enter agreements that are executed autonomously. However, implementation issues in smart contracts have caused severe losses to the users of such contracts. Significant efforts are taken to improve their security by introducing new programming languages and advanced verification methods. We provide a survey of those efforts in two parts. First, we introduce several smart contract languages focussing on security features. To that end, we present an overview concerning paradigm, type, instruction set, semantics, and metering. Second, we examine verification tools and methods for smart contracts and distributed ledgers. Accordingly, we introduce their verification approach, level of automation, coverage, and supported languages. Last, we present future research directions including formal semantics, verified compilers, and automated verification.
Kurpas D, 2018, Preface, FAMILY MEDICINE AND PRIMARY CARE REVIEW, Vol: 20, ISSN: 1734-3402
Zamyatin A, Harz D, Knottenbelt WJ, 2018, Issue, Trade, Redeem: Crossing Systems Bounds with Cryptocurrency-Backed Tokens., IACR Cryptology ePrint Archive, Vol: 2018, Pages: 643-643
Pesu T, Kettunen J, Knottenbelt WJ, et al., 2017, Three-way optimisation of response time, subtask dispersion and energy consumption in split-merge systems, Pages: 244-251
This paper investigates various ways in which the triple trade-off metrics between task response time, subtask dispersion and energy can be improved in split-merge queueing systems. Four ideas, namely dynamic subtask dispersion reduction, state-dependent service times, multiple redundant subtask service servers and restarting subtask service, are examined in the paper. It transpires that all four techniques can be used to improve the triple trade-off, while combinations of the techniques are not necessarily beneficial.
Mora SV, Knottenbelt WJ, 2017, Deep Learning for Domain-Specific Action Recognition in Tennis, 30th IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Publisher: IEEE, Pages: 170-178, ISSN: 2160-7508
Zamyatin A, Wolter K, Werner S, et al., Swimming with fishes and sharks: beneath the surface of queue-based ethereum mining pools, 25th Annual Meeting of the IEEE International Symposium on Modelling, Analysis and Simulation of Computer and Telecommunication Systems, Publisher: IEEE
Cryptocurrency mining can be said to be the modern alchemy, involving as it does the transmutation of electricity into digital gold. The goal of mining is to guess the solution to a cryptographic puzzle, the difficulty of which is determined by the network, and thence to win the block reward and transaction fees. Because the return on solo mining has a very high variance, miners band together to create so-called mining pools. These aggregate the power of several individual miners, and, by distributing the accumulated rewards according to some scheme, ensure a more predictable return for participants. In this paper we formulate a model of the dynamics of a queue-based reward distribution scheme in a popular Ethereum mining pool and develop a corresponding simulation. We show that the underlying mechanism disadvantages miners with above-average hash rates. We then consider two-miner scenarios and show how large miners may perform attacks to increase their profits at the expense of other participants of the mining pool. The outcomes of our analysis show the queue-based reward scheme is vulnerable to manipulation in its current implementation.
Pesu T, Knottenbelt WJ, 2017, Optimising hidden stochastic PERT networks, 10th EAI International Conference on Performance Evaluation Methodologies and Tools, Publisher: EAI, Pages: 133-136
This paper introduces a technique for minimising subtask dispersion in hidden stochastic PERT networks. The technique improves on existing research in two ways. Firstly, it enables subtask dispersion reduction in DAG structures, whereas previous techniques have only been applicable to single-layer split-merge or fork-join systems. Secondly, the exact distributions of subtask processing times do not need to be known, so long as there is some means of generating samples. The technique is further extended to use a metric which trades off subtask dispersion and task response time.
Haughian G, Osman R, Knottenbelt WJ, 2016, Benchmarking replication in cassandra and MongoDB NoSQL datastores, 27th International Conference, DEXA 2016, Publisher: Springer, Pages: 152-166, ISSN: 0302-9743
The proliferation in Web 2.0 applications has increased the volume, velocity, and variety of data sources which have exceeded the limitations and expected use cases of traditional relational DBMSs. Cloud serving NoSQL data stores address these concerns and provide replication mechanisms to ensure fault tolerance, high availability, and improved scalability. In this paper, we empirically explore the impact of replication on the performance of Cassandra and MongoDB NoSQL datastores. We evaluate the impact of replication in comparison to non-replicated clusters of equal size hosted on a private cloud environment. Our benchmarking experiments are conducted for read and write heavy workloads subject to different access distributions and tunable consistency levels. Our results demonstrate that replication must be taken into consideration in empirical and modelling studies in order to achieve an accurate evaluation of the performance of these datastores.
This work presents a systemic top-down visualization of Bitcoin transaction activity to explore dynamically generated patterns of algorithmic behavior. Bitcoin dominates the cryptocurrency markets and presents researchers with a rich source of real-time transactional data. The pseudonymous yet public nature of the data presents opportunities for the discovery of human and algorithmic behavioral patterns of interest to many parties such as financial regulators, protocol designers, and security analysts. However, retaining visual fidelity to the underlying data to retain a fuller understanding of activity within the network remains challenging, particularly in real time. We expose an effective force-directed graph visualization employed in our large-scale data observation facility to accelerate this data exploration and derive useful insight among domain experts and the general public alike. The high-fidelity visualizations demonstrated in this article allowed for collaborative discovery of unexpected high frequency transaction patterns, including automated laundering operations, and the evolution of multiple distinct algorithmic denial of service attacks on the Bitcoin network.
Harrison PG, Patel NM, Knottenbelt WJ, 2016, Energy–performance trade-offs via the EP-queue, ACM Transactions on Modeling and Performance Evaluation of Computing Systems, Vol: 1, ISSN: 2376-3647
We introduce the EP queue -- a significant generalization of the M^B/G/1 queue that has state-dependent service time probability distributions and incorporates power-up for first arrivals and power-down for idle periods. We derive exact results for the busy-time and response-time distributions. From these, we derive power consumption metrics during nonidle periods and overall response time metrics, which together provide a single measure of the trade-off between energy and performance. We illustrate these trade-offs for some policies and show how numerical results can provide insights into system behavior. The EP queue has application to storage systems, especially hard disks, and other data-center components such as compute servers, networking, and even hyperconverged infrastructure.
Kelly J, Knottenbelt WJ, 2016, Does disaggregated electricity feedback reduce domestic electricity consumption? A systematic review of the literature, CoRR, Vol: abs/1605.00962
We examine 12 studies on the efficacy of disaggregated energy feedback. The average electricity reduction across these studies is 4.5%. However, 4.5% may be a positively-biased estimate of the savings achievable across the entire population because all 12 studies are likely to be prone to opt-in bias hence none test the effect of disaggregated feedback on the general population. Disaggregation may not be required to achieve these savings: Aggregate feedback alone drives 3% reductions; and the 4 studies which directly compared aggregate feedback against disaggregated feedback found that aggregate feedback is at least as effective as disaggregated feedback, possibly because web apps are viewed less often than in-home-displays (in the short-term, at least) and because some users do not trust fine-grained disaggregation (although this may be an issue with the specific user interface studied). Disaggregated electricity feedback may help a motivated sub-group of the population to save more energy but fine-grained disaggregation may not be necessary to achieve these energy savings. Disaggregation has many uses beyond those discussed in this paper but, on the specific question of promoting energy reduction in the general population, there is no robust evidence that current forms of disaggregated energy feedback are more effective than aggregate energy feedback. The effectiveness of disaggregated feedback may increase if the general population become more energy-conscious (e.g. if energy prices rise or concern about climate change deepens); or if users' trust in fine-grained disaggregation improves; or if innovative new approaches or alternative disaggregation strategies (e.g. disaggregating by behaviour rather than by appliance) out-perform existing feedback. We also discuss opportunities for new research into the effectiveness of disaggregated feedback.
Wu H, Knottenbelt WJ, Wolter K, et al., 2016, An Optimal Offloading Partitioning Algorithm in Mobile Cloud Computing., Publisher: Springer, Pages: 311-328
Pesu T, Knottenbelt WJ, 2015, Dynamic Subtask Dispersion Reduction in Heterogeneous Parallel Queueing Systems, Electronic Notes in Theoretical Computer Science, Vol: 318, Pages: 129-142, ISSN: 1571-0661
Fork-join and split-merge queueing systems are mathematical abstractions of parallel task processing systems in which entering tasks are split into N subtasks which are served by a set of heterogeneous servers. The original task is considered completed once all the subtasks associated with it have been serviced. Performance of split-merge and fork-join systems are often quantified with respect to two metrics: task response time and subtask dispersion. Recent research effort has been focused on ways to reduce subtask dispersion, or the product of task response time and subtask dispersion, by applying delays to selected subtasks. Such delays may be pre-computed statically, or varied dynamically. Dynamic in our context refers to the ability to vary the delay applied to a subtask according to the state of the system, at any time before the service of that subtask has begun. We assume that subtasks in service cannot be preempted. A key dynamic optimisation that benefits both metrics of interest is to remove delays on any subtask with a sibling that has already completed service. This paper incorporates such a policy into existing methods for computing optimal subtask delays in split-merge and fork-join systems. In the context of two case studies, we show that doing so affects the optimal delays computed, and leads to improved subtask dispersion values when compared with existing techniques. Indeed, in some cases, it turns out to be beneficial to initially postpone the processing of non-bottleneck subtasks until the bottleneck subtask has completed service.
Chen X, Rupprecht L, Osman R, et al., 2015, CloudScope: diagnosing and managing performance interference in multi-tenant clouds, 23rd International Symposium on Modeling, Analysis and Simulation of Computer and Telecommunication Systems (MASCOTS), Publisher: IEEE, Pages: 164-173, ISSN: 1526-7539
Virtual machine consolidation is attractive in cloud computing platforms for several reasons including reduced infrastructure costs, lower energy consumption and ease of management. However, the interference between co-resident workloads caused by virtualization can violate the service level objectives (SLOs) that the cloud platform guarantees. Existing solutions to minimize interference between virtual machines (VMs) are mostly based on comprehensive micro-benchmarks or online training which makes them computationally intensive. In this paper, we present CloudScope, a system for diagnosing interference for multi-tenant cloud systems in a lightweight way. CloudScope employs a discrete-time Markov Chain model for the online prediction of performance interference of co-resident VMs. It uses the results to optimally (re)assign VMs to physical machines and to optimize the hypervisor configuration, e.g. the CPU share it can use, for different workloads. We have implemented CloudScope on top of the Xen hypervisor and conducted experiments using a set of CPU, disk, and network intensive workloads and a real system (MapReduce). Our results show that CloudScope interference prediction achieves an average error of 9%. The interference-aware scheduler improves VM performance by up to 10% compared to the default scheduler. In addition, the hypervisor reconfiguration can improve network throughput by up to 30%.
Parson O, Fisher G, Hersey A, et al., 2015, Dataport and NILMTK: A Building Data Set Designed for Non-intrusive Load Monitoring, 3rd IEEE Global Conference on Signal and Information Processing (GlobalSIP), Publisher: IEEE, Pages: 210-214
Chen X, Knottenbelt WJ, 2015, A performance tree-based monitoring platform for clouds, Pages: 97-98
Cloud-based software systems are expected to deliver reliable performance under dynamic workload while efficiently managing resources. Conventional monitoring frameworks provide limited support for flexible and intuitive performance queries. In this paper, we present a prototype monitoring and control platform for clouds that is a better fit to the characteristics of cloud computing (e.g. extensible, user-defined, scalable). Service Level Objectives (SLOs) are expressed graphically as Performance Trees, while violated SLOs trigger mitigating control actions.
Bradley J, Knottenbelt W, Thomas N, 2015, Preface, Electronic Notes in Theoretical Computer Science, Vol: 310, Pages: 1-3, ISSN: 1571-0661
Kelly J, Knottenbelt WJ, 2015, Neural NILM: Deep Neural Networks Applied to Energy Disaggregation., Publisher: ACM, Pages: 55-64
, 2015, An Optimal Offloading Partitioning Algorithm in Mobile Cloud Computing., CoRR, Vol: abs/1510.07986
This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.