Imperial College London

Professor Michael Huth

Faculty of Engineering, Department of Computing

Head of the Department of Computing
 
 
 

Contact

 

m.huth Website

 
 

Location

 

Huxley 566, Huxley Building, South Kensington Campus



 

Publications


143 results found

Xiong X, Wang Z, Knottenbelt W, Huth M et al., 2023, Demystifying Just-in-Time (JIT) liquidity attacks on Uniswap V3, 2023 5th Conference on Blockchain Research & Applications for Innovative Networks and Services (BRAINS), Publisher: IEEE, ISSN: 2835-3021

Uniswap is currently the most liquid Decentralized Exchange (DEX) on Ethereum. In May 2021, it upgraded to the third protocol version named Uniswap V3. The key feature update is “concentrated liquidity”, which supports liquidity provision within custom price ranges. However, this design introduces a new type of Miner Extractable Value (MEV) source called Just-in-Time (JIT) liquidity attack, where the adversary mints and burns a liquidity position right before and after a sizable swap. We begin by formally defining the JIT liquidity attack and subsequently conduct empirical measurements on Ethereum. Over a span of 20 months, we identify 36,671 such attacks, which have collectively generated profits of 7,498 ETH. Our analysis suggests that the JIT liquidity attack essentially represents a whales’ game, predominantly controlled by a select few bots. The most active bot, identified as 0xa57...6CF, has managed to amass 92% of the total profit. Furthermore, we find that this attack strategy poses significant entry barriers, as it necessitates adversaries to provide liquidity that is, on average, 269 times greater than the swap volume. In addition, our findings reveal that the JIT liquidity attack exhibits relatively poor profitability, with an average Return On Investment (ROI) of merely 0.007%. We also find this type of attack to be detrimental to existing Liquidity Providers (LPs) within the pool, as their shares of liquidity undergo an average dilution of 85%. On the contrary, this attack proves advantageous for liquidity takers, who secure execution prices that are, on average, 0.139% better than before. We further dissect the behaviors of the top MEV bots and evaluate their strategies through local simulation. Our observations reveal that the most active bot, 0xa57...6CF, conducted 27% of non-optimal attacks, thereby failing to capture at least 7,766 ETH (equivalent to 16.1M USD) of the potential attack profit.
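As a rough sanity check on the headline figures above, the following Python sketch recomputes the per-attack averages from the quoted totals; all inputs are taken from the abstract and the arithmetic is purely illustrative.

```python
# Back-of-the-envelope check of the JIT statistics quoted in the abstract.
total_attacks = 36_671
total_profit_eth = 7_498
avg_roi = 0.00007                      # 0.007% average Return On Investment

avg_profit = total_profit_eth / total_attacks
print(f"average profit per attack: {avg_profit:.3f} ETH")         # ~0.204 ETH

# An average ROI of 0.007% implies committed capital of roughly
# profit / ROI per attack, i.e. liquidity on the order of thousands of ETH,
# consistent with the high entry barrier the abstract reports.
implied_capital = avg_profit / avg_roi
print(f"implied capital per attack: {implied_capital:,.0f} ETH")   # ~2,921 ETH
```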

Conference paper

Karame G, Huth M, Vishik C, 2020, An overview of blockchain science and engineering, ROYAL SOCIETY OPEN SCIENCE, Vol: 7, ISSN: 2054-5703

Journal article

Ah-Fat P, Huth M, 2020, Protecting Private Inputs: Bounded Distortion Guarantees With Randomised Approximations, Privacy Enhancing Technologies Symposium

Conference paper

Huth M, 2019, The merits of compositional abstraction: A case study in propositional logic, Lecture Notes in Computer Science, Publisher: Springer International Publishing, Pages: 297-309, ISBN: 9783030223472

We revisit a well-established and old topic in computational logic: algorithms – such as the one by Quine-McCluskey – that convert a formula of propositional logic into a semantically equivalent disjunctive normal form whose clauses are all prime implicants of that formula. This exercise in education is meant to honor Bernhard Steffen, who made important contributions in formal verification and its use of compositional abstraction, and who is a role model in transferring research insights into teaching addressed at students with varying skill levels. The algorithm we propose here is indeed compositional and can teach students about the value of compositional abstractions – making use of simple lattice-theoretic and semantic concepts.
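For readers unfamiliar with prime implicants, here is a minimal sketch of the classical, non-compositional Quine-McCluskey merging step that the chapter revisits; it is the textbook algorithm, not the compositional variant the chapter proposes.

```python
# Minimal Quine-McCluskey pass: repeatedly merge implicants differing in
# exactly one bit; anything that never merges is a prime implicant.
def prime_implicants(minterms, n_bits):
    # Implicants are strings over {'0', '1', '-'}; '-' marks a dropped variable.
    current = {format(m, f"0{n_bits}b") for m in minterms}
    primes = set()
    while current:
        merged, used = set(), set()
        terms = sorted(current)
        for i, a in enumerate(terms):
            for b in terms[i + 1:]:
                diff = [k for k in range(n_bits) if a[k] != b[k]]
                # Merge only if the terms differ in exactly one non-'-' bit.
                if len(diff) == 1 and a[diff[0]] != '-' and b[diff[0]] != '-':
                    merged.add(a[:diff[0]] + '-' + a[diff[0] + 1:])
                    used.update({a, b})
        primes |= current - used      # unmerged terms are prime
        current = merged
    return primes

# x'y + xy' + xy over 2 variables: minterms {1, 2, 3} -> primes {'1-', '-1'}.
print(prime_implicants({1, 2, 3}, 2))
```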

Book chapter

Bao S, Cao Y, Lei A, Asuquo P, Cruickshank H, Sun Z, Huth M et al., 2019, Pseudonym Management Through Blockchain: Cost-Efficient Privacy Preservation on Intelligent Transportation Systems, IEEE ACCESS, Vol: 7, Pages: 80390-80403, ISSN: 2169-3536

Journal article

Cheung K, Huth M, Kirk L, Lundbaek L, Marques R, Petsche J et al., 2019, Owner-centric sharing of physical resources, data, and data-driven insights in digital ecosystems, 24th ACM Symposium on Access Control Models and Technologies, Publisher: ACM, Pages: 73-81

We are living in an age in which digitization will connect more and more physical assets with IT systems and where IoT endpoints will generate a wealth of valuable data. Companies, individual users, and organizations alike therefore have the need to control their own physical or non-physical assets and data sources. At the same time, they recognize the need for, and opportunity to, share access to such data and digitized physical assets. This paper sets out our technology vision for such sharing ecosystems, reports initial work in that direction, identifies challenges for realizing this vision, and seeks feedback and collaboration from the academic access-control community in that R&D space.

Conference paper

Ah-Fat P, Huth M, 2019, Optimal accuracy privacy trade-off for secure computations, IEEE Transactions on Information Theory, Vol: 65, Pages: 3165-3182, ISSN: 0018-9448

The purpose of Secure Multi-Party Computation is to enable protocol participants to compute a public function of their private inputs while keeping their inputs secret, without resorting to any trusted third party. However, opening the public output of such computations inevitably reveals some information about the private inputs. We propose a measure generalising both Rényi entropy and g-entropy so as to quantify this information leakage. In order to control and restrain such information flows, we introduce the notion of function substitution which replaces the computation of a function that reveals sensitive information with that of an approximate function. We exhibit theoretical bounds for the privacy gains that this approach provides and experimentally show that this enhances the confidentiality of the inputs while controlling the distortion of computed output values. Finally, we investigate the inherent compromise between accuracy of computation and privacy of inputs and we demonstrate how to realise such optimal trade-offs.
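For orientation, the Rényi entropy of order α, one of the two notions the paper's generalised measure subsumes (the other being g-entropy), is the standard quantity below; the paper's unified measure itself is not reproduced here.

```latex
% R\'enyi entropy of order \alpha of a random variable X with distribution p;
% Shannon entropy is the limit \alpha \to 1, min-entropy the limit \alpha \to \infty.
\[
  H_\alpha(X) \;=\; \frac{1}{1-\alpha}\,\log_2 \sum_{x} p(x)^{\alpha},
  \qquad \alpha \ge 0,\ \alpha \neq 1 .
\]
```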

Journal article

Nicolescu R, Huth MRA, Radanliev P, De Roure D et al., 2018, Mapping the values of IoT, Journal of Information Technology, Vol: 33, Pages: 345-360, ISSN: 0268-3962

We investigate the emerging meanings of “value” associated with the Internet of Things. Given the current political economy, we argue that the multiple meanings of “value” cannot be reduced to a single domain or discipline, but rather they are invariably articulated at the juxtaposition of three domains: social, economic, and technical. We analyse each of these domains and present domain challenges and cross-domain implications – drawing from an interdisciplinary literature review and gap analysis across sources from academia, business, and governments. We propose a functional model that aggregates these findings into a value-driven logic of the emerging global political economy enabled by digital technology in general and IoT in particular. These conceptual contributions highlight the critical need for an interdisciplinary understanding of the meaning of “value”, so that IoT services and products will create and sustain such concurrent meanings during their entire lifecycle, from design to consumption and retirement or recycling.

Journal article

Radanliev P, De Roure DC, Nicolescu R, Huth M, Montalvo RM, Cannady S, Burnap P et al., 2018, Future developments in cyber risk assessment for the internet of things, Computers in Industry, Vol: 102, Pages: 14-22, ISSN: 0166-3615

This article is focused on the economic impact assessment of the Internet of Things (IoT) and its associated cyber risk vectors and vertices – a reinterpretation of IoT verticals. We adapt to IoT both the Cyber Value at Risk model, a well-established model for measuring the maximum possible loss over a given time period, and the MicroMort model, a widely used model for predicting uncertainty through units of mortality risk. The resulting new IoT MicroMort for calculating IoT risk is tested and validated with real data from BullGuard's IoT Scanner (over 310,000 scans) and the Gartner report on IoT connected devices. Two calculations are developed: the current state of IoT cyber risk and the future forecasts of IoT cyber risk. Our work therefore advances the efforts of integrating cyber risk impact assessments and offers a better understanding of economic impact assessment for IoT cyber risk.

Journal article

Lundbaek L, Beutel DJ, Huth MRA, Jackson S, Kirk L, Steiner R et al., 2018, Proof of Kernel Work: a democratic low-energy consensus for distributed access-control protocols, Royal Society Open Science, Vol: 5, ISSN: 2054-5703

We adjust the Proof of Work (PoW) consensus mechanism used in Bitcoin and Ethereum so that we can build on its strength while also addressing, in part, some of its perceived weaknesses. Notably, our work is motivated by the high energy consumption for mining PoW, and we want to restrict the use of PoW to a configurable, expected size of nodes, as a function of the local blockchain state. The approach we develop for this rests on three pillars: (i) Proof of Kernel Work (PoKW), a means of dynamically reducing the set of nodes that can participate in the solving of PoW puzzles such that an adversary cannot increase his attack surface because of such a reduction; (ii) Practical Adaptation of Existing Technology, a realization of this PoW reduction through an adaptation of existing blockchain and enterprise technology stacks; and (iii) Machine Learning for Adaptive System Resiliency, the use of techniques from artificial intelligence to make our approach adaptive to system, network and attack dynamics. We develop here, in detail, the first pillar and illustrate the second pillar through a real use case, a pilot project done with Porsche on controlling permissions to vehicle and data log accesses. We also discuss pertinent attack vectors for PoKW consensus and their mitigation. Moreover, we sketch how our approach may lead to more democratic PoKW-based blockchain systems for public networks that may inherit the resilience of blockchains based on PoW.
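A minimal sketch of the first pillar's core idea, dynamically shrinking the set of PoW-eligible nodes, might use a hash lottery over the local chain state; the function below is a hypothetical illustration under assumed names, not the paper's construction.

```python
# Hypothetical PoKW-style eligibility lottery: of n registered nodes, an
# expected k may attempt the PoW puzzle in a given round. The check is
# deterministic and publicly verifiable, since it depends only on the node's
# public key and the (public) chain state.
import hashlib

def eligible(node_pubkey: bytes, chain_state: bytes, k: int, n: int) -> bool:
    digest = hashlib.sha256(node_pubkey + chain_state).digest()
    ticket = int.from_bytes(digest, "big")     # uniform in [0, 2**256)
    return ticket < (2**256 * k) // n          # keep the k/n fraction of tickets
```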

Journal article

Mistry M, Callia D'Iddio A, Huth MRA, Misener R et al., 2018, Satisfiability modulo theories for process systems engineering, Computers and Chemical Engineering, Vol: 113, Pages: 98-114, ISSN: 1873-4375

Process systems engineers have long recognized the importance of both logic and optimization for automated decision-making. But modern challenges in process systems engineering could strongly benefit from methodological contributions in computer science. In particular, we propose satisfiability modulo theories (SMT) for process systems engineering applications. We motivate SMT using a series of test beds and show the applicability of SMT algorithms and implementations on (i) two-dimensional bin packing, (ii) model explainers, and (iii) MINLP solvers.
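To give a feel for test bed (i), here is a toy SMT encoding of two-dimensional packing into a single bin, written against Z3's Python bindings (`pip install z3-solver`); the instance and encoding are illustrative assumptions, not the paper's formulation.

```python
# Toy 2D packing as an SMT problem: place rectangles in a W x H bin such that
# each stays inside the bin and no two overlap.
from z3 import Ints, Or, Solver, sat

W, H = 4, 4                              # bin width and height
items = [(3, 2), (2, 2), (1, 4)]         # (width, height) of each rectangle

s = Solver()
pos = []
for i, (w, h) in enumerate(items):
    x, y = Ints(f"x{i} y{i}")
    s.add(0 <= x, x + w <= W, 0 <= y, y + h <= H)   # stay inside the bin
    pos.append((x, y, w, h))

# Non-overlap: one rectangle lies fully left of, right of, below, or above the other.
for i in range(len(items)):
    for j in range(i + 1, len(items)):
        xi, yi, wi, hi = pos[i]
        xj, yj, wj, hj = pos[j]
        s.add(Or(xi + wi <= xj, xj + wj <= xi, yi + hi <= yj, yj + hj <= yi))

if s.check() == sat:
    m = s.model()
    for i, (x, y, w, h) in enumerate(pos):
        print(f"item {i} ({w}x{h}) at ({m[x]}, {m[y]})")
```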

Journal article

Taylor P, Allpress S, Carr M, Lupu E, Norton J, Smith L, Blackstock J, Boyes H, Hudson-Smith A, Brass I, Chizari H, Cooper R, Coultron P, Craggs B, Davies N, De Roure D, Elsden M, Huth M, Lindley J, Maple C, Mittelstadt B, Nicolescu R, Nurse J, Procter R, Radanliev P, Rashid A, Sgandurra D, Skatova A, Taddeo M, Tanczer L, Vieira-Steiner R, Watson JDM, Wachter S, Wakenshaw S, Carvalho G, Thompson RJ, Westbury PS et al., 2018, Internet of Things: Realising the Potential of a Trusted Smart World, Internet of Things: Realising the Potential of a Trusted Smart World, London, Publisher: Royal Academy of Engineering: London

This report examines the policy challenges for the Internet of Things (IoT), and raises a broad range of issues that need to be considered if policy is to be effective and the potential economic value of IoT is harnessed. It builds on the Blackett review, The Internet of Things: making the most of the second digital revolution, adding detailed knowledge based on research from the PETRAS Cybersecurity of the Internet of Things Research Hub and input from Fellows of the Royal Academy of Engineering. The report targets government policymakers, regulators, standards bodies and national funding bodies, and will also be of interest to suppliers and adopters of IoT products and services.

Report

Balduccini M, Griffor E, Huth MRA, Vishik C, Bruns M, Wollman D et al., 2018, Ontology-based reasoning about the trustworthiness of cyber-physical systems, Living in the Internet of Things, Publisher: Institution of Engineering and Technology

It has been challenging for the technical and regulatory communities to formulate requirements for trustworthiness of the cyber-physical systems (CPS) due to the complexity of the issues associated with their design, deployment, and operations. The US National Institute of Standards and Technology (NIST), through a public working group, has released a CPS Framework that adopts a broad and integrated view of CPS and positions trustworthiness among other aspects of CPS. This paper takes the model created by the CPS Framework and its further developments one step further, by applying ontological approaches and reasoning techniques in order to achieve greater understanding of CPS. The example analyzed in the paper demonstrates the enrichment of the original CPS model obtained through ontology and reasoning and its ability to deliver additional insights to the developers and operators of CPS.

Conference paper

Balduccini M, Griffor E, Huth M, Vishik C, Burns M, Wollman D et al., 2018, Reasoning about Smart City, 4th IEEE International Conference on Smart Computing (SMARTCOMP), Publisher: IEEE, Pages: 381-386

Conference paper

Lundbaek L, Callia d'Iddio A, Huth MRA, 2017, Centrally governed blockchains: optimizing security, cost, and availability, Models, Algorithms, Logics and Tools, Publisher: Springer, Pages: 578-599

We propose the formal study of blockchains that are owned and controlled by organizations and that neither create cryptocurrencies nor provide incentives to solvers of cryptographic puzzles. We view such approaches as frameworks in which system parts, such as the cryptographic puzzle, may be instantiated with different technology. Owners of such a blockchain procure puzzle solvers as resources they control, and use a mathematical model to compute optimal parameters for the cryptographic puzzle mechanism or other parts of the blockchain. We illustrate this approach with a use case in which blockchains record hashes of financial process transactions to increase their trustworthiness and that of their audits. For Proof of Work as cryptographic puzzle, we develop a detailed mathematical model to derive MINLP optimization problems for computing optimal Proof of Work configuration parameters that trade off potentially conflicting aspects such as availability, resiliency, security, and cost in this governed setting. We demonstrate the utility of such a mining calculus by applying it to some instances of this problem. We hope that our work may facilitate the creation of domain-specific blockchains for a wide range of applications such as trustworthy information in Internet of Things systems and bespoke improvements of legacy financial services.
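For orientation, the standard Proof-of-Work arithmetic that optimisation models of this kind build on can be summarised as follows; the paper's full MINLP formulation is richer and is not reproduced here.

```latex
% With n procured solvers, per-solver hash rate r, and puzzle-success
% probability p per hash, the block inter-arrival time T is approximately
% exponentially distributed with mean
\[
  \mathbb{E}[T] \;\approx\; \frac{1}{n\,r\,p},
\]
% so the owner trades cost (larger n) against availability (smaller
% \mathbb{E}[T]) by tuning the puzzle difficulty 1/p.
```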

Book chapter

Ah-Fat P, Huth MRA, 2017, Secure multi-party computation: information flow of outputs and game theory, 6th International Conference on Principles of Security and Trust (POST), Publisher: Springer Verlag, Pages: 71-92, ISSN: 0302-9743

Secure multiparty computation enables protocol participants to compute the output of a public function of their private inputs whilst protecting the confidentiality of their inputs. But such an output, as a function of its inputs, inevitably leaks some information about input values regardless of the protocol used to compute it. We introduce foundations for quantifying and understanding how such leakage may influence input behaviour of deceitful protocol participants as well as that of participants they target. Our model captures the beliefs and knowledge that participants have about what input values other participants may choose. In this model, measures of information flow that may arise between protocol participants are introduced, formally investigated, and experimentally evaluated. These information-theoretic measures not only suggest advantageous input behaviour to deceitful participants for optimal updates of their beliefs about chosen inputs of targeted participants. They also allow targets to quantify the information-flow risk of their input choices. We show that this approach supports a game-theoretic formulation in which deceitful attackers wish to maximise the information that they gain on inputs of targets once the computation output is known, whereas the targets wish to protect the privacy of their inputs.

Conference paper

Huth MRA, Ah-Fat P, 2016, Partial solvers for parity games: effective polynomial-time composition, Electronic Proceedings in Theoretical Computer Science, EPTCS, Vol: 226, Pages: 1-15, ISSN: 2075-2180

Partial methods play an important role in formal methods and beyond. Recently such methods were developed for parity games, where polynomial-time partial solvers decide the winners of a subset of nodes. We investigate here how effective polynomial-time partial solvers can be by studying interactions of partial solvers based on generic composition patterns that preserve polynomial-time computability. We show that use of such composition patterns discovers new partial solvers - including those that merge node sets that have the same but unknown winner - by studying games that composed partial solvers can neither solve nor simplify. We experimentally validate that this data-driven approach to refinement leads to polynomial-time partial solvers that can solve all standard benchmarks of structured games. For one of these polynomial-time partial solvers, not a single random game was found that it does not solve completely, even among a few billion random games of varying configurations.
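Schematically, such a composition runs a pool of partial solvers to a joint fixed point; the sketch below assumes a hypothetical solver interface (each solver returns newly decided nodes and a residual game) and is not the paper's implementation.

```python
# Compose partial parity-game solvers by iterating them to a joint fixed point.
# Each solver maps a game to (newly_decided, residual_game). The composition
# preserves polynomial time: every productive round decides at least one node,
# so there are at most |game| productive rounds.
def compose(partial_solvers, game):
    decided = {}
    progress = True
    while progress and game.nodes:
        progress = False
        for solver in partial_solvers:
            newly_decided, game = solver(game)
            if newly_decided:
                decided.update(newly_decided)
                progress = True
    return decided, game   # residual game is empty iff the game is fully solved
```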

Journal article

Huth MRA, Beaumont P, Evans N, Plant T et al., 2016, Bounded analysis of constrained dynamical systems: a case study in nuclear arms control, INMM Annual Conference 2016, Publisher: Institute of Nuclear Materials Management

We introduce a simple dynamical system that describes key features of a bilateral nuclear arms control regime. The evolution of each party's beliefs and declarations under the regime is represented, and the effects of inspection processes are captured. Bounded analysis of this model allows us to explore, within a finite horizon, the consequences of changes to the rules of the arms control process and to the strategies of each party, bounded scope invariants for variables of interest, and dynamics for initial states containing strict uncertainty. Together these would potentially enable a decision support system to consider cases of interest irrespective of unknowns. We realize such abilities by building a Python package that draws on the capabilities of a Satisfiability Modulo Theory (SMT) solver to explore particular scenarios and to optimize measures of interest, such as the belief of one nation in the statements made by another, or the timing of an unscheduled inspection such that it has maximum value. We show that these capabilities can in principle support the design or assessment of future bilateral arms control instruments by applying them to a set of representative and relevant test scenarios with realistic finite horizons.

Conference paper

Huth MRA, Beaumont, Day E, Evans N, Haworth S, Plant T, Roberts C et al., 2016, An in-depth case study: modelling an information barrier with Bayesian Belief Networks, INMM Annual Conference, Publisher: Institute of Nuclear Materials Management

We present in detail a quantitative Bayesian Belief Network (BBN) model of the use of an information barrier system during a nuclear arms control inspection, and an analysis of this model using the capabilities of a Satisfiability Modulo Theory (SMT) solver. Arms control verification processes do not in practice allow the parties involved to gather complete information about each other, and therefore any model we use must be able to cope with the limited information, subjective assessment and uncertainty in this domain. We have previously extended BBNs to allow this kind of uncertainty in parameter values (such as probabilities) to be reflected; these constrained BBNs (cBBNs) offer the potential for more robust modelling, which in that study we demonstrated with a simple information barrier model. We now present a much more detailed model of a similar verification process, based on the technical capabilities and deployment concept of the UK-Norway Initiative (UKNI) Information Barrier system, demonstrating the scalability of our previously-presented approach. We discuss facets of the model itself in detail, before analysing pertinent questions of interest to give examples of the power of this approach.

Conference paper

Huth MRA, Beaumont P, Evans N, Plant T et al., 2016, Confidence analysis for nuclear arms control: SMT abstractions of Game Theoretic Models, INMM Annual Conference 2016, Publisher: Institute of Nuclear Materials Management

We consider the use of game theory in an arms control inspection planning scenario. Specifically we develop a case study that games the number of inspections available against an ideal treaty length. Normal game theoretic techniques struggle to justify pay-off values to use for certain events, limiting the usefulness of such techniques. In order to improve the value of using game theory for decision making, we introduce a methodology for under-specifying the game theoretic models through a mixture of regression techniques and Satisfiability Modulo Theory (SMT) constraint solving programs. Our approach allows a user to under-specify pay-offs in games, and to check, in a manner akin to robust optimisation, for how such under-specifications affect the `solution' of a game. We analyse the Nash equilibria and the mixed strategy sets that would lead to such equilibria, and explore how to maximise expected pay-offs and use of individual pure strategies for all possible values of an under-specification. Through this approach, we gain an insight into how, irrespective of uncertainty, we can still compute with game theoretic models, and present the types and kinds of analysis we can run that benefit from this uncertainty.
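The flavour of such robustness checks can be conveyed with a toy SMT query in Z3's Python bindings (`pip install z3-solver`): leave one pay-off symbolic within an interval and ask whether some admissible value breaks a candidate pure-strategy choice. The 2x2 inspection game, the intervals, and the encoding are illustrative assumptions, not the paper's models.

```python
from z3 import Real, Solver, sat

u = Real("u")   # under-specified pay-off: row plays Inspect, column Complies

# Row player's pay-offs:          column: Comply   Violate
#   row: Inspect                           u         3
#   row: NoInspect                         2        -5
s = Solver()
s.add(1 <= u, u <= 4)     # admissible interval for the uncertain pay-off
# Against a complying opponent, Inspect stops being row's best response
# exactly when NoInspect pays more:
s.add(2 > u)
if s.check() == sat:
    print("an admissible pay-off breaks the choice, e.g. u =", s.model()[u])
else:
    print("Inspect is robustly optimal against Comply on this interval")
```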

Conference paper

Beaumont P, Evans N, Huth MRA, Plant T et al., 2016, Confidence analysis for nuclear arms control: SMT abstractions of Bayesian Belief Networks, 20th European Symposium on Research in Computer Security (ESORICS) 2015, ISSN: 0302-9743

How to reduce, in principle, arms in a verifiable manner that is trusted by two or more parties is a hard but important problem. Nations and organisations that wish to engage in such arms control verification activities need to be able to design procedures and control mechanisms that capture their trust assumptions and let them compute pertinent degrees of belief. Crucially, they also will need methods for reliably assessing their confidence in such computed degrees of belief in situations with little or no contextual data. We model an arms control verification scenario with what we call constrained Bayesian Belief Networks (cBBN). A cBBN represents a set of Bayesian Belief Networks by symbolically expressing uncertainty about probabilities and scenario-specific constraints that are not represented by a BBN. We show that this abstraction of BBNs can mitigate well against the lack of prior data. Specifically, we describe how cBBNs have faithful representations within a Satisfiability Modulo Theory (SMT) solver, and that these representations open up new ways of automatically assessing the confidence that we may have in the degrees of belief represented by cBBNs. Furthermore, we show how to perform symbolic sensitivity analyses of cBBNs, and how to compute global optima of under-specified probabilities of particular interest to decision making. SMT solving also enables us to assess the relative confidence we have in two cBBNs of the same scenario, where these models may share some information but express some aspects of the scenario at different levels of abstraction.
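A toy rendering of the cBBN-to-SMT idea, under an assumed two-node network and invented variable names, might look as follows in Z3's Python bindings; interval constraints stand in for unknown point probabilities, and the solver bounds the degree of belief they induce.

```python
# Toy constrained BBN as SMT constraints over Reals (pip install z3-solver).
from z3 import Real, Solver, sat

p_violation = Real("p_violation")      # prior: inspected party cheats
p_alarm_if_v = Real("p_alarm_if_v")    # sensor fires given a violation
p_alarm_if_ok = Real("p_alarm_if_ok")  # false-alarm rate
p_alarm = Real("p_alarm")              # marginal probability of an alarm

s = Solver()
# Interval constraints replace point probabilities (the "constrained" part).
s.add(0.05 <= p_violation, p_violation <= 0.15)
s.add(0.90 <= p_alarm_if_v, p_alarm_if_v <= 0.99)
s.add(0.01 <= p_alarm_if_ok, p_alarm_if_ok <= 0.05)
# Law of total probability ties the marginal to the conditionals.
s.add(p_alarm == p_alarm_if_v * p_violation + p_alarm_if_ok * (1 - p_violation))

# Is an alarm probability above 20% consistent with these constraints?
s.push()
s.add(p_alarm > 0.20)
print("p_alarm > 0.20 feasible:", s.check() == sat)   # expect: False
s.pop()
```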

Conference paper

Ah-Fat PWFK, Huth M, 2016, Effective partial solvers for parity games, Departmental Technical Report: 16/1, Publisher: Department of Computing, Imperial College London, 16/1

Partial methods play an important role in formal methods and beyond. Recently such methods were developed for parity games, where polynomial-time partial solvers decide the winners of a subset of nodes. We investigate here how effective polynomial-time partial solvers can be in principle by studying polynomial-time interactions of partial solvers. Concretely, we propose simple, generic composition patterns for partial solvers that preserve polynomial-time computability. We show that an implementation of this semantic framework manually discovers new partial solvers, including those that merge node sets that have the same but unknown winner, by studying games that composed partial solvers can neither solve nor simplify. We experimentally validate that this data-driven approach to refinement leads to polynomial-time partial solvers that can solve all standard benchmarks of structured games. For one of these polynomial-time partial solvers, we were unable to find even a sole random game that it won't solve completely, although we generated a few billion random games of varying configurations to that end. However, the work presented here does not yet offer any deeper characterisations of which games are completely solved by such partial solvers.

Report

Huth MRA, Piterman N, Kuo J et al., 2015, Static Analysis of Parity Games: Alternating Reachability Under Parity, Semantics, Logics, and Calculi: Essays Dedicated to Hanne Riis Nielson and Flemming Nielson on the Occasion of Their 60th Birthdays, Publisher: Springer, Pages: 159-177, ISSN: 0302-9743

It is well understood that solving parity games is equivalent, up to polynomial time, to model checking of the modal mu-calculus. It is a long-standing open problem whether solving parity games (or model checking modal mu-calculus formulas) can be done in polynomial time. A recent approach to studying this problem has been the design of partial solvers, algorithms that run in polynomial time and that may only solve parts of a parity game. Although it was shown that such partial solvers can completely solve many practical benchmarks, the design of such partial solvers was somewhat ad hoc, limiting a deeper understanding of the potential of that approach. We here mean to provide such robust foundations for deeper analysis through a new form of game, alternating reachability under parity. We prove the determinacy of these games and use this determinacy to define, for each player, a monotone fixed point over an ordered domain of height linear in the size of the parity game such that all nodes in its greatest fixed point are won by said player in the parity game. We show, through theoretical and experimental work, that such greatest fixed points and their computation leads to partial solvers that run in polynomial time. These partial solvers are based on established principles of static analysis and are more effective than partial solvers studied in extant work.

Conference paper

Huth MRA, Piterman N, Kuo JH-P, 2015, The Rabin Index of Parity Games: Its Complexity and Approximation, Information and Computation, Vol: 245, Pages: 36-53, ISSN: 1090-2651

We study the descriptive complexity of parity games by taking into account the coloring of their game graphs whilst ignoring their ownership structure. Colorings of game graphs are identified if they determine the same winning regions and strategies, for *all* ownership structures of nodes. The Rabin index of a parity game is the minimum of the maximal color taken over all equivalent coloring functions. We show that deciding whether the Rabin index is at least k is in P for k = 1 but NP-hard for all *fixed* k >= 2. We present an EXPTIME algorithm that computes the Rabin index by simplifying its input coloring function. When replacing simple cycle with cycle detection in that algorithm, its output over-approximates the Rabin index in polynomial time. We evaluate this efficient algorithm as a preprocessor of solvers in detailed experiments: for Zielonka's solver on random and structured parity games and for *partial* solver psolB on random games.

Journal article

Huth M, Kuo JH-P, 2015, Static analysis of parity games: alternating reachability under parity, Departmental Technical Report: 15/4, Publisher: Department of Computing, Imperial College London, 15/4

It is well understood that solving parity games is equivalent, up to polynomial time, to model checking of the modal mu-calculus. It is a long-standing open problem whether solving parity games (or model checking modal mu-calculus formulas) can be done in polynomial time. A recent approach to studying this problem has been the design of partial solvers, algorithms that run in polynomial time and that may only solve parts of a parity game. Although it was shown that such partial solvers can completely solve many practical benchmarks, the design of such partial solvers was somewhat ad hoc, limiting a deeper understanding of the potential of that approach. We here mean to provide such robust foundations for deeper analysis through a new form of game, alternating reachability under parity. We prove the determinacy of these games and use this determinacy to define, for each player, a monotone fixed point over an ordered domain of height linear in the size of the parity game such that all nodes in its greatest fixed point are won by said player in the parity game. We show, through theoretical and experimental work, that such greatest fixed points and their computation leads to partial solvers that run in polynomial time. These partial solvers are based on established principles of static analysis and are more effective than partial solvers studied in extant work.

Report

Huth MR, Kuo JH-P, 2014, On Designing Usable Policy Languages for Declarative Trust Aggregation, Human Aspects of Information Security, Privacy, and Trust (HAS 2014), Publisher: Springer

Conference paper

Huth MR, Kuo JH-P, 2014, PEALT: An Automated Reasoning Tool for Numerical Aggregation of Trust Evidence, Tools and Algorithms for the Construction and Analysis of Systems (TACAS 2014), Publisher: Springer, Pages: 109-123, ISSN: 0302-9743

Conference paper

Crampton J, Huth M, Kuo JH-P, 2014, Authorized workflow schemas: deciding realizability through LTL(F) model checking, INTERNATIONAL JOURNAL ON SOFTWARE TOOLS FOR TECHNOLOGY TRANSFER, Vol: 16, Pages: 31-48, ISSN: 1433-2779

Journal article

Huth M, Kuo JH-P, 2014, Quantitative threat analysis via a logical service, Departmental Technical Report: 14/10, Publisher: Department of Computing, Imperial College London, 14/10

It is increasingly important to analyze system security quantitatively using concepts such as trust, reputation, cost, and risk. This requires a thorough understanding of how such concepts should interact so that we can validate the assessment of threats, the choice of adopted risk management, etc. To this end, we propose a declarative language Peal+ in which the interaction of such concepts can be rigorously described and analyzed. Peal+ has been implemented in PEALT using the SMT solver Z3 as analysis back-end. PEALT's code generators target complex back-ends and evolve with optimizations or new back-ends. Thus we can neither trust the tool chain nor feasibly prove correctness of all involved artefacts. We eliminate the need to trust that tool chain by independently certifying scenarios found by back-ends in a manner agnostic of code generation and choice of back-end. This scenario validation is compositional, courtesy of Kleene's 3-valued logic and potential refinement of scenarios. We prove the correctness of this validation, discuss how PEALT presents scenarios to further users' understanding, and demonstrate the utility of this approach by showing how it can express attack-countermeasure trees so that the interaction of attack success probability, attack cost, and attack impact can be analyzed.

Report

Huth M, Kuo JHP, 2013, Towards verifiable trust management for software execution (extended abstract), Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Vol: 7904 LNCS, Pages: 275-276, ISSN: 0302-9743

In the near future, computing devices will be present in most artefacts, will considerably outnumber the people on this planet, and will host software that executes in a potentially hostile and only partially known environment. This suggests the need for bringing trust management into running software itself, so that executing software is guard-railed by policies that reflect risk postures deemed to be appropriate for software and its deployment context. We sketch here an implementation of a prototype that realizes, in part, such a vision. © 2013 Springer-Verlag.

Journal article

This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.
