Imperial College London

Professor Alessio Lomuscio

Faculty of Engineering, Department of Computing

Professor of Safe Artificial Intelligence

Contact

+44 (0)20 7594 8414 · a.lomuscio · Website

 
 

Location

Imperial-X, Translation & Innovation Hub Building, White City Campus

Publications

232 results found

Leofante F, Lomuscio A, 2023, Robust explanations for human-neural multi-agent systems with formal verification, The 20th European Conference on Multi-Agent Systems (EUMAS 2023), Publisher: Springer, Pages: 244-262, ISSN: 1611-3349

The quality of explanations in human-agent interactions is fundamental to the development of trustworthy AI systems. In this paper we study the problem of generating robust contrastive explanations for human-neural multi-agent systems and introduce two novel verification-based algorithms to (i) identify non-robust explanations generated by other methods and (ii) generate contrastive explanations equipped with formal robustness certificates. We present an implementation and evaluate the effectiveness of the approach on two case studies involving neural agents trained on credit scoring and traffic sign recognition tasks.

Conference paper

Kouvaros P, Leofante F, Edwards B, Chung C, Margineantu D, Lomuscio A, et al., 2023, Verification of semantic key point detection for aircraft pose estimation, The 20th International Conference on Principles of Knowledge Representation and Reasoning (KR2023), Publisher: IJCAI Organization, Pages: 757-762, ISSN: 2334-1033

We analyse Semantic Segmentation Neural Networks running on an autonomous aircraft to estimate its pose during landing. We show that automated reasoning techniques from neural network verification can be used to analyse the conditions under which the networks can operate safely, thus providing enhanced assurance guarantees on the behaviour of the overall pose estimation systems.

Conference paper

Leofante F, Henriksen P, Lomuscio A, 2023, Verification-friendly networks: the case for parametric ReLUs, International Joint Conference on Neural Networks (IJCNN 2023), Publisher: IEEE, Pages: 1-9

It has increasingly been recognised that verification can contribute to the validation and debugging of neural networks before deployment, particularly in safety-critical areas. While progress has been made in the area of verification of neural networks, present techniques still do not scale to the large ReLU-based neural networks used in many applications. In this paper we show that considerable progress can be made by employing Parametric ReLU activation functions in lieu of plain ReLU functions. We give training procedures that produce networks which achieve an order-of-magnitude reduction in verification overhead and 30-100% fewer timeouts with VeriNet, a SoA Symbolic Interval Propagation-based verification toolkit, while not compromising the resulting accuracy. Furthermore, we show that adversarial training combined with our approach improves certified robustness by up to 36% compared to adversarial training performed on baseline ReLU networks.

Conference paper
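The intuition behind the parametric ReLU result above can be sketched numerically: for an unstable neuron with pre-activation bounds [l, u] straddling zero, the standard triangle relaxation gap shrinks as the negative-side slope grows from 0 (plain ReLU) towards 1. The calculation below is an illustrative sketch of that effect, not code from the paper.

```python
def prelu_relaxation_gap(l, u, a):
    """Vertical gap at x = 0 between the upper chord of the triangle
    relaxation and PReLU(x) = x if x >= 0 else a * x, for an unstable
    neuron with pre-activation bounds l < 0 < u.
    a = 0.0 recovers the plain-ReLU triangle relaxation."""
    assert l < 0 < u and 0.0 <= a < 1.0
    # Chord joining (l, a*l) and (u, u), evaluated at x = 0.
    chord_at_zero = a * l + (-l) * (u - a * l) / (u - l)
    return chord_at_zero - 0.0  # PReLU(0) = 0

# The relaxation gap shrinks as the PReLU negative-side slope increases.
gaps = [prelu_relaxation_gap(-1.0, 1.0, a) for a in (0.0, 0.25, 0.5)]
```

Under this (standard) triangle relaxation, a larger negative-side slope yields a tighter over-approximation per neuron, which is consistent with the reported reduction in verification overhead.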

Senanayake R, Fremont DJ, Kochenderfer MJ, Lomuscio AR, Margineantu D, Ong CS, et al., 2023, Guest Editorial: Special issue on robust machine learning, Machine Learning, Vol: 112, Pages: 2787-2789, ISSN: 0885-6125

Journal article

Henriksen P, Lomuscio A, 2023, Robust training of neural networks against bias field perturbations, AAAI Conference on Artificial Intelligence (AAAI23), Publisher: AAAI, Pages: 14865-14873, ISSN: 2374-3468

Robust training of neural networks has so far been developed in the context of white noise perturbations to limit their susceptibility to adversarial attacks. However, in applications neural networks need to be robust to a wider range of input perturbations, including contrast, brightness, and beyond. We here introduce the problem of training neural networks such that they are robust against a class of smooth intensity perturbations modelled by bias fields. We first develop an approach towards this goal based on a State-of-the-Art (SoA) robust training method utilising Interval Bound Propagation (IBP). We analyse the resulting algorithm and observe that IBP often produces very loose bounds for bias field perturbations, which may be detrimental to training. We propose an alternative approach based on Symbolic Interval Propagation (SIP), which usually results in significantly tighter bounds than IBP. We present ROBNET, a tool implementing these approaches for bias field robust training. In experiments networks trained with the SIP-based approach achieved up to 31% higher certified robustness while also maintaining a better accuracy than networks trained with the IBP approach.

Conference paper
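For context, Interval Bound Propagation as used above pushes an axis-aligned box through the network layer by layer. The sketch below (plain Python, a generic illustration rather than the ROBNET implementation) propagates elementwise bounds through one affine layer followed by a ReLU.

```python
def ibp_affine_relu(W, b, lower, upper):
    """Propagate elementwise input bounds [lower, upper] through
    y = ReLU(W x + b) using interval arithmetic (IBP).
    Positive weights take the bound of matching sign; negative
    weights swap the roles of lower and upper."""
    out_lo, out_hi = [], []
    for row, bias in zip(W, b):
        lo = bias + sum(w * (lower[j] if w >= 0 else upper[j])
                        for j, w in enumerate(row))
        hi = bias + sum(w * (upper[j] if w >= 0 else lower[j])
                        for j, w in enumerate(row))
        out_lo.append(max(lo, 0.0))  # ReLU clamps both bounds at 0
        out_hi.append(max(hi, 0.0))
    return out_lo, out_hi

# A 2-input, 2-output layer with each input perturbed within [-0.1, 0.1].
W = [[1.0, -1.0], [0.5, 0.5]]
b = [0.0, -0.2]
lo, hi = ibp_affine_relu(W, b, [-0.1, -0.1], [0.1, 0.1])
```

Here the second neuron's upper bound is non-positive, so IBP certifies it inactive. Such bounds compose cheaply but widen quickly with depth, which matches the paper's observation that IBP can be loose for bias field perturbations.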

Lan J, Brueckner B, Lomuscio A, 2023, A Semidefinite Relaxation based Branch-and-Bound Method for Tight Neural Network Verification, AAAI Conference on Artificial Intelligence (AAAI23), Publisher: AAAI, Pages: 14946-14954, ISSN: 2374-3468

We introduce a novel method based on semidefinite programming (SDP) for the tight and efficient verification of neural networks. The proposed SDP relaxation advances the present SoA in SDP-based neural network verification by adding a set of linear constraints based on eigenvectors. We extend this novel SDP relaxation by combining it with a branch-and-bound method that can provably close the relaxation gap to zero. We show formally that the proposed approach leads to a provably tighter solution than the present SoA. We report experimental results showing that the proposed method outperforms baselines in terms of verified accuracy while retaining an acceptable computational overhead.

Conference paper

Lan J, Zheng Y, Lomuscio A, 2023, Iteratively enhanced semidefinite relaxations for efficient neural network Verification, 37th AAAI Conference on Artificial Intelligence (AAAI23), Publisher: AAAI, Pages: 14937-14945, ISSN: 2374-3468

We propose an enhanced semidefinite program (SDP) relaxation to enable the tight and efficient verification of neural networks (NNs). The tightness improvement is achieved by introducing a nonlinear constraint to existing SDP relaxations previously proposed for NN verification. The efficiency of the proposal stems from the iterative nature of the proposed algorithm in that it solves the resulting non-convex SDP by recursively solving auxiliary convex layer-based SDP problems. We show formally that the solution generated by our algorithm is tighter than state-of-the-art SDP-based solutions for the problem. We also show that the solution sequence converges to the optimal solution of the non-convex enhanced SDP relaxation. The experimental results on standard benchmarks in the area show that our algorithm achieves state-of-the-art performance whilst maintaining an acceptable computational cost.

Conference paper

Leofante F, Lomuscio A, 2023, Towards robust contrastive explanations for human-neural multi-agent systems, International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2023), Publisher: ACM, Pages: 2343-2345

Generating explanations of high quality is fundamental to the development of trustworthy human-AI interactions. We here study the problem of generating contrastive explanations with formal robustness guarantees. We formalise a new notion of robustness and introduce two novel verification-based algorithms to (i) identify non-robust explanations generated by other methods and (ii) generate contrastive explanations augmented with provable robustness certificates. We present an implementation and evaluate the utility of the approach on two case studies concerning neural agents trained on credit scoring and image classification tasks.

Conference paper

2022, Proceedings of the 37th ACM/SIGAPP Symposium on Applied Computing, SAC '22: The 37th ACM/SIGAPP Symposium on Applied Computing, Publisher: ACM

Conference paper

Akintunde ME, Botoeva E, Kouvaros P, Lomuscio A, et al., 2022, Formal verification of neural agents in non-deterministic environments, Autonomous Agents and Multi-Agent Systems, Vol: 36, ISSN: 1387-2532

We introduce a model for agent-environment systems where the agents are implemented via feed-forward ReLU neural networks and the environment is non-deterministic. We study the verification problem of such systems against CTL properties. We show that verifying these systems against reachability properties is undecidable. We introduce a bounded fragment of CTL, show its usefulness in identifying shallow bugs in the system, and prove that the verification problem against specifications in bounded CTL is in coNEXPTIME and PSPACE-hard. We introduce sequential and parallel algorithms for MILP-based verification of agent-environment systems, present an implementation, and report the experimental results obtained against a variant of the VerticalCAS use-case and the frozen lake scenario.

Journal article

Henriksen P, Leofante F, Lomuscio A, 2022, Repairing misclassifications in neural networks using limited data, SAC '22, Pages: 1031-1038

We present a novel and computationally efficient method for repairing a feed-forward neural network with respect to a finite set of inputs that are misclassified. The method assumes no access to the training set. We present a formal characterisation for repairing the neural network and study its resulting properties in terms of soundness and minimality. We introduce a gradient-based algorithm that performs localised modifications to the network's weights such that misclassifications are repaired while marginally affecting network accuracy on correctly classified inputs. We introduce an implementation, I-REPAIR, and show it is able to repair neural networks while reducing accuracy drops by up to 90% when compared to other state-of-the-art approaches for repair.

Conference paper

Lomuscio A, Pirovano E, 2022, A counter abstraction technique for verifying properties of probabilistic swarm systems, Artificial Intelligence, Vol: 305, Article: 103666, ISSN: 0004-3702

We introduce a semantics for reasoning about probabilistic multi-agent systems in which the number of participants is not known at design-time. We define the parameterised model checking problem against PLTL specifications for this semantics, and observe that this is undecidable in general. Nonetheless, we develop a partial decision procedure for it based on counter abstraction. We prove the correctness of this procedure, and present an implementation of it. We then use our implementation to verify a number of example scenarios from swarm robotics and other settings.

Journal article

Belardinelli F, Knight S, Lomuscio A, Maubert M, Murano A, Rubin S, et al., 2022, Reasoning about agents that may know other agents’ strategies, 30th International Joint Conference on Artificial Intelligence (IJCAI21), Publisher: IJCAI, Pages: 1787-1793, ISSN: 1045-0823

We study the semantics of knowledge in strategic reasoning. Most existing works either implicitly assume that agents do not know one another’s strategies, or that all strategies are known to all; and some works present inconsistent mixes of both features. We put forward a novel semantics for Strategy Logic with Knowledge that cleanly models whose strategies each agent knows. We study how adopting this semantics impacts agents’ knowledge and strategic ability, as well as the complexity of the model-checking problem.

Conference paper

Belardinelli F, Lomuscio A, Malvone V, Yu E, et al., 2022, Approximating perfect recall when model checking strategic abilities: theory and applications, Journal of Artificial Intelligence Research, Vol: 73, Pages: 897-932, ISSN: 1076-9757

Journal article

Lan J, Zheng Y, Lomuscio A, 2022, Tight neural network verification via semidefinite relaxations and linear reformulations, 36th AAAI Conference on Artificial Intelligence / 34th Conference on Innovative Applications of Artificial Intelligence / 12th Symposium on Educational Advances in Artificial Intelligence, Publisher: AAAI, Pages: 7272-7280, ISSN: 2159-5399

Conference paper

Kouvaros P, Kyono T, Leofante F, Lomuscio A, Margineantu D, Osipychev D, Zheng Y, et al., 2021, Formal analysis of neural network-based systems in the aircraft domain, International Symposium on Formal Methods, Publisher: Springer International Publishing, Pages: 730-740, ISSN: 0302-9743

Neural networks are being increasingly used for efficient decision making in the aircraft domain. Given the safety-critical nature of the applications involved, stringent safety requirements must be met by these networks. In this work we present a formal study of two neural network-based systems developed by Boeing. The Venus verifier is used to analyse the conditions under which these systems can operate safely, or generate counterexamples that show when safety cannot be guaranteed. Our results confirm the applicability of formal verification to the settings considered.

Conference paper

Henriksen P, Hammernik K, Rueckert D, Lomuscio A, et al., 2021, Bias field robustness verification of large neural image classifiers, British Machine Vision Conference (BMVC21)

Conference paper

Batten B, Kouvaros P, Lomuscio A, Zheng Y, et al., 2021, Efficient neural network verification via layer-based semidefinite relaxations and linear cuts, International Joint Conference on Artificial Intelligence (IJCAI 2021), Publisher: IJCAI, Pages: 2184-2190

We introduce an efficient and tight layer-based semidefinite relaxation for verifying local robustness of neural networks. The improved tightness results from combining semidefinite relaxations with linear cuts. We obtain a computationally efficient method by decomposing the semidefinite formulation into layer-wise constraints. By leveraging chordal graph decompositions, we show that the proposed formulation is provably tighter than current approaches. Experiments on a set of benchmark networks show that the proposed approach enables the verification of more instances than other relaxation methods. The results also demonstrate that the proposed SDP relaxation is one order of magnitude faster than previous SDP methods.

Conference paper

Lomuscio A, 2021, Towards verifying neural autonomous systems

In this talk I will offer a personal perspective on the increasing challenges, and the correspondingly more powerful solutions being developed, in the area of verification of autonomous systems. I will ground the presentation in the work carried out with several colleagues over the years in the Verification of Autonomous Systems group at Imperial College London.

Conference paper

Henriksen P, Lomuscio A, 2021, DEEPSPLIT: An Efficient Splitting Method for Neural Network Verification via Indirect Effect Analysis, Pages: 2549-2555, ISSN: 1045-0823

We propose a novel, complete algorithm for the verification and analysis of feed-forward, ReLU-based neural networks. The algorithm, based on symbolic interval propagation, introduces a new method for determining split-nodes which evaluates the indirect effect that splitting has on the relaxations of successor nodes. We combine this with a new efficient linear-programming encoding of the splitting constraints to further improve the algorithm's performance. The resulting implementation, DEEPSPLIT, achieved speedups of around 1-2 orders of magnitude and 21-34% fewer timeouts when compared to the current SoA toolkits.

Conference paper

Aminof B, De Giacomo G, Lomuscio A, Murano A, Rubin Set al., 2021, Synthesizing Best-effort Strategies under Multiple Environment Specifications, Pages: 42-51

We formally introduce and solve the synthesis problem for LTL goals in the case of multiple, even contradicting, assumptions about the environment. Our solution concept is based on “best-effort strategies” which are agent plans that, for each of the environment specifications individually, achieve the agent goal against a maximal set of environments satisfying that specification. By means of a novel automata theoretic characterization we demonstrate that this best-effort synthesis for multiple environments is 2EXPTIME-complete, i.e., no harder than plain LTL synthesis. We study an important case in which the environment specifications are increasingly indeterminate, and show that as in the case of a single environment, best-effort strategies always exist for this setting. Moreover, we show that in this setting the set of solutions is exactly the strategies formed as follows: amongst the best-effort agent strategies for ϕ under the environment specification E1, find those that do a best-effort for ϕ under (the more indeterminate) environment specification E2, and amongst those find those that do a best-effort for ϕ under the environment specification E3, etc.

Conference paper

Kouvaros P, Lomuscio A, 2021, Towards Scalable Complete Verification of ReLU Neural Networks via Dependency-based Branching, Pages: 2643-2650, ISSN: 1045-0823

We introduce an efficient method for the complete verification of ReLU-based feed-forward neural networks. The method implements branching on the ReLU states on the basis of a notion of dependency between the nodes. This results in dividing the original verification problem into a set of sub-problems whose MILP formulations require fewer integrality constraints. We evaluate the method on all of the ReLU-based fully connected networks from the first competition for neural network verification. The experimental results obtained show 145% performance gains over the present state-of-the-art in complete verification.

Conference paper
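In MILP encodings of the kind used above, only ReLU nodes whose pre-activation interval straddles zero need integrality constraints; branching fixes a node's phase and yields two easier sub-problems. The hypothetical helper below illustrates that bookkeeping (it is not the toolkit's code, and it omits the paper's dependency analysis, which additionally propagates how fixing one node stabilises others).

```python
def classify_relu_nodes(bounds):
    """Classify each ReLU node from its pre-activation bounds (lo, hi):
    'active' (lo >= 0, behaves linearly), 'inactive' (hi <= 0, outputs 0),
    or 'unstable' (straddles 0, needs a binary variable in the MILP)."""
    states = {}
    for name, (lo, hi) in bounds.items():
        if lo >= 0.0:
            states[name] = "active"
        elif hi <= 0.0:
            states[name] = "inactive"
        else:
            states[name] = "unstable"
    return states

def branch_on(bounds, node):
    """Split the problem by fixing `node` to each phase: the 'inactive'
    child clamps the upper bound to 0, the 'active' child lifts the
    lower bound to 0. Each child has one fewer unstable node."""
    lo, hi = bounds[node]
    inactive_child = dict(bounds, **{node: (lo, min(hi, 0.0))})
    active_child = dict(bounds, **{node: (max(lo, 0.0), hi)})
    return inactive_child, active_child

bounds = {"n1": (0.2, 1.0), "n2": (-0.5, -0.1), "n3": (-0.3, 0.4)}
states = classify_relu_nodes(bounds)   # only n3 is unstable
left, right = branch_on(bounds, "n3")  # neither child has unstable nodes
```

Fewer unstable nodes means fewer binary variables per sub-problem, which is what makes branching orders like the dependency-based one above effective.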

Endriss U, Nowé A, Dignum F, Lomuscio A, et al., 2021, Welcome message from the chairs, ISSN: 1548-8403

Conference paper

Hashemi V, Kouvaros P, Lomuscio A, 2021, OSIP: Tightened bound propagation for the verification of ReLU neural networks, 19th International Conference on Software Engineering and Formal Methods (SEFM), Publisher: Springer International Publishing, Pages: 463-480, ISSN: 0302-9743

Conference paper

Belardinelli F, Lomuscio A, Murano A, Rubin S, et al., 2020, Verification of multi-agent systems with public actions against strategy logic, Artificial Intelligence, Vol: 285, Pages: 1-29, ISSN: 0004-3702

Model checking multi-agent systems, in which agents are distributed and thus may have different observations of the world, against strategic behaviours is known to be a complex problem in a number of settings. There are traditionally two ways of ameliorating this complexity: imposing a hierarchy on the observations of the agents, or restricting agent actions so that they are observable by all agents. We study systems of the latter kind, since they are more suitable for modelling rational agents. In particular, we define multiagent systems in which all actions are public and study the model checking problem of such systems against Strategy Logic with equality, a very rich strategic logic that can express relevant concepts such as Nash equilibria, Pareto optimality, and due to the novel addition of equality, also evolutionary stable strategies. The main result is that the corresponding model checking problem is decidable.

Keywords: Strategy Logic, Multi-agent Systems, Imperfect Information, Verification, Formal Methods

Journal article

Henriksen P, Lomuscio A, 2020, Efficient Neural Network Verification via Adaptive Refinement and Adversarial Search, European Conference on Artificial Intelligence (ECAI20)

Conference paper

Akintunde ME, Botoeva E, Kouvaros P, Lomuscio A, et al., 2020, Verifying strategic abilities of neural-symbolic multi-agent systems, Pages: 21-31

We investigate the problem of verifying the strategic properties of multi-agent systems equipped with machine learning-based perception units. We introduce a novel model of agents comprising both a perception system implemented via feed-forward neural networks and an action selection mechanism implemented via traditional control logic. We define the verification problem for these systems against a bounded fragment of alternating-time temporal logic. We translate the verification problem on bounded traces into the feasibility problem of mixed integer linear programs and show the soundness and completeness of the translation. We show that the verification problem is PSPACE-hard and in coNEXPTIME. We present a tool implementing the compilation and evaluate the experimental results obtained on a complex scenario of multiple aircraft operating a recently proposed prototype for air-traffic collision avoidance.

Conference paper

Akintunde ME, Botoeva E, Kouvaros P, Lomuscio A, et al., 2020, Formal verification of neural agents in non-deterministic environments, Pages: 25-33, ISSN: 1548-8403

We introduce a model for agent-environment systems where the agents are implemented via feed-forward ReLU neural networks and the environment is non-deterministic. We study the verification problem of such systems against CTL properties. We show that verifying these systems against reachability properties is undecidable. We introduce a bounded fragment of CTL, show its usefulness in identifying shallow bugs in the system, and prove that the verification problem against specifications in bounded CTL is in coNExpTime and PSpace-hard. We present a novel parallel algorithm for MILP-based verification of agent-environment systems, present an implementation, and report the experimental results obtained against a variant of the VerticalCAS use-case.

Conference paper

Lomuscio A, Pirovano E, 2020, Parameterised verification of strategic properties in probabilistic multi-agent systems, Pages: 762-770, ISSN: 1548-8403

We present a framework for verifying strategic behaviour in an unbounded multi-agent system. We introduce a novel probabilistic semantics for parameterised multi-agent systems and define the corresponding verification problem against two probabilistic variants of alternating-time temporal logic. We define a verification procedure using an abstract model construction. We show that the procedure is complete for one variant of our specification language, and partial for the other. We present an implementation and report experimental results.

Conference paper

Akintunde ME, Botoeva E, Kouvaros P, Lomuscio A, et al., 2020, Verifying strategic abilities of neural-symbolic multi-agent systems, 17th International Conference on Principles of Knowledge Representation and Reasoning (KR&R), Publisher: IJCAI Organization, Pages: 22-32

Conference paper

This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.
