Akintunde ME, Botoeva E, Kouvaros P, et al., 2022, Formal verification of neural agents in non-deterministic environments, AUTONOMOUS AGENTS AND MULTI-AGENT SYSTEMS, Vol: 36, ISSN: 1387-2532
Senanayake R, Fremont DJ, Kochenderfer MJ, et al., 2021, Guest Editorial: Special issue on robust machine learning, MACHINE LEARNING, ISSN: 0885-6125
Belardinelli F, Knight S, Lomuscio A, et al., 2021, Reasoning about agents that may know other agents’ strategies, 30th International Joint Conference on Artificial Intelligence (IJCAI21), Publisher: IJCAI, ISSN: 1045-0823
We study the semantics of knowledge in strategic reasoning. Most existing works either implicitly assume that agents do not know one another’s strategies, or that all strategies are known to all; and some works present inconsistent mixes of both features. We put forward a novel semantics for Strategy Logic with Knowledge that cleanly models whose strategies each agent knows. We study how adopting this semantics impacts agents’ knowledge and strategic ability, as well as the complexity of the model-checking problem.
Batten B, Kouvaros P, Lomuscio A, et al., 2021, Efficient neural network verification via layer-based semidefinite relaxations and linear cuts, International Joint Conference on Artificial Intelligence (IJCAI 2021), Publisher: IJCAI
We introduce an efficient and tight layer-based semidefinite relaxation for verifying local robustness of neural networks. The improved tightness results from combining semidefinite relaxations with linear cuts. We obtain a computationally efficient method by decomposing the semidefinite formulation into layer-wise constraints. By leveraging chordal graph decompositions, we show that the formulation presented here is provably tighter than current approaches. Experiments on a set of benchmark networks show that the proposed approach enables the verification of more instances than other relaxation methods. The results also demonstrate that the proposed SDP relaxation is one order of magnitude faster than previous SDP methods.
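To give a flavour of the linear cuts mentioned above, the sketch below computes the standard triangle relaxation of a single unstable ReLU y = ReLU(x) with pre-activation bounds l &lt; 0 &lt; u. This is only the linear-cut ingredient; the paper's semidefinite component is not reproduced, and the function names are illustrative.

```python
def relu_triangle_cuts(l, u):
    """Linear cuts over-approximating y = ReLU(x) for x in [l, u] with l < 0 < u.

    The lower cuts are y >= 0 and y >= x; the upper cut is the line through
    (l, 0) and (u, u), i.e. y <= u/(u - l) * (x - l).
    """
    assert l < 0 < u, "these cuts apply to the unstable case only"
    slope = u / (u - l)

    def upper(x):
        # tightest linear upper bound on ReLU over [l, u]
        return slope * (x - l)

    def lower(x):
        # pointwise maximum of the two linear lower bounds
        return max(0.0, x)

    return lower, upper
```

The upper cut is tight at both interval endpoints: upper(l) = 0 and upper(u) = u.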
Kouvaros P, Kyono T, Leofante F, et al., 2021, Formal Analysis of Neural Network-Based Systems in the Aircraft Domain, Pages: 730-740, ISSN: 0302-9743
Neural networks are being increasingly used for efficient decision making in the aircraft domain. Given the safety-critical nature of the applications involved, stringent safety requirements must be met by these networks. In this work we present a formal study of two neural network-based systems developed by Boeing. The Venus verifier is used to analyse the conditions under which these systems can operate safely, or generate counterexamples that show when safety cannot be guaranteed. Our results confirm the applicability of formal verification to the settings considered.
Belardinelli F, Lomuscio A, Murano A, et al., 2020, Verification of multi-agent systems with public actions against strategy logic, Artificial Intelligence, Vol: 285, Pages: 1-29, ISSN: 0004-3702
Model checking multi-agent systems, in which agents are distributed and thus may have different observations of the world, against strategic behaviours is known to be a complex problem in a number of settings. There are traditionally two ways of ameliorating this complexity: imposing a hierarchy on the observations of the agents, or restricting agent actions so that they are observable by all agents. We study systems of the latter kind, since they are more suitable for modelling rational agents. In particular, we define multi-agent systems in which all actions are public and study the model checking problem of such systems against Strategy Logic with equality, a very rich strategic logic that can express relevant concepts such as Nash equilibria, Pareto optimality and, owing to the novel addition of equality, also evolutionarily stable strategies. The main result is that the corresponding model checking problem is decidable.
Keywords: Strategy Logic, Multi-agent Systems, Imperfect Information, Verification, Formal Methods
Henriksen P, Lomuscio A, 2020, Efficient Neural Network Verification via Adaptive Refinement and Adversarial Search, European Conference on Artificial Intelligence (ECAI20)
Botoeva E, Kouvaros P, Kronqvist J, et al., 2020, Efficient Verification of ReLU-Based Neural Networks via Dependency Analysis, 34th AAAI Conference on Artificial Intelligence / 32nd Innovative Applications of Artificial Intelligence Conference / 10th AAAI Symposium on Educational Advances in Artificial Intelligence, Publisher: ASSOC ADVANCEMENT ARTIFICIAL INTELLIGENCE, Pages: 3291-3299, ISSN: 2159-5399
Belardinelli F, Lomuscio A, Yu E, 2020, Model Checking Temporal Epistemic Logic under Bounded Recall, 34th AAAI Conference on Artificial Intelligence / 32nd Innovative Applications of Artificial Intelligence Conference / 10th AAAI Symposium on Educational Advances in Artificial Intelligence, Publisher: ASSOC ADVANCEMENT ARTIFICIAL INTELLIGENCE, Pages: 7071-7078, ISSN: 2159-5399
Belardinelli F, Lomuscio A, Malvone A, 2019, An abstraction-based method for verifying strategic properties in multi-agent systems with imperfect information, Thirty-Third AAAI Conference on Artificial Intelligence, Publisher: Association for the Advancement of Artificial Intelligence, Pages: 6030-6037
We investigate the verification of multi-agent systems against strategic properties expressed in Alternating-time Temporal Logic under the assumptions of imperfect information and perfect recall. To this end, we develop a three-valued semantics for concurrent game structures upon which we define an abstraction method. We prove that concurrent game structures with imperfect information admit perfect information abstractions that preserve three-valued satisfaction. Further, we present a refinement procedure to deal with cases where the value of a specification is undefined. We illustrate the overall procedure in a variant of the Train Gate Controller scenario under imperfect information and perfect recall.
Akintunde M, Kevorchian A, Lomuscio A, et al., 2019, Verification of RNN-based neural agent-environment systems, 33rd AAAI Conference on Artificial Intelligence, Publisher: Association for the Advancement of Artificial Intelligence, ISSN: 2159-5399
We introduce agent-environment systems where the agent is stateful and executing a ReLU recurrent neural network. We define and study their verification problem by providing equivalences of recurrent and feed-forward neural networks on bounded execution traces. We give a sound and complete procedure for their verification against properties specified in a simplified version of LTL on bounded executions. We present an implementation and discuss the experimental results obtained.
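The bounded-trace equivalence between recurrent and feed-forward networks can be illustrated on a toy scalar ReLU RNN. The sketch below (the scalar setting and all names are illustrative, not the paper's construction) unrolls k recurrent steps into k stacked feed-forward layers and checks that both computations agree.

```python
def relu(v):
    return max(v, 0.0)

def run_rnn(xs, w_in, w_h, b, h0=0.0):
    """Scalar ReLU RNN: h_{t+1} = ReLU(w_in * x_t + w_h * h_t + b)."""
    h = h0
    for x in xs:
        h = relu(w_in * x + w_h * h + b)
    return h

def unroll(k, w_in, w_h, b):
    """Build a k-layer feed-forward network equivalent to k RNN steps:
    layer t maps (x_t, h_t) to h_{t+1} with the same weights at every depth."""
    layers = [(lambda x, h: relu(w_in * x + w_h * h + b)) for _ in range(k)]

    def feedforward(xs, h0=0.0):
        h = h0
        for layer, x in zip(layers, xs):
            h = layer(x, h)
        return h

    return feedforward
```

On any input trace of length k the two functions compute the same value, which is the essence of reducing bounded-trace RNN verification to feed-forward verification.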
Belardinelli F, Lomuscio A, Murano A, et al., 2019, Imperfect Information in Alternating-Time Temporal Logic on Finite Traces, 22nd International Conference on Principles and Practice of Multi-Agent Systems (PRIMA), Publisher: SPRINGER INTERNATIONAL PUBLISHING AG, Pages: 469-477, ISSN: 0302-9743
Lomuscio AR, Akintunde M, Maganti L, et al., 2018, Reachability analysis for neural agent-environment systems, 16th International Conference on Principles of Knowledge Representation and Reasoning, Publisher: Association for the Advancement of Artificial Intelligence
We develop a novel model for studying agent-environment systems, where the agents are implemented via feed-forward ReLU neural networks. We provide a semantics and develop a method to verify automatically that no unwanted states are reached by the system during its evolution. We study several reachability problems for the system, ranging from one-step reachability, to fixed multi-step and arbitrary-step reachability, to study the system evolution. We also study the decision problem of whether an agent, realised via feed-forward ReLU networks, will perform an action in a system run. Whenever possible, we give tight complexity bounds to the decision problems introduced. We automate the various reachability problems studied by recasting them as mixed-integer linear programming problems. We present an implementation and discuss the experimental results obtained on a range of test cases.
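A coarse but sound flavour of one-step reachability can be conveyed by interval bound propagation through a single ReLU layer. This is a much weaker over-approximation than the mixed-integer encoding the paper uses, and the names below are illustrative only.

```python
def interval_relu_layer(W, b, l, u):
    """Sound componentwise bounds on y = ReLU(W x + b) for x in [l, u].

    For each output, a lower (upper) bound on the pre-activation is obtained
    by pairing positive weights with lower (upper) input bounds and negative
    weights with the opposite bound, then clipping at zero for the ReLU.
    """
    lo, hi = [], []
    for row, bi in zip(W, b):
        s_lo = bi + sum(w * (li if w >= 0 else ui)
                        for w, li, ui in zip(row, l, u))
        s_hi = bi + sum(w * (ui if w >= 0 else li)
                        for w, li, ui in zip(row, l, u))
        lo.append(max(s_lo, 0.0))
        hi.append(max(s_hi, 0.0))
    return lo, hi
```

In a reachability check, if hi[j] stays below a safety threshold, output j provably never exceeds that threshold for any input in the box.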
Belardinelli F, Lomuscio A, Malvone V, 2018, Approximating perfect recall when model checking strategic abilities, Knowledge Representation and Reasoning, Publisher: AAAI
We investigate the notion of bounded recall in the context of model checking ATL* and ATL specifications in multi-agent systems with imperfect information. We present a novel three-valued semantics for ATL*, respectively ATL, under bounded recall and imperfect information, and study the corresponding model checking problems. Most importantly, we show that the three-valued semantics constitutes an approximation with respect to the traditional two-valued semantics. In the light of this, we construct a sound, albeit partial, algorithm for model checking two-valued perfect recall via its approximation as three-valued bounded recall.
Lomuscio AR, Vardi MY, 2018, 4th International Workshop on Strategic Reasoning (SR 2016) Preface, SR 2016 : 4th International Workshop on Strategic Reasoning, Publisher: ACADEMIC PRESS INC ELSEVIER SCIENCE, Pages: 615-615, ISSN: 0890-5401
Čermák P, Lomuscio A, Mogavero F, et al., 2018, Practical verification of multi-agent systems against SLK specifications, Information and Computation, Vol: 261, Pages: 588-614, ISSN: 0890-5401
We introduce Strategy Logic with Knowledge, a novel formalism to reason about knowledge and strategic ability in memoryless multi-agent systems with incomplete information. We exemplify its expressive power; we define the model checking problem for the logic and show that it is PSpace-complete. We propose a labelling algorithm for solving the verification problem that we show is amenable to symbolic implementation. We introduce MCMAS-SLK, an extension of the open-source model checker MCMAS, implementing the proposed algorithm. We report the benchmarks obtained on a number of scenarios from the literature, including the dining cryptographers protocol.
Lomuscio AR, Belardinelli F, Malvone V, 2018, Approximating perfect recall when model checking strategic abilities, 16th International Conference on Principles of Knowledge Representation and Reasoning, Publisher: Association for the Advancement of Artificial Intelligence
We investigate the notion of bounded recall in the context of model checking ATL* and ATL specifications in multi-agent systems with imperfect information. We present a novel three-valued semantics for ATL*, respectively ATL, under bounded recall and imperfect information, and study the corresponding model checking problems. Most importantly, we show that the three-valued semantics constitutes an approximation with respect to the traditional two-valued semantics. In the light of this, we construct a sound, albeit partial, algorithm for model checking two-valued perfect recall via its approximation as three-valued bounded recall.
Belardinelli F, Lomuscio A, Murano A, et al., 2018, Alternating-time temporal logic on finite traces, International Joint Conference on Artificial Intelligence, Pages: 77-83, ISSN: 1045-0823
We develop a logic-based technique to analyse finite interactions in multi-agent systems. We introduce a semantics for Alternating-time Temporal Logic (for both perfect and imperfect recall) and its branching-time fragments in which paths are finite instead of infinite. We study validities of these logics and present optimal algorithms for their model-checking problems in the perfect recall case.
Belardinelli F, Murano A, Lomuscio A, et al., 2018, Decidable verification of multi-agent systems with bounded private actions, Autonomous Agents and Multiagent Systems, Pages: 1865-1867, ISSN: 1548-8403
Hasani R, Haerle D, Baumgartner C, et al., 2017, Compositional neural-network modeling of complex analog circuits, Proceedings of the 30th International Conference on Neural Networks, Publisher: IEEE, Pages: 2235-2242
We introduce CompNN, a compositional method for the construction of a neural network (NN) capturing the dynamic behavior of a complex analog multiple-input multiple-output (MIMO) system. CompNN first learns, for each input/output pair (i, j), a small nonlinear auto-regressive neural network with exogenous input (NARX) representing the transfer function h_ij. The training dataset is generated by varying only input i of the MIMO. Then, for each output j, the transfer functions h_ij are combined by a time-delayed neural network (TDNN) layer, f_j. The training dataset for f_j is generated by varying all MIMO inputs. The final output is f = (f_1, ..., f_n). The NNs' parameters are learned using the Levenberg-Marquardt back-propagation algorithm. We apply CompNN to learn an NN abstraction of a CMOS band-gap voltage-reference circuit (BGR). First, we learn the NARX NNs corresponding to the trimming, load-jump and line-jump responses of the circuit. Then, we recompose the outputs by training the second-layer TDNN structure. We demonstrate the performance of the learned NN in the transient simulation of the BGR, reducing the simulation time by a factor of 17 compared to transistor-level simulations. CompNN allows us to map particular parts of the NN to specific behavioral features of the BGR. To the best of our knowledge, CompNN is the first method to learn the NN of an analog integrated circuit (MIMO system) in a compositional fashion.
Belardinelli F, Lomuscio A, Murano A, et al., 2017, Verification of Multi-agent Systems with Imperfect Information and Public Actions, 16th Conference on Autonomous Agents and MultiAgent Systems
Belardinelli F, Kouvaros P, Lomuscio A, 2017, Parameterised verification of data-aware multi-agent systems, International Joint Conference on Artificial Intelligence, Pages: 98-104, ISSN: 1045-0823
We introduce parameterised data-aware multiagent systems, a formalism to reason about the temporal properties of arbitrarily large collections of homogeneous agents, each operating on an infinite data domain. We show that their parameterised verification problem is semi-decidable for classes of interest. This is demonstrated by separately addressing the unboundedness of the number of agents and the data domain. In doing so we reduce the parameterised model checking problem for these systems to that of parameterised verification for interleaved interpreted systems. We illustrate the expressivity of the formal model by modelling English auctions with an unbounded number of bidders on unbounded data.
Belardinelli F, Lomuscio A, Murano A, et al., 2017, Verification of Broadcasting Multi-Agent Systems against an Epistemic Strategy Logic, International Joint Conference on Artificial Intelligence
Lomuscio AR, Qu H, Raimondi F, 2017, MCMAS: an open-source model checker for the verification of multi-agent systems, International Journal on Software Tools for Technology Transfer, ISSN: 1433-2779
We present MCMAS, a model checker for the verification of multi-agent systems. MCMAS supports efficient symbolic techniques for the verification of multi-agent systems against specifications representing temporal, epistemic and strategic properties. We present the underlying semantics of the specification language supported and the algorithms implemented in MCMAS, including its fairness and counterexample generation features. We provide a detailed description of the implementation. We illustrate its use by discussing a number of examples and evaluate its performance by comparing it against other model checkers for multi-agent systems on a common case study.
Belardinelli F, Lomuscio A, 2017, Agent-based Abstractions for Verifying Alternating-time Temporal Logic with Imperfect Information, 16th International Conference on Autonomous Agents and Multiagent Systems (AAMAS), Publisher: ASSOC COMPUTING MACHINERY, Pages: 1259-1267
Ezekiel J, Lomuscio AR, 2016, Combining fault injection and model checking to verify fault tolerance, recoverability, and diagnosability in multi-agent systems, Information and Computation, Vol: 254, Pages: 167-194, ISSN: 1090-2651
We present an automated technique that combines fault injection with model checking to verify fault tolerance, recoverability, and diagnosability in multi-agent systems. We define a general method for mutating a multi-agent systems model representing correct behaviour by injecting faults into it, together with specification patterns based on temporal-epistemic formulas to reason about the correct and faulty behaviours of the mutated model. The technique is implemented in a toolkit that can be used to inject faults automatically into a multi-agent systems program. We demonstrate the usefulness of the methodology by injecting a number of faults into a model of the IEEE 802.5 token ring LAN protocol and analysing the protocol's fault tolerance through the verification of a number of temporal-epistemic specifications.
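The core idea of fault injection by model mutation can be sketched in a few lines (a hypothetical illustration, not the toolkit's actual API): a correct transition function is wrapped so that one chosen state exhibits a stuck-at fault.

```python
def inject_stuck_at(transition, faulty_state, stuck_successor):
    """Mutate a transition function: in faulty_state the faulty component
    ignores its input and always moves to stuck_successor."""
    def mutated(state, action):
        if state == faulty_state:
            return stuck_successor  # injected fault overrides correct behaviour
        return transition(state, action)
    return mutated

# correct behaviour of a toy two-state component (illustrative)
def correct(state, action):
    table = {("idle", "start"): "busy", ("busy", "stop"): "idle"}
    return table.get((state, action), state)

# mutated model in which the 'busy' state can never be left
faulty = inject_stuck_at(correct, "busy", "busy")
```

Model checking the mutated model against temporal-epistemic patterns then reveals whether the system tolerates, recovers from, or can diagnose the injected fault.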
Kouvaros P, Lomuscio A, 2016, Parameterised model checking for alternating-time temporal logic, 22nd European Conference on Artificial Intelligence (ECAI), Publisher: IOS Press, Pages: 1230-1238, ISSN: 0922-6389
We investigate the parameterised model checking problem for specifications expressed in alternating-time temporal logic. We introduce parameterised concurrent game structures representing infinitely many games with different numbers of agents. We introduce a parametric variant of ATL to express properties of the system irrespective of the number of agents present in the system. While the parameterised model checking problem is undecidable, we define a special class of systems on which we develop a sound and complete counter abstraction technique. We illustrate the methodology devised here on the prioritised version of the train-gate-controller.
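Counter abstraction, in its simplest form, tracks only how many agents occupy each local state rather than which agent does. The sketch below (illustrative names, ignoring synchronisation and strategic operators) enumerates the counter-abstract reachable states for n identical agents with asynchronous local transitions.

```python
def counter_abstraction(local_states, local_trans, n):
    """Explore the counter-abstract state space: a global state is a tuple of
    counts (one per local state) summing to n; a step moves one agent along a
    local transition. Returns the set of reachable count vectors."""
    def successors(counts):
        for i, s in enumerate(local_states):
            if counts[i] == 0:
                continue  # no agent currently in local state s
            for t in local_trans.get(s, ()):
                j = local_states.index(t)
                nxt = list(counts)
                nxt[i] -= 1  # one agent leaves s ...
                nxt[j] += 1  # ... and enters t
                yield tuple(nxt)

    init = tuple([n] + [0] * (len(local_states) - 1))  # all agents start in the first state
    seen, frontier = {init}, [init]
    while frontier:
        c = frontier.pop()
        for nxt in successors(c):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen
```

The abstract state space depends on the number of local states but not on agent identities, which is what makes reasoning about arbitrarily many agents feasible in the restricted classes the paper identifies.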
Lomuscio A, Vardi MY, 2016, Preface, 4th International Workshop on Strategic Reasoning, ISSN: 2075-2180
Boureanu I, Kouvaros P, Lomuscio A, 2016, Verifying Security Properties in Unbounded Multiagent Systems, 2016 International Conference on Autonomous Agents & Multiagent Systems (AAMAS '16), Publisher: ACM, Pages: 1209-1217
We study the problem of analysing the security for an unbounded number of concurrent sessions of a cryptographic protocol. Our formal model accounts for an arbitrary number of agents involved in a protocol-exchange which is subverted by a Dolev-Yao attacker. We define the parameterised model checking problem with respect to security requirements expressed in temporal-epistemic logics. We formulate sufficient conditions for solving this problem, by analysing several finite models of the system. We primarily explore authentication and key-establishment as part of a larger class of protocols and security requirements amenable to our methodology. We introduce a tool implementing the technique, and we validate it by verifying the NSPK and ASRPC protocols.
Kouvaros P, Lomuscio A, 2016, Formal Verification of Opinion Formation in Swarms, 2016 International Conference on Autonomous Agents & Multiagent Systems (AAMAS '16), Publisher: ACM, Pages: 1200-1208
We investigate the formal verification of consensus protocols in swarm systems composed of arbitrarily many agents. We use templates to define the behaviour of the agents in an opinion dynamics setting and formulate their verification in terms of the associated parameterised model checking problem. We define a finite abstract model that we show to simulate swarms of any size, thereby precisely encoding any concrete instantiation of the swarm. We give an automatic procedure for verifying temporal-epistemic properties of consensus protocols by model checking the associated finite abstract model. We present a toolkit that can be used to generate the abstract model automatically and verify a given protocol by symbolic model checking. We use the toolkit to verify the correctness of majority rule protocols in arbitrarily large swarms.
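The majority-rule protocols mentioned above can be illustrated by a small simulation (a toy sketch, not the paper's formal agent template): at each step three agents are sampled and all three adopt the majority opinion of the triple, until the swarm reaches consensus.

```python
import random

def majority_rule_step(opinions, rng):
    """One protocol step: sample three agents; all adopt the triple's majority."""
    i, j, k = rng.sample(range(len(opinions)), 3)
    majority = 1 if opinions[i] + opinions[j] + opinions[k] >= 2 else 0
    for a in (i, j, k):
        opinions[a] = majority
    return opinions

def run_until_consensus(n, ones, seed=0, max_steps=100000):
    """Simulate a swarm of n binary-opinion agents ('ones' of them holding 1)
    until consensus; returns the consensus opinion, or None if max_steps hit."""
    rng = random.Random(seed)
    opinions = [1] * ones + [0] * (n - ones)
    for _ in range(max_steps):
        if sum(opinions) in (0, n):  # all agents agree
            return opinions[0]
        majority_rule_step(opinions, rng)
    return None
```

Simulation only samples runs, of course; the point of the parameterised model checking approach is to certify the temporal-epistemic correctness of such protocols for every swarm size at once.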
This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.