Imperial College London

Dr Antonio Rago

Faculty of Engineering, Department of Computing

Research Associate
 
 
 

Contact

 

a.rago

 
 

Location

 

417 Huxley Building, South Kensington Campus


Summary

 

Publications


35 results found

Jiang J, Leofante F, Rago A, Toni F et al., 2022, Formalising the Robustness of Counterfactual Explanations for Neural Networks, The 37th AAAI Conference on Artificial Intelligence

Conference paper

Albini E, Rago A, Baroni P, Toni F et al., 2022, Descriptive accuracy in explanations: the case of probabilistic classifiers, 15th International Conference on Scalable Uncertainty Management (SUM 2022)

A user receiving an explanation for outcomes produced by an artificially intelligent system expects that it satisfies the key property of descriptive accuracy (DA), i.e. that the explanation contents are in correspondence with the internal working of the system. Crucial as this property appears to be, it has been somehow overlooked in the XAI literature to date. To address this problem, we consider the questions of formalising DA and of analysing its satisfaction by explanation methods. We provide formal definitions of naive, structural and dialectical DA, using the family of probabilistic classifiers as the context for our analysis. We evaluate the satisfaction of our given notions of DA by several explanation methods, amounting to two popular feature-attribution methods from the literature and a novel form of explanation that we propose, and complement our analysis with experiments carried out on a varied selection of concrete probabilistic classifiers.

Conference paper

Sukpanichnant P, Rago A, Lertvittayakumjorn P, Toni F et al., 2022, Neural QBAFs: explaining neural networks under LRP-based argumentation frameworks, International Conference of the Italian Association for Artificial Intelligence, Publisher: Springer International Publishing, Pages: 429-444, ISSN: 0302-9743

In recent years, there have been many attempts to combine XAI with the field of symbolic AI in order to generate explanations for neural networks that are more interpretable and better align with human reasoning, with one prominent candidate for this synergy being the sub-field of computational argumentation. One method is to represent neural networks with quantitative bipolar argumentation frameworks (QBAFs) equipped with a particular semantics. The resulting QBAF can then be viewed as an explanation for the associated neural network. In this paper, we explore a novel LRP-based semantics under a new QBAF variant, namely neural QBAFs (nQBAFs). Since an nQBAF of a neural network is typically large, the nQBAF must be simplified before being used as an explanation. Our empirical evaluation indicates that the manner of this simplification is all important for the quality of the resulting explanation.

Conference paper

Jiang J, Rago A, Toni F, 2022, Should counterfactual explanations always be data instances?, XLoKR 2022: The Third Workshop on Explainable Logic-Based Knowledge Representation

Counterfactual explanations (CEs) are an increasingly popular way of explaining machine learning classifiers. Predominantly, they amount to data instances pointing to potential changes to the inputs that would lead to alternative outputs. In this position paper we question the widespread assumption that CEs should always be data instances, and argue instead that in some cases they may be better understood in terms of special types of relations between input features and classification variables. We illustrate how a special type of these relations, amounting to critical influences, can characterise and guide the search for data instances deemed suitable as CEs. These relations also provide compact indications of which input features - rather than their specific values in data instances - have counterfactual value.

Conference paper

Irwin B, Rago A, Toni F, 2022, Argumentative forecasting, AAMAS 2022, Publisher: ACM, Pages: 1636-1638

We introduce the Forecasting Argumentation Framework (FAF), a novel argumentation framework for forecasting informed by recent judgmental forecasting research. FAFs comprise update frameworks which empower (human or artificial) agents to argue over time with and about the probability of scenarios, whilst flagging perceived irrationality in their behaviour with a view to improving their forecasting accuracy. FAFs include three argument types with future forecasts and aggregate the strength of these arguments to inform estimates of the likelihood of scenarios. We describe an implementation of FAFs for supporting forecasting agents.

Conference paper

Irwin B, Rago A, Toni F, 2022, Forecasting argumentation frameworks, 19th International Conference on Principles of Knowledge Representation and Reasoning (KR 2022), Publisher: IJCAI Organisation, ISSN: 2334-1033

We introduce Forecasting Argumentation Frameworks (FAFs), a novel argumentation-based methodology for forecasting informed by recent judgmental forecasting research. FAFs comprise update frameworks which empower (human or artificial) agents to argue over time about the probability of outcomes, e.g. the winner of a political election or a fluctuation in inflation rates, whilst flagging perceived irrationality in the agents' behaviour with a view to improving their forecasting accuracy. FAFs include five argument types, amounting to standard pro/con arguments, as in bipolar argumentation, as well as novel proposal arguments and increase/decrease amendment arguments. We adapt an existing gradual semantics for bipolar argumentation to determine the aggregated dialectical strength of proposal arguments and define irrational behaviour. We then give a simple aggregation function which produces a final group forecast from rational agents' individual forecasts. We identify and study properties of FAFs and conduct an empirical evaluation which signals FAFs' potential to increase the forecasting accuracy of participants.

Conference paper

Rago A, Baroni P, Toni F, 2022, Explaining causal models with argumentation: the case of bi-variate reinforcement, 19th International Conference on Principles of Knowledge Representation and Reasoning (KR 2022), Publisher: IJCAI Organisation, ISSN: 2334-1033

Causal models are playing an increasingly important role in machine learning, particularly in the realm of explainable AI. We introduce a conceptualisation for generating argumentation frameworks (AFs) from causal models for the purpose of forging explanations for the models' outputs. The conceptualisation is based on reinterpreting desirable properties of semantics of AFs as explanation moulds, which are means for characterising the relations in the causal model argumentatively. We demonstrate our methodology by reinterpreting the property of bi-variate reinforcement as an explanation mould to forge bipolar AFs as explanations for the outputs of causal models. We perform a theoretical evaluation of these argumentative explanations, examining whether they satisfy a range of desirable explanatory and argumentative properties.

Conference paper

Čyras K, Kampik T, Cocarascu O, Rago A, Amgoud L, Baroni P, Bassiliades N, Black E, Calegari R, Collins A, Delobelle J, Fan X, García AJ, Hunter A, Kakas A, Kökciyan N, Liao B, Luo J, Morveli-Espinoza M, Mosca F, Nieves JC, Panisson AR, Parsons S, Potyka N, Prakken H, Rienstra T, Rodrigues O, Saribatur ZG, Sassoon I, Sklar E, Straßer C, Tohme F, Ulbricht M, Villata S, Vassiliades A, Wallner J, van Woerkom W, Molinet B et al., 2022, Preface, ISSN: 1613-0073

Conference paper

Sukpanichnant P, Rago A, Lertvittayakumjorn P, Toni F et al., 2021, LRP-based argumentative explanations for neural networks, XAI.it 2021 - Italian Workshop on Explainable Artificial Intelligence, Pages: 71-84, ISSN: 1613-0073

In recent years, there have been many attempts to combine XAI with the field of symbolic AI in order to generate explanations for neural networks that are more interpretable and better align with human reasoning, with one prominent candidate for this synergy being the sub-field of computational argumentation. One method is to represent neural networks with quantitative bipolar argumentation frameworks (QBAFs) equipped with a particular semantics. The resulting QBAF can then be viewed as an explanation for the associated neural network. In this paper, we explore a novel LRP-based semantics under a new QBAF variant, namely neural QBAFs (nQBAFs). Since an nQBAF of a neural network is typically large, the nQBAF must be simplified before being used as an explanation. Our empirical evaluation indicates that the manner of this simplification is all important for the quality of the resulting explanation.

Conference paper

Albini E, Rago A, Baroni P, Toni F et al., 2021, Influence-driven explanations for Bayesian network classifiers, PRICAI 2021, Publisher: Springer Verlag, Pages: 88-100, ISSN: 0302-9743

We propose a novel approach to building influence-driven explanations (IDXs) for (discrete) Bayesian network classifiers (BCs). IDXs feature two main advantages wrt other commonly adopted explanation methods. First, IDXs may be generated using the (causal) influences between intermediate, in addition to merely input and output, variables within BCs, thus providing a deep, rather than shallow, account of the BCs' behaviour. Second, IDXs are generated according to a configurable set of properties, specifying which influences between variables count towards explanations. Our approach is thus flexible and can be tailored to the requirements of particular contexts or users. Leveraging on this flexibility, we propose novel IDX instances as well as IDX instances capturing existing approaches. We demonstrate IDXs' capability to explain various forms of BCs, and assess the advantages of our proposed IDX instances with both theoretical and empirical analyses.

Conference paper

Rago A, Cocarascu O, Bechlivanidis C, Toni F et al., 2021, Argumentation as a framework for interactive explanations for recommendations, KR 2020, 17th International Conference on Principles of Knowledge Representation and Reasoning, Publisher: IJCAI, Pages: 805-815, ISSN: 2334-1033

As AI systems become ever more intertwined in our personal lives, the way in which they explain themselves to and interact with humans is an increasingly critical research area. The explanation of recommendations is thus a pivotal functionality in a user's experience of a recommender system (RS), providing the possibility of enhancing many of its desirable features in addition to its effectiveness (accuracy wrt users' preferences). For an RS that we prove empirically is effective, we show how argumentative abstractions underpinning recommendations can provide the structural scaffolding for (different types of) interactive explanations (IEs), i.e. explanations empowering interactions with users. We prove formally that these IEs empower feedback mechanisms that guarantee that recommendations will improve with time, hence rendering the RS scrutable. Finally, we prove experimentally that the various forms of IE (tabular, textual and conversational) induce trust in the recommendations and provide a high degree of transparency in the RS's functionality.

Conference paper

Cyras K, Rago A, Emanuele A, Baroni P, Toni F et al., 2021, Argumentative XAI: a survey, The 30th International Joint Conference on Artificial Intelligence (IJCAI-21), Publisher: International Joint Conferences on Artificial Intelligence, Pages: 4392-4399

Explainable AI (XAI) has been investigated for decades and, together with AI itself, has witnessed unprecedented growth in recent years. Among various approaches to XAI, argumentative models have been advocated in both the AI and social science literature, as their dialectical nature appears to match some basic desirable features of the explanation activity. In this survey we overview XAI approaches built using methods from the field of computational argumentation, leveraging its wide array of reasoning abstractions and explanation delivery methods. We overview the literature focusing on different types of explanation (intrinsic and post-hoc), different models with which argumentation-based explanations are deployed, different forms of delivery, and different argumentation frameworks they use. We also lay out a roadmap for future work.

Conference paper

Cocarascu O, Cyras K, Rago A, Toni F et al., 2021, Mining property-driven graphical explanations for data-centric AI from argumentation frameworks, Human-Like Machine Intelligence, Pages: 93-113, ISBN: 9780198862536

Book chapter

Albini E, Baroni P, Rago A, Toni F et al., 2021, Interpreting and explaining PageRank through argumentation semantics, Intelligenza Artificiale, Vol: 15, Pages: 17-34, ISSN: 1724-8035

In this paper we show how re-interpreting PageRank as an argumentation semantics for a bipolar argumentation framework empowers its explainability. After showing that PageRank, naively re-interpreted as an argumentation semantics for support frameworks, fails to satisfy some generally desirable properties, we propose a novel approach able to reconstruct PageRank as a gradual semantics of a suitably defined bipolar argumentation framework, while satisfying these properties. We then show how the theoretical advantages afforded by this approach also enjoy an enhanced explanatory power: we propose several types of argument-based explanations for PageRank, each of which focuses on different aspects of the algorithm and uncovers information useful for the comprehension of its results.

Journal article
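The re-interpretation in the paper above starts from the standard PageRank fixed point. For intuition only, here is a minimal power-iteration sketch; the function name and the toy graph are illustrative, not taken from the paper:

```python
def pagerank(links, d=0.85, iters=50):
    """links: dict mapping each node to the list of nodes it links to."""
    nodes = sorted(set(links) | {m for ns in links.values() for m in ns})
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}  # uniform start
    for _ in range(iters):
        # every node keeps the teleportation share (1 - d) / n
        new = {v: (1.0 - d) / n for v in nodes}
        for v, outs in links.items():
            if outs:
                share = rank[v] / len(outs)  # split rank over outgoing links
                for m in outs:
                    new[m] += d * share
            else:  # dangling node: spread its rank uniformly
                for m in nodes:
                    new[m] += d * rank[v] / n
        rank = new
    return rank

# Symmetric 3-cycle: by symmetry all nodes converge to equal rank.
scores = pagerank({"a": ["b"], "b": ["c"], "c": ["a"]})
```

Viewing each link as a "support" between arguments is, roughly, the starting point of the bipolar-argumentation reading developed in the paper.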

Rago A, Cocarascu O, Bechlivanidis C, Lagnado D, Toni F et al., 2021, Argumentative explanations for interactive recommendations, Artificial Intelligence, Vol: 296, Pages: 1-22, ISSN: 0004-3702

A significant challenge for recommender systems (RSs), and in fact for AI systems in general, is the systematic definition of explanations for outputs in such a way that both the explanations and the systems themselves are able to adapt to their human users' needs. In this paper we propose an RS hosting a vast repertoire of explanations, which are customisable to users in their content and format, and thus able to adapt to users' explanatory requirements, while being reasonably effective (proven empirically). Our RS is built on a graphical chassis, allowing the extraction of argumentation scaffolding, from which diverse and varied argumentative explanations for recommendations can be obtained. These recommendations are interactive because they can be questioned by users and they support adaptive feedback mechanisms designed to allow the RS to self-improve (proven theoretically). Finally, we undertake user studies in which we vary the characteristics of the argumentative explanations, showing users' general preferences for more information, but also that their tastes are diverse, thus highlighting the need for our adaptable RS.

Journal article

Dejl A, He P, Mangal P, Mohsin H, Surdu B, Voinea E, Albini E, Lertvittayakumjorn P, Rago A, Toni F et al., 2021, Argflow: A toolkit for deep argumentative explanations for neural networks, AAMAS, Pages: 1749-1751, ISSN: 1548-8403

In recent years, machine learning (ML) models have been successfully applied in a variety of real-world applications. However, they are often complex and incomprehensible to human users. This can decrease trust in their outputs and render their usage in critical settings ethically problematic. As a result, several methods for explaining such ML models have been proposed recently, in particular for black-box models such as deep neural networks (NNs). Nevertheless, these methods predominantly explain outputs in terms of inputs, disregarding the inner workings of the ML model computing those outputs. We present Argflow, a toolkit enabling the generation of a variety of 'deep' argumentative explanations (DAXs) for outputs of NNs on classification tasks.

Conference paper

Rago A, Albini E, Baroni P, Toni F et al., 2021, Influence-driven explanations for Bayesian network classifiers, Publisher: arXiv

One of the most pressing issues in AI in recent years has been the need to address the lack of explainability of many of its models. We focus on explanations for discrete Bayesian network classifiers (BCs), targeting greater transparency of their inner workings by including intermediate variables in explanations, rather than just the input and output variables as is standard practice. The proposed influence-driven explanations (IDXs) for BCs are systematically generated using the causal relationships between variables within the BC, called influences, which are then categorised by logical requirements, called relation properties, according to their behaviour. These relation properties both provide guarantees beyond heuristic explanation methods and allow the information underpinning an explanation to be tailored to a particular context's and user's requirements, e.g., IDXs may be dialectical or counterfactual. We demonstrate IDXs' capability to explain various forms of BCs, e.g., naive or multi-label, binary or categorical, and also integrate recent approaches to explanations for BCs from the literature. We evaluate IDXs with theoretical and empirical analyses, demonstrating their considerable advantages when compared with existing explanation methods.

Working paper

Rago A, Russo F, Albini E, Baroni P, Toni F et al., 2021, Forging Argumentative Explanations from Causal Models, ISSN: 1613-0073

We introduce a conceptualisation for generating argumentation frameworks (AFs) from causal models for the purpose of forging explanations for models' outputs. The conceptualisation is based on reinterpreting properties of semantics of AFs as explanation moulds, which are means for characterising argumentative relations. We demonstrate our methodology by reinterpreting the property of bi-variate reinforcement in bipolar AFs, showing how the extracted bipolar AFs may be used as relation-based explanations for the outputs of causal models.

Conference paper

Albini E, Baroni P, Rago A, Toni F et al., 2020, PageRank as an Argumentation Semantics, Biennial International Conference on Computational Models of Argument (COMMA), Publisher: IOS PRESS, Pages: 55-66, ISSN: 0922-6389

Conference paper

Albini E, Rago A, Baroni P, Toni F et al., 2020, Relation-Based Counterfactual Explanations for Bayesian Network Classifiers, The 29th International Joint Conference on Artificial Intelligence (IJCAI 2020)

Conference paper

Cocarascu O, Rago A, Toni F, 2020, Explanation via Machine Arguing, Pages: 53-84, ISBN: 9783030600662

As AI becomes ever more ubiquitous in our everyday lives, its ability to explain to and interact with humans is evolving into a critical research area. Explainable AI (XAI) has therefore emerged as a popular topic but its research landscape is currently very fragmented. Explanations in the literature have generally been aimed at addressing individual challenges and are often ad-hoc, tailored to specific AIs and/or narrow settings. Further, the extraction of explanations is no simple task; the design of the explanations must be fit for purpose, with considerations including, but not limited to: Is the model or a result being explained? Is the explanation suited to skilled or unskilled explainees? By which means is the information best exhibited? How may users interact with the explanation? As these considerations rise in number, it quickly becomes clear that a systematic way to obtain a variety of explanations for a variety of users and interactions is much needed. In this tutorial we will overview recent approaches showing how these challenges can be addressed by utilising forms of machine arguing as the scaffolding underpinning explanations that are delivered to users. Machine arguing amounts to the deployment of methods from computational argumentation in AI with suitably mined argumentation frameworks, which provide abstractions of “debates”. Computational argumentation has been widely used to support applications requiring information exchange between AI systems and users, facilitated by the fact that the capability of arguing is pervasive in human affairs and arguing is core to a multitude of human activities: humans argue to explain, interact and exchange information. Our lecture will focus on how machine arguing can serve as the driving force of explanations in AI in different ways, namely: by building explainable systems with argumentative foundations from linguistic data (focusing on reviews), or by extracting argumentative reasoning from existing…

Book chapter

Cocarascu O, Rago A, Toni F, 2019, From formal argumentation to conversational systems, 1st Workshop on Conversational Interaction Systems (WCIS 2019)

Arguing is amenable to humans and argumentation serves as a natural form of interaction in many settings. Several formal models of argumentation have been proposed in the AI literature as abstractions of various forms of debates. We show how these models can serve as the backbone of conversational systems that can explain machine-computed outputs. These systems can engage in conversations with humans following templates instantiated on argumentation models that are automatically obtained from the data analysis underpinning the machine-computed outputs. As an illustration, we consider one such argumentation-empowered conversational system and exemplify its use and benefits in two different domains, for recommending movies and hotels based on the aggregation of information drawn from reviews.

Conference paper

Cocarascu O, Rago A, Toni F, 2019, Extracting dialogical explanations for review aggregations with argumentative dialogical agents, International Conference on Autonomous Agents and MultiAgent Systems (AAMAS), Publisher: International Foundation for Autonomous Agents and Multiagent Systems

The aggregation of online reviews is fast becoming the chosen method of quality control for users in various domains, from retail to entertainment. Consequently, fair, thorough and explainable aggregation of reviews is increasingly sought-after. We consider the movie review domain, and in particular Rotten Tomatoes' ubiquitous (and arguably over-simplified) aggregation method, the Tomatometer Score (TS). For a movie, this amounts to the percentage of critics giving the movie a positive review. We define a novel form of argumentative dialogical agent (ADA) for explaining the reasoning within the reviews. ADA integrates: 1.) NLP with reviews to extract a Quantitative Bipolar Argumentation Framework (QBAF) for any chosen movie to provide the underlying structure of explanations, and 2.) gradual semantics for QBAFs for deriving a dialectical strength measure for movies, as an alternative to the TS, satisfying desirable properties for obtaining explanations. We evaluate ADA using some prominent NLP methods and gradual semantics for QBAFs. We show that they provide a dialectical strength which is comparable with the TS, while at the same time being able to provide dialogical explanations of why a movie obtained its strength via interactions between the user and ADA.

Conference paper
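The dialectical strength measure ADA derives comes from a gradual semantics for the mined QBAF; one such semantics evaluated in this line of work is DF-QuAD. A one-level sketch follows, assuming the DF-QuAD aggregation and combination functions; the argument names and scores are invented purely for illustration, and the full semantics recurses over the whole QBAF rather than a single level:

```python
from math import prod

def aggregate(strengths):
    # DF-QuAD aggregation of attacker (or supporter) strengths
    return 1.0 - prod(1.0 - s for s in strengths)

def dfquad(base, attackers, supporters):
    """Combine a base score with aggregated attack/support (all in [0, 1])."""
    va, vs = aggregate(attackers), aggregate(supporters)
    if va >= vs:
        return base - base * (va - vs)      # net attack drags strength down
    return base + (1.0 - base) * (vs - va)  # net support lifts it towards 1

# A movie with base score 0.5, one attacking review argument (0.4) and
# two supporting ones (0.6 and 0.3) -- hypothetical values.
strength = dfquad(0.5, [0.4], [0.6, 0.3])  # net support lifts 0.5 to 0.66
```

The resulting number plays the role the Tomatometer Score plays in Rotten Tomatoes' aggregation, but each step of its computation can be unfolded into a dialogical explanation.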

Baroni P, Rago A, Toni F, 2019, From fine-grained properties to broad principles for gradual argumentation: A principled spectrum, International Journal of Approximate Reasoning, Vol: 105, Pages: 252-286, ISSN: 0888-613X

The study of properties of gradual evaluation methods in argumentation has received increasing attention in recent years, with studies devoted to various classes of frameworks/methods leading to conceptually similar but formally distinct properties in different contexts. In this paper we provide a novel systematic analysis for this research landscape by making three main contributions. First, we identify groups of conceptually related properties in the literature, which can be regarded as based on common patterns and, using these patterns, we evidence that many further novel properties can be considered. Then, we provide a simplifying and unifying perspective for these groups of properties by showing that they are all implied by novel parametric principles of (either strict or non-strict) balance and monotonicity. Finally, we show that (instances of) these principles (and thus the group, literature and novel properties that they imply) are satisfied by several quantitative argumentation formalisms in the literature, thus confirming the principles' general validity and utility to support a compact, yet comprehensive, analysis of properties of gradual argumentation.

Journal article

Rago A, 2019, Gradual Evaluation in Argumentation Frameworks: Methods, Properties and Applications

Gradual evaluation methods in argumentation frameworks provide semantics for assessing the gradual acceptance of arguments, differing from the qualitative semantics that have been used in argument evaluation since argumentation’s conception. These methods and their semantics are wide-ranging; they comprise those for group acceptance, probabilistic measures and game-theoretical strength, amongst many others. This affords numerous application areas and so the requisite behaviour for each needs to be justified by theoretical proofs of useful properties for a specific application. Our contributions to this field span three interweaving sub-categories, namely methods, properties and applications. For gradual evaluation methods, we develop a number of novel and useful methods themselves. For each method we detail the semantics’ and the frameworks’ definitions, then undertake theoretical evaluations based on their properties, before applications targeting real-world problems are suggested for each method. As for gradual evaluation properties, we undertake a systematic analysis for this research landscape by first identifying groups of conceptually related properties in the literature and provide a simplifying and unifying perspective for these properties by showing that all the considered literature properties are implied by four, novel parametric principles. We then validate these principles by showing that they are satisfied by several quantitative argumentation formalisms in the literature. We also instantiate the extensive number of implied properties of these principles which are not present in the literature. These properties are also used to extract argumentation explanations for recommendations in recommender systems, a novel concept and application.

Thesis dissertation

Cocarascu O, Cyras K, Rago A, Toni F et al., 2018, Explaining with Argumentation Frameworks Mined from Data, The International Workshop on Dialogue, Explanation and Argumentation in Human-Agent Interaction (DEXAHAI)

Conference paper

Baroni P, Borsato S, Rago A, Toni F et al., 2018, The "Games of Argumentation" web platform, 7th International Conference on Computational Models of Argument (COMMA 2018), Publisher: IOS Press, Pages: 447-448, ISSN: 0922-6389

This demo presents the web system “Games of Argumentation”, which allows users to build argumentation graphs and examine them in a game-theoretical manner using up to three different evaluation techniques. The concurrent evaluations of arguments using different techniques, which may be qualitative or quantitative, provides a significant aid to users in both understanding game-theoretical argumentation semantics and pinpointing their differences from alternative semantics, traditional or otherwise, to differentiate between them.

Conference paper

Rago A, Baroni P, Toni F, 2018, On instantiating generalised properties of gradual argumentation frameworks, SUM 2018, Publisher: Springer Verlag, Pages: 243-259, ISSN: 0302-9743

Several gradual semantics for abstract and bipolar argumentation have been proposed in the literature, ascribing to each argument a value taken from a scale, i.e. an ordered set. These values somewhat match the arguments’ dialectical status and provide an indication of their dialectical strength, in the context of the given argumentation framework. These research efforts have been complemented by formulations of several properties that these gradual semantics may satisfy. More recently a synthesis of many literature properties into more general groupings based on parametric definitions has been proposed. In this paper we show how this generalised parametric formulation enables the identification of new properties not previously considered in the literature and discuss their usefulness to capture alternative requirements coming from different application contexts.

Conference paper
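Parametric principles like balance and monotonicity lend themselves to mechanical sanity checks. Below is a hedged sketch: a toy bipolar semantics (invented for illustration, not one defined in the paper) is tested by random search against non-strict balance (equal aggregate attack and support leave the base score unchanged) and non-strict monotonicity (extra support never lowers strength):

```python
import random

def strength(base, attack, support):
    # Toy gradual semantics: move the base score towards 1 (or 0)
    # in proportion to the margin of support over attack; all inputs in [0, 1].
    if attack >= support:
        return base - base * (attack - support)
    return base + (1.0 - base) * (support - attack)

random.seed(0)
for _ in range(1000):
    b, a, s = random.random(), random.random(), random.random()
    # non-strict balance: equal attack and support leave the base untouched
    assert abs(strength(b, a, a) - b) < 1e-12
    # non-strict monotonicity: more support never lowers the strength
    assert strength(b, a, min(1.0, s + 0.1)) >= strength(b, a, s) - 1e-12
```

Random testing of course only falsifies; the paper's point is that such properties follow once a semantics instantiates the generalised parametric formulation.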

Rago A, Cocarascu O, Toni F, 2018, Argumentation-based recommendations: fantastic explanations and how to find them, The Twenty-Seventh International Joint Conference on Artificial Intelligence, (IJCAI 2018), Pages: 1949-1955

A significant problem of recommender systems is their inability to explain recommendations, resulting in turn in ineffective feedback from users and the inability to adapt to users’ preferences. We propose a hybrid method for calculating predicted ratings, built upon an item/aspect-based graph with users’ partially given ratings, that can be naturally used to provide explanations for recommendations, extracted from user-tailored Tripolar Argumentation Frameworks (TFs). We show that our method can be understood as a gradual semantics for TFs, exhibiting a desirable, albeit weak, property of balance. We also show experimentally that our method is competitive in generating correct predictions, compared with state-of-the-art methods, and illustrate how users can interact with the generated explanations to improve quality of recommendations.

Conference paper

Rago A, Baroni P, Toni F, 2018, Scalable uncertainty management, Scalable Uncertainty Management (SUM 2018), Publisher: Springer Verlag, ISSN: 0302-9743

Several gradual semantics for abstract and bipolar argumentation have been proposed in the literature, ascribing to each argument a value taken from a scale, i.e. an ordered set. These values somewhat match the arguments’ dialectical status and provide an indication of their dialectical strength, in the context of the given argumentation framework. These research efforts have been complemented by formulations of several properties that these gradual semantics may satisfy. More recently a synthesis of many literature properties into more general groupings based on parametric definitions has been proposed. In this paper we show how this generalised parametric formulation enables the identification of new properties not previously considered in the literature and discuss their usefulness to capture alternative requirements coming from different application contexts.

Conference paper

This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.
