HLC supported projects

 

HLC Kick-start project (Year 1)
Project Title: Evaluating dialectical explanations for recommendations
PI: Prof Francesca Toni (Imperial College London, Computing)
Other partners: Dave Lagnado and Christos Bechlivanidis (UCL, Experimental Psychology) - CoIs
Antonio Rago (Imperial College London, Computing) – PostDoc
Summary of the project

The project aimed to address the lack of transparency of AI techniques, e.g. machine learning algorithms or recommender systems: one of the most pressing issues in the field, especially given the ever-increasing integration of AI into everyday systems used by experts and non-experts alike, and the need to explain how and/or why these systems compute their outputs, for any or for specific inputs. The need for explainability arises for a number of reasons: an expert may require more transparency to justify the outputs of an AI system, especially in safety-critical situations, while a non-expert may place more trust in an AI system that provides basic (rather than no) explanations of, for example, the films suggested by a recommender system.

The main aim of this project was to conduct experiments to determine whether, and which, computed dialectical explanations, extracted from argumentation graphs for explaining recommendations, are useful to humans, and whether human feedback can improve the outputs of the recommender system. The planned experiments were identified as a way to confirm or falsify the hypothesis that argumentation can serve as a paradigm for human-machine interaction, in the specific setting of recommender systems and argumentative explanations. The project resulted in a number of publications, including:

- Argumentation as a Framework for Interactive Explanations for Recommendations. Antonio Rago, Oana Cocarascu, Christos Bechlivanidis and Francesca Toni. KR 2020. https://proceedings.kr.org/2020/83/

- Mining Property-driven Graphical Explanations for Data-centric AI from Argumentation Frameworks. Oana Cocarascu, Kristijonas Cyras, Antonio Rago, Francesca Toni, in Human-Like Machine Intelligence edited by Stephen Muggleton and Nick Chater. Oxford University Press, 2021.

- Argumentative Explanations for Interactive Recommendations. Antonio Rago, Oana Cocarascu, Christos Bechlivanidis, David Lagnado and Francesca Toni  (Submitted to AIJ)
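To give a flavour of the kind of dialectical explanation studied in these works, the sketch below is a minimal, purely illustrative example (the Argument class, the additive scoring rule and the sample arguments are assumptions made here for illustration, not the model used in the publications above): a recommendation is scored by aggregating supporting and attacking arguments, and the same arguments are then shown to the user as the explanation.

# Illustrative sketch only: a toy argumentative explanation for a recommendation.
# The data structure and scoring rule are assumptions, not the project's model.
from dataclasses import dataclass

@dataclass
class Argument:
    text: str        # human-readable content of the argument
    polarity: int    # +1 supports the recommendation, -1 attacks it
    strength: float  # assumed strength in [0, 1]

def explain(item, arguments):
    """Score the item additively and present the arguments as an explanation."""
    score = sum(a.polarity * a.strength for a in arguments)
    verdict = "recommended" if score > 0 else "not recommended"
    lines = [f"{item} is {verdict} (score {score:+.2f}) because:"]
    for a in sorted(arguments, key=lambda a: -a.strength):
        sign = "+" if a.polarity > 0 else "-"
        lines.append(f"  [{sign}] {a.text} (strength {a.strength:.2f})")
    return "\n".join(lines)

print(explain("Film X", [
    Argument("You rated similar sci-fi films highly", +1, 0.8),
    Argument("Directed by a director you follow", +1, 0.6),
    Argument("Longer than films you usually watch", -1, 0.3),
]))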

Keywords: Explainable AI, Computational argumentation, Recommender systems
Links: Final report
Project completed and the final report was reviewed.
HLC Kick-start project (Year 1)
Project Title: Social Sensing
PI: Prof Patrick G.T. Healey (Queen Mary University of London)
Other partners: Dr Hamed Haddadi (Imperial College London) - CoI
Lida Theodorou (Queen Mary University of London) – PostDoc
Summary of the project

There are many contexts in which it would be useful to have a better understanding of human interaction ‘in-the-wild’. In particular, there is clear evidence that the frequency and quality of social interaction are critical factors in determining physical and mental health outcomes, including a substantial impact on mortality (Landis-Holt, 2010). However, current methods for assessing social engagement are coarse-grained and rely heavily on subjective self-report. This project assessed the feasibility of developing unobtrusive, quantitative methods for capturing the frequency, quality and context of everyday social interactions. The aim was to identify new ways of enabling machines to perceive, recognise and engage with basic patterns of human interaction, to enable more effective communication and collaboration.

The approach is based on results from work on optical motion capture of live conversation showing that people move in characteristic ways during face-to-face conversation (Battersby and Healey, 2010; Healey, Plant, Howes and Lavelle, 2015). In particular, speakers’ hand movements increase during conversation, whereas their addressees move their hands significantly less than normal. This leads to the hypothesis that the frequency and degree of engagement in interaction might have distinct motion signatures. If correct, this would provide a way to sense patterns of social interaction without requiring explicit self-report or potentially intrusive audio or video recordings. While a great deal of attention has been paid to sensing physical activity using motion sensors, this has not been applied to capturing the quality of social activity in this way. For example, the Avon Longitudinal Study of Parents and Children (ALSPAC) and UK Biobank include wrist-worn accelerometer data but do not contain significant information on social interaction and have not been analysed to detect it (Willetts et al. 2018; Mattocks et al. 2008).
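As a purely illustrative sketch of the kind of motion feature such an approach might start from (the sampling rate, window length, threshold and synthetic data below are assumptions made for this sketch, not the project's analysis pipeline), one could compute a windowed movement-magnitude signal from wrist-accelerometer samples and flag windows of elevated hand movement as candidate periods of conversational engagement:

# Illustrative sketch only: windowed movement magnitude from a wrist accelerometer.
# Sampling rate, window length, threshold and the synthetic data are assumed values.
import numpy as np

FS = 30        # assumed sampling rate in Hz
WINDOW_S = 10  # window length in seconds

def movement_magnitude(acc):
    """Mean deviation of acceleration magnitude from 1 g, per window.
    acc: array of shape (n_samples, 3) with x, y, z acceleration in g."""
    mag = np.linalg.norm(acc, axis=1)   # overall acceleration magnitude
    dev = np.abs(mag - 1.0)             # rough removal of the gravity component
    win = FS * WINDOW_S
    n = len(dev) // win
    return dev[: n * win].reshape(n, win).mean(axis=1)

def candidate_engagement_windows(acc, threshold=0.05):
    """Boolean mask of windows with elevated hand movement (hypothetical cue)."""
    return movement_magnitude(acc) > threshold

rng = np.random.default_rng(0)
wear = 0.02 * rng.standard_normal((FS * 60, 3))   # one minute of quiet wear
wear[:, 2] += 1.0                                 # gravity along the z axis
wear[FS * 20 : FS * 40] += 0.1 * rng.standard_normal((FS * 20, 3))  # livelier period
print(candidate_engagement_windows(wear))         # True for the livelier windows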

Follow-on Grant Applications:
- Healey (PI) QMUL / Ove Arup Partners “Sensing Social Ecologies” Bid to Alan Turing Institute Urban Analytics Calls. 1 Apr 2020 - 30 Sep 2020. (£50k FEC) Unsuccessful.
- Healey (PI) QMUL / UCL / CITY / Bristol. “Social Health” Bid to EPSRC Healthcare Technologies Call. Outline Stage. 1 Sep 2020 - 31 Aug 2025. (£7.3m FEC) Unsuccessful.

Conference Presentations:
- Healey, P.G.T., Theodorou, L. and Haddadi, H. (2019) "Social Health: Mapping the quality of social interactions in the wild". Invited Talk, Human-Like Computing Machine Intelligence Workshop (MI21-HLC), 30th June – 3rd July, Cumberland Lodge, Windsor, UK.
- Healey, P.G.T., Theodorou, L. and Haddadi, H. (2019) “The Dynamics of Hand Movements in Dialogue”. 29th Meeting of the Society for Text and Discourse, July 9th - July 11th, 2019, New York City, United States.

Publications:
- Healey, P.G.T. (forthcoming) “Human-Like Communication” in Human-Like Machine Intelligence edited by Stephen Muggleton and Nick Chater. Oxford University Press.
- Healey, P.G.T., Theodorou, L., Hänsel, K., Cavallaro, A., Tokarchuk, L., Haddadi, H. and Katevas, K. (in prep) “Hand Movements Signal Social Engagement”. For submission to Nature Human Behaviour.

Keywords: Human-Like Communication, Social sensing
Links: Final report
Project completed and the final report was reviewed.

HLC Kick-start project (Year 2)
Project Title: Attention guidance for multi-task displays using human-like cognitive assistants
PI: Dr Szonya Durant (Department of Psychology, Royal Holloway University of London)
Other partners:

Dr Kostas Stathis, Co-I, Department of Computer Science, Royal Holloway University of London
Benedict Wilkins, Main postgraduate researcher, Department of Computer Science, Royal Holloway University of London
Emanuele Uliana, Postgraduate researcher, Department of Computer Science, Royal Holloway University of London
Callum Woods, Postgraduate researcher, Department of Psychology, Royal Holloway University of London

Summary of the project

Our research hypothesis is that human-like assistance in multiple-display systems should model the user’s joint activity with the display, take into account the user’s attention in this context and establish a mutual understanding of the environment. Our aim is to provide a proof-of-concept prototype that can facilitate future experimental evaluations of these concepts. To build the prototype we will focus on interface functionality, modelling a single user interacting with the MATB-II computer-based task supplied by NASA and designed to evaluate operator performance and workload. The background knowledge is governed by explicit guidelines and constraints. Our objectives are to:
(O1) establish the informative eye movement patterns that a computer system will use as input for recognizing the cognitive state of the operator in a real-world situation, as well as identify the background joint activity knowledge for the system to use in guiding attention and information to be displayed;
(O2) develop a situation recognition model where cognitive assistants will collaborate to provide guidance locally for individual displays and globally over the context of multiple displays as well;
(O3) deliver a multi-agent system environment that integrates intention and activity recognition in human-like cognitive assistants, eye-tracking techniques and domain specific knowledge representation to provide human-like situation recognition and assistance for the user in guiding their eye movements optimally across the display.

We reproduced MATB-II functionality in Python; we refer to this new system as ICU. We developed MATB-II from scratch as ICU because the existing versions were not suitable for our purposes: they were not easily configurable or were overly reliant on various Python libraries.
ICU allowed us to do the following:
- Easily add eye tracking -- we made use of the PsychoPy library, which allows easy calibration and communication with a wide range of eye trackers. Our system was tested with a laptop screen-based Tobii X2-30 sampling at 30-40 Hz.
- Add extra flexibility in the scheduling of events -- the NASA-issued MATB-II package requires an XML file as input with all events pre-defined; our system allows events to be configured based on probability distributions over time for easier experimental control.
- Allow agents to have access to events -- all events produced by the system can be accessed in real time, in our case by agents, which use them for decision making and to provide guidance.
- Allow additional overlays to be added to the display -- our system allows for simple additional overlays in the form of highlighting of areas and arrows.
- Allow events (in the form of extra overlays) to occur in a responsive or gaze-contingent manner -- with the help of agents we are able to deploy the highlights in response to eye position in real time.
The combination of ICU with agents monitoring and controlling events forms a more sophisticated system that we refer to as ICUa.
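As an illustration of this event/agent pattern, the sketch below is a simplified stand-in written for this summary: the event names, the GuidanceAgent class and the stochastic scheduler are assumptions and do not reflect the actual ICU/ICUa API. An agent subscribes to system and gaze events and requests a highlight overlay when a warning occurs on a panel the user is not currently looking at.

# Simplified stand-in for the event/agent pattern described above.
# Event names, the agent interface and the scheduler are illustrative assumptions;
# they do not reflect the actual ICU/ICUa API.
import random
from dataclasses import dataclass, field

@dataclass
class Event:
    kind: str                          # e.g. "gaze", "warning", "highlight"
    panel: str                         # display panel the event relates to
    data: dict = field(default_factory=dict)

class GuidanceAgent:
    """Highlights panels with warnings that the user is not currently looking at."""
    def __init__(self):
        self.gaze_panel = None

    def on_event(self, event):
        if event.kind == "gaze":
            self.gaze_panel = event.panel              # track the current gaze region
            return []
        if event.kind == "warning" and event.panel != self.gaze_panel:
            return [Event("highlight", event.panel)]   # gaze-contingent guidance
        return []

def schedule_events(n):
    """Stochastic event schedule (probability-based rather than a fixed XML script)."""
    panels = ["tracking", "fuel", "system_monitor"]
    events = []
    for _ in range(n):
        events.append(Event("gaze", random.choice(panels)))
        if random.random() < 0.3:
            events.append(Event("warning", random.choice(panels)))
    return events

agent = GuidanceAgent()
for ev in schedule_events(10):
    for action in agent.on_event(ev):
        print(f"agent -> {action.kind} on {action.panel}")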

ICU has been realised and distributed via pip, allowing researchers to adapt MATB-II to their own needs more flexibly, incorporate eye tracking and add overlays. ICUa is the extended version of ICU with agents, available for download from GitHub.
Results from the simulated user have confirmed that, given some simple assumptions on eye movement behaviour and related actions, the system functions in terms of guiding attention, i.e. performance improves for simple simulated observers when agents deploy attentional guidance.
We have set up a webpage containing information about the project and links to the following downloads.
ICU is available for download from the webpage https://dicelab-rhul.github.io/ICU/ 

In preparation: paper for the ACM ETRA conference.
The system will be tested on humans to measure the effects on human performance as soon as COVID measures allow.
This will form the basis of an article to be submitted to IEEE Transactions on Human-Machine Systems.

Keywords: human-like assistance, cognitive assistants
Links: Final report. Project webpage https://dicelab-rhul.github.io/ICU/
Project completed and the final report is under review.
 
HLC Kick-start project (Year 2)
Project Title: Toward Human-Machine Virtual Bargaining
PI: Prof Alan Bundy (University of Edinburgh)
Other partners:

Prof Nick Chater (University of Warwick) and Prof Stephen Muggleton (Imperial College) - CoIs
Eugene Philalithis (University of Edinburgh) - PostDoc

Summary of the project

This 9-month HLC Kick-Start project aimed to connect recent discoveries in the study of human coordination with current work on logical theory inference and (in particular) automated theory repair. The empirical work forming the backdrop of this project posits a highly efficient, reasoning-driven cognitive process (‘virtual bargaining’) for the spontaneous creation and update of signalling conventions in low-bandwidth contexts [1, 2]. No algorithm is specified in this literature, but modelling constraints can be extracted from both the experimental designs and the observed behaviour of human participants. Collectively, these constraints (e.g. knowledge-based inference, vocabulary adaptation, cross-task transfer and limited sampling) argue in favour of logical reasoning, as opposed to sampling-based inference.

This approach was demonstrated in our use of automated theory repair to update basic low-bandwidth signals in the select/avoid coordination game [3], where a Receiver guides a Sender to select or avoid, through spontaneous signals requiring inference over both players’ perspectives to interpret [1]. With this project, we extend the links between low-bandwidth signalling conventions and logical inference, to motivate an interdisciplinary research programme for replicating this behaviour. Despite challenges to our team from the COVID-19 crisis, this work has produced: (a) a proof-of-concept application of the ABC system [4] to deterministic signal creation and update in select/avoid, including mixed-knowledge contexts; (b) two follow-up grant proposals with HLC Network+ members.
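To make the select/avoid setting concrete, the sketch below is a toy illustration only: the shared decision rule (mark whichever set fits within the token budget, preferring the smaller one) is an assumption made for this sketch, not the ABC repair mechanism of [3, 4], and the roles are here called signaller and chooser for clarity. Both players derive the same convention from common knowledge of the counts, so a one-token signal can be read as ‘select the marked box’ or ‘avoid the marked box’ depending on the configuration.

# Toy illustration of the select/avoid coordination game. The shared decision rule
# (mark whichever set fits the token budget, preferring the smaller one) is an
# assumption of this sketch, not the ABC repair system of refs [3, 4].

def shared_convention(n_good, n_bad, n_tokens):
    """Convention both players can derive from common knowledge of the counts."""
    feasible = [(size, name) for name, size in [("select", n_good), ("avoid", n_bad)]
                if size <= n_tokens]
    return min(feasible)[1] if feasible else None

def signaller(boxes, n_tokens):
    """Knows the contents: places tokens according to the shared convention."""
    good = [i for i, b in enumerate(boxes) if b == "good"]
    bad = [i for i, b in enumerate(boxes) if b == "bad"]
    convention = shared_convention(len(good), len(bad), n_tokens)
    return convention, (good if convention == "select" else bad)

def chooser(n_boxes, n_good, n_bad, n_tokens, tokens):
    """Sees only the tokens and the counts: inverts the same reasoning."""
    convention = shared_convention(n_good, n_bad, n_tokens)
    if convention == "select":
        return sorted(tokens)                              # open the marked boxes
    return [i for i in range(n_boxes) if i not in tokens]  # open the unmarked boxes

boxes = ["good", "bad", "good"]                  # signaller's private view
convention, tokens = signaller(boxes, n_tokens=1)
opened = chooser(len(boxes), n_good=2, n_bad=1, n_tokens=1, tokens=tokens)
print(convention, tokens, opened)                # avoid [1] [0, 2]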

Follow-up grants and publication
- EPSRC Proposal: ‘Virtual convention learning’. To further explore this domain, we co-authored and submitted an EPSRC responsive-mode grant proposal led by Imperial College London (Stephen Muggleton (Lead), Alan Bundy and Nick Chater PIs) to explore creating and adapting signalling conventions through a mix of reasoning and learning. If funded, this 3.5-year grant will employ two RAs, based at Imperial College London and Edinburgh.
- ESRC Proposal: ‘Spontaneous adaptation in low-bandwidth signalling’. We aim to submit the relevant proposal as an ESRC responsive-mode grant in the second quarter of 2021. If funded, this 3-year grant will employ a single RA based in Edinburgh.
- Journal Article: ‘Automated theory inference and spontaneous convention repair’. A manuscript covering the potential of automated theory inference for the modelling of convention repair in low-bandwidth settings is currently in preparation, drawing on the work from this Kick-Start.

References:
[1] Misyak, J. B., Noguchi, T. & Chater, N. (2016). Instantaneous conventions: The emergence of flexible communicative signals. Psychological Science, 27(12), 1550–1561.
[2] Misyak, J. & Chater, N. (2017). The spontaneous creation of systems of conventions. In Proceedings of the 39th Annual Meeting of the Cognitive Science Society, London, UK, 26-29 July 2017.
[3] Bundy, A., Philalithis, E. & Li, X. (2021). Modelling virtual bargaining using logical representation change. In S. Muggleton & N. Chater (Eds.), Human-like machine intelligence. OUP, Oxford.
[4] Li, X., Bundy, A. & Smaill, A. (2018). ABC repair system for datalog-like theories. In J. Bernardino, A. Salgado & J. Filipe (Eds.), Proceedings of 10th International Conference on Knowledge Engineering and Ontology Development (pp. 333–340). SCITEPRESS.

Keywords: virtual bargaining, representation change
Links: Final report
Project completed and the final report is under review.