Which way forward?


With the advent of technology that can learn and change itself, and the integration of vast data sources tracking every detail of human lives, engineering now entails decision-making with complex moral implications and global impact. As part of daily practice, technologists face value-laden tensions concerning privacy, justice, transparency, wellbeing, and human rights, as well as questions that strike at the very nature of what it is to be human.

We recently edited a Special Issue of IEEE Transactions on Technology and Society on “After Covid-19: Crises, Ethics, and Socio-Technical Change”.

"Our research seeks to understand the paths toward a future in which technology benefits all of humankind and the planet. We collaborate with social scientists to develop practical methods and socio-technical solutions that equip engineers and designers with the tools necessary for practicing responsibly through every step of the development process."


Responsible Tech Design Library

Find out more about tools and methods for ethical practice in technology design.



  • Journal article
    Sadek M, Calvo R, Mougenot C, 2023,

    Designing value-sensitive AI: a critical review and recommendations for socio-technical design processes

    , AI and Ethics, ISSN: 2730-5961
  • Journal article
    Sadek M, Calvo R, Mougenot C, 2023,

    Co-designing conversational agents: A comprehensive review and recommendations for best practices

    , Design Studies, ISSN: 0142-694X
  • Conference paper
Espinoza Lau-Choleon F, Cook D, Butler C, Calvo R et al., 2023,

    Supporting dementia caregivers in Peru through chatbots: generative AI vs structured conversations

    , 36th International BCS Human-Computer Interaction Conference, Publisher: Association for Computing Machinery (ACM)

    In Peru, dementia caregivers face burnout, depression, stress, and financial strain. Addressing their needs involves tackling the intricacies of caregiving and managing emotional burdens. Chatbots can serve as a viable support mechanism in regions with limited resources. This study delves into the perceptions of dementia caregivers in Peru regarding a chatbot tailored to offer care navigation and emotional support. We divided the study into three phases: the initial stage encompassed engaging stakeholders to define design requirements for the chatbot; the second stage focused on the creation of ‘Ana’, a chatbot for dementia caregivers; and the final stage assessed the chatbot through interviews and a caregiver satisfaction survey. ‘Ana’ was tested in two configurations - one employed pre-defined conversation patterns, while the other harnessed generative AI for more dynamic responses. The findings reveal that caregivers seek immediate access to information on handling behavioural symptoms and a platform for emotional release. Moreover, participants preferred the generative AI alternative of Ana, as it was perceived to be more empathic and human-like. The participants valued the generative approach despite knowing the potential risk of receiving inaccurate information.

  • Conference paper
Widjaya MA, Bermudez J, Moradbakhti L, Calvo R et al., 2023,

    Drivers of trust in generative AI-powered voice assistants: the role of references

    , 36th International BCS Human-Computer Interaction Conference

    The boom in generative artificial intelligence (AI) and continuing growth of Voice Assistants (VAs) suggests their trajectories will converge. This conjecture aligns with the development of AI-driven conversational agents, which aim to utilise advanced natural language processing (NLP) methods to enhance the capabilities of voice assistants. However, design guidelines for VAs prioritise maximum efficiency by advocating for the use of concise answers. This poses a conflict with the challenges around generative AI, such as inaccuracies and misinterpretation, as shorter responses may not adequately provide users with meaningful information. AI-VA systems can adapt drivers of trust formation, such as references and authorship, to improve credibility. A better understanding of user behaviour when using the system is needed to develop revised design recommendations for AI-powered VA systems. This paper reports an online survey of 256 participants residing in the U.K. and nine follow-up interviews, where user behaviour is investigated to identify drivers of trust in the context of obtaining digital information from a generative AI-based VA system. Adding references is promising as a tool for increasing trust in systems producing text, yet we found no evidence that the inclusion of references in a VA response contributed towards the perceived reliability of, or trust towards, the system. We examine further variables driving user trust in AI-powered VA systems.

  • Conference paper
    Sadek M, Calvo RA, Mougenot C, 2023,

    Trends, challenges and processes in conversational agent design: exploring practitioners’ views through semi-structured interviews

    , CUI '23: ACM conference on Conversational User Interfaces, Publisher: ACM, Pages: 1-10

    The aim of this study is to explore the challenges and experiences of conversational agent (CA) practitioners in order to highlight their practical needs and bring them into consideration within the scholarly sphere. A range of data scientists, conversational designers, executive managers and researchers shared their opinions and experiences through semi-structured interviews. They were asked about emerging trends, the challenges they face, and the design processes they follow when creating CAs. In terms of trends, findings included mixed feelings regarding no-code solutions and a desire for a separation of roles. The challenges mentioned included a lack of socio-technical tools and conversational archetypes. Finally, practitioners followed different design processes and did not use the design processes described in the academic literature. These findings were analyzed to establish links between practitioners’ insights and discussions in related literature. The goal of this analysis is to highlight research-practice gaps by synthesising five practitioner needs that are not currently being met. By highlighting these research-practice gaps and foregrounding the challenges and experiences of CA practitioners, we can begin to understand the extent to which emerging literature is influencing industrial settings and where more research is needed to better support CA practitioners in their work.

  • Book chapter
    Peters D, Calvo RA, 2023,

    Self-Determination Theory and Technology Design

    , The Oxford Handbook of Self-Determination Theory, Editors: Ryan, Publisher: Oxford University Press, ISBN: 9780197600047
  • Conference paper
Ballou N, Deterding S, Tyack A, Mekler ED, Calvo RA, Peters D, Villalobos Zúñiga G, Türkay S et al., 2022,

    Self-determination theory in HCI: shaping a research agenda

    , New York, CHI Conference on Human Factors in Computing Systems (CHI ’22), Publisher: ACM, Pages: 1-6

    Self-determination theory (SDT) has become one of the most frequently used and well-validated theories in HCI research, modelling the relation of basic psychological needs, intrinsic motivation, positive experience and wellbeing. This makes it a prime candidate for a ‘motor theme’ driving more integrated, systematic, theory-guided research. However, its use in HCI has remained superficial and disjointed across various application domains like games, health and wellbeing, or learning. This workshop therefore convenes researchers across HCI to co-create a research agenda on how SDT-informed HCI research can maximise its progress in the coming years.

  • Journal article
Porat T, Burnell R, Calvo R, Ford E, Paudyal P, Baxter W, Parush A et al., 2021,

    'Vaccine Passports’ may backfire: findings from a cross-sectional study in the UK and Israel on willingness to vaccinate against Covid-19

    , Vaccines, Vol: 9, Pages: 1-11, ISSN: 2076-393X

    Domestic “vaccine passports” are being implemented across the world, as a way of increasing vaccinated people’s freedom of movement and to encourage vaccination. However, these vaccine passports may affect people’s vaccination decisions in unintended and undesirable ways. This cross-sectional study investigated whether people’s willingness and motivation to get vaccinated relate to their psychological needs (autonomy, competence and relatedness), and how vaccine passports might affect these needs. Across two countries and 1358 participants we found that need frustration – particularly autonomy frustration – was associated with lower willingness to vaccinate and with a shift from self-determined to external motivation. In Israel (a country with vaccine passports), people reported greater autonomy frustration than in the UK (a country without vaccine passports). Our findings suggest that control measures, such as domestic vaccine passports, may have detrimental effects on people’s autonomy, motivation, and willingness to get vaccinated. Policies should strive to achieve a highly vaccinated population by supporting individuals’ autonomous motivation to be vaccinated and using messages of autonomy and relatedness, rather than applying pressure and external controls.

  • Conference paper
Pillai AG, Kocaballi AB, Leong TW, Calvo RA, Parvin N, Shilton K, Waycott J, Fiesler C, Havens JC, Ahmadpour N et al., 2021,

    Co-designing Resources for Ethics Education in HCI

    , CHI Conference on Human Factors in Computing Systems, Publisher: Association for Computing Machinery (ACM)
  • Book chapter
Calvo R, Peters D, Vold K, Ryan R et al., 2020,

    Supporting Human Autonomy in AI Systems: A Framework for Ethical Enquiry

    , Ethics of Digital Well-Being: A Multidisciplinary Approach, Editors: Burr, Floridi, Publisher: Springer, Cham, Pages: 31-54, ISBN: 978-3-030-50585-1

    Autonomy has been central to moral and political philosophy for millennia, and has been positioned as a critical aspect of both justice and wellbeing. Research in psychology supports this position, providing empirical evidence that autonomy is critical to motivation, personal growth and psychological wellness. Responsible AI will require an understanding of, and ability to effectively design for, human autonomy (rather than just machine autonomy) if it is to genuinely benefit humanity. Yet the effects on human autonomy of digital experiences are neither straightforward nor consistent, and are complicated by commercial interests and tensions around compulsive overuse. This multi-layered reality requires an analysis that is itself multidimensional and that takes into account human experience at various levels of resolution. We borrow from HCI and psychological research to apply a model (“METUX”) that identifies six distinct spheres of technology experience. We demonstrate the value of the model for understanding human autonomy in a technology ethics context at multiple levels by applying it to the real-world case study of an AI-enhanced video recommender system. In the process we argue for the following three claims: (1) There are autonomy-related consequences to algorithms representing the interests of third parties, and they are not impartial and rational extensions of the self, as is often perceived; (2) Designing for autonomy is an ethical imperative critical to the future design of responsible AI; and (3) Autonomy-support must be analysed from at least six spheres of experience in order to appropriately capture contradictory and downstream effects.

This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.
