Imperial College London

Cosmin Badea

Faculty of Engineering, Department of Computing

Casual - Visiting lecturer, guest speaker, external examiner







306 Huxley Building, South Kensington Campus






5 results found

Post B, Badea C, Faisal A, Brett S et al., 2022, Breaking Bad News in the Era of Artificial Intelligence and Algorithmic Medicine: An Exploration of Disclosure and its Ethical Justification using the Hedonic Calculus

An appropriate ethical framework around the use of Artificial Intelligence (AI) in healthcare has become a key desirable with the increasingly widespread deployment of this technology. Advances in AI hold the promise of improving the precision of outcome prediction at the level of the individual. However, the addition of these technologies to patient-clinician interactions, as with any complex human interaction, has potential pitfalls. While physicians have always had to carefully consider the ethical background and implications of their actions, detailed deliberations around fast-moving technological progress may not have kept up. We use a common but key challenge in healthcare interactions, the disclosure of bad news (likely imminent death), to illustrate how the philosophical framework of the 'Felicific Calculus' developed in the 18th century by Jeremy Bentham, may have a timely quasi-quantitative application in the age of AI. We show how this ethical algorithm can be used to assess, across seven mutually exclusive and exhaustive domains, whether an AI-supported action can be morally justified.

Working paper

Badea C, Gilpin L, 2021, Establishing Meta-Decision-Making for AI: An Ontology of Relevance, Representation and Reasoning, The AAAI-21 Fall Symposium on Cognitive Systems for Anticipatory Thinking - 3rd Wave Autonomy

Conference paper

Hindocha S, Badea C, 2021, Moral exemplars for the virtuous machine: the clinician’s role in ethical artificial intelligence for healthcare, AI and Ethics, ISSN: 2730-5953

Artificial Intelligence (AI) continues to pervade several aspects of healthcare with pace and scale. The need for an ethical framework in AI to address this has long been recognized, but to date most efforts have delivered only high-level principles and value statements. Herein, we explain the need for an ethical framework in healthcare AI, the different moral theories that may serve as its basis, the rationale for why we believe this should be built around virtue ethics, and explore this in the context of five key ethical concerns for the introduction of AI in healthcare. Some existing work has suggested that AI may replace clinicians. We argue to the contrary, that the clinician will not be replaced, nor their role attenuated. Rather, they will be integral to the responsible design, deployment, and regulation of AI in healthcare, acting as the moral exemplar for the virtuous machine. We collate relevant points from the literature and formulate our own to present a coherent argument for the central role of clinicians in ethical AI and propose ideas to help advance efforts to employ ML-based solutions within healthcare. Finally, we highlight the responsibility of not only clinicians, but also data scientists, tech companies, ethicists, and regulators to act virtuously in realising the vision of ethical and accountable AI in healthcare.

Journal article

Badea C, Artus G, Morality, Machines and the Interpretation Problem: A value-based, Wittgensteinian approach to building Moral Agents

We argue that the attempt to build morality into machines is subject to what we call the Interpretation Problem, whereby any rule we give the machine is open to infinite interpretation in ways that we might morally disapprove of, and that the interpretation problem in Artificial Intelligence is an illustration of Wittgenstein's general claim that no rule can contain the criteria for its own application. Using games as an example, we attempt to define the structure of normative spaces and argue that any rule-following within a normative space is guided by values that are external to that space and which cannot themselves be represented as rules. In light of this problem, we analyse the types of mistakes an artificial moral agent could make and we make suggestions about how to build morality into machines by getting them to interpret the rules we give in accordance with these external values, through explicit moral reasoning and the presence of structured values, the adjustment of causal power assigned to the agent and interaction with human agents, such that the machine develops a virtuous character and the impact of the interpretation problem is minimised.

Journal article

Badea C, Have a break from making decisions, have a MARS: The Multi-valued Action Reasoning System

The Multi-valued Action Reasoning System (MARS) is an automated value-based ethical decision-making model for artificial agents (AI). Given a set of available actions and an underlying moral paradigm, by employing MARS one can identify the ethically preferred action. It can be used to implement and model different ethical theories, different moral paradigms, as well as combinations of such, in the context of automated practical reasoning and normative decision analysis. It can also be used to model moral dilemmas and discover the moral paradigms that result in the desired outcomes therein. In this paper, we give a condensed description of MARS, explain its uses, and comparatively place it in the existing literature.

Journal article

This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.
