Imperial College London

Cosmin Badea

Central Faculty, Centre for Languages, Culture and Communication

Part Time Lecturer in Philosophy
 
 
 

Contact

 

cosmin.badea10

 
 

Location

 

306 Huxley Building, South Kensington Campus



Publications


9 results found

Seeamber R, Badea C, 2023, If Our Aim Is to Build Morality into an Artificial Agent, How Might We Begin to Go about Doing So?, IEEE Intelligent Systems, Vol: 38, Pages: 35-41, ISSN: 1541-1672

As AI becomes pervasive in most fields, from health care to autonomous driving, it is essential that we find successful ways of building morality into our machines, especially for decision making. However, the question of what it means to be moral is still debated, particularly in the context of AI. In this article, we highlight the different aspects that should be considered when building moral agents, including the most relevant moral paradigms and challenges. We also discuss the top-down and bottom-up approaches to design and the role of emotion and sentience in morality. We then propose solutions, including a hybrid approach to design and a hierarchical approach to combining moral paradigms. We emphasize how governance and policy are becoming ever more critical in AI ethics and in ensuring that the tasks we set for moral agents are attainable, that ethical behavior is achieved, and that we obtain good AI.

Journal article
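
The abstract above mentions a hierarchical approach to combining moral paradigms. The sketch below is purely illustrative and not taken from the paper: it imagines a decision procedure that checks hard, rule-based (deontological) constraints first and only then ranks the surviving options by a consequentialist score. The paradigm rules, outcome values and action names are all invented for this example.

    # Illustrative sketch only: hard constraints are checked first, and a
    # consequentialist score ranks the options that survive. All rules,
    # values and actions below are hypothetical.

    def violates_constraints(action, constraints):
        """Return True if the action breaks any hard rule."""
        return any(rule(action) for rule in constraints)

    def expected_utility(action, outcomes):
        """Consequentialist score: probability-weighted value of outcomes."""
        return sum(p * value for p, value in outcomes[action])

    def choose_action(actions, constraints, outcomes):
        """Apply paradigms hierarchically: filter by rules, then rank by utility."""
        permissible = [a for a in actions if not violates_constraints(a, constraints)]
        if not permissible:
            return None  # no ethically permissible option remains
        return max(permissible, key=lambda a: expected_utility(a, outcomes))

    # Hypothetical usage
    actions = ["disclose", "withhold"]
    constraints = [lambda a: a == "withhold"]  # e.g. a rule requiring honesty
    outcomes = {
        "disclose": [(0.8, 5), (0.2, -2)],
        "withhold": [(1.0, 1)],
    }
    print(choose_action(actions, constraints, outcomes))  # -> "disclose"

One way to read the paper's hybrid proposal is precisely this kind of ordering: higher-priority paradigms prune the option space before lower-priority ones score it.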

Bolton WJ, Badea C, Georgiou P, Holmes A, Rawson TM, et al., 2022, Developing moral AI to support decision-making about antimicrobial use, Nature Machine Intelligence, Vol: 4, Pages: 912-915, ISSN: 2522-5839

Journal article

Post B, Badea C, Faisal A, Brett S, et al., 2022, Breaking bad news in the era of artificial intelligence and algorithmic medicine: an exploration of disclosure and its ethical justification using the hedonic calculus, AI and Ethics, ISSN: 2730-5961

An appropriate ethical framework around the use of Artificial Intelligence (AI) in healthcare has become a key desirable with the increasingly widespread deployment of this technology. Advances in AI hold the promise of improving the precision of outcome prediction at the level of the individual. However, the addition of these technologies to patient–clinician interactions, as with any complex human interaction, has potential pitfalls. While physicians have always had to carefully consider the ethical background and implications of their actions, detailed deliberations around fast-moving technological progress may not have kept up. We use a common but key challenge in healthcare interactions, the disclosure of bad news (likely imminent death), to illustrate how the philosophical framework of the 'Felicific Calculus' developed in the eighteenth century by Jeremy Bentham, may have a timely quasi-quantitative application in the age of AI. We show how this ethical algorithm can be used to assess, across seven mutually exclusive and exhaustive domains, whether an AI-supported action can be morally justified.

Journal article
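
The paper above applies Bentham's Felicific Calculus across seven domains to assess whether an AI-supported action can be morally justified. The following is a minimal, hypothetical sketch of such a quasi-quantitative assessment, assuming the classical seven domains (intensity, duration, certainty, propinquity, fecundity, purity, extent) together with an invented scoring scale and threshold; it is not the authors' implementation.

    # Minimal sketch, not the authors' implementation: scoring a single
    # AI-supported action across Bentham's seven domains of the Felicific
    # Calculus. Scores and the justification threshold are invented.

    DOMAINS = ("intensity", "duration", "certainty", "propinquity",
               "fecundity", "purity", "extent")

    def hedonic_score(assessment):
        """Sum signed scores over the seven domains (+ pleasure, - pain)."""
        missing = set(DOMAINS) - assessment.keys()
        if missing:
            raise ValueError(f"missing domains: {missing}")
        return sum(assessment[d] for d in DOMAINS)

    def morally_justified(assessment, threshold=0.0):
        """Treat the action as justified if its net score exceeds the threshold."""
        return hedonic_score(assessment) > threshold

    # Hypothetical assessment of disclosing a poor prognosis with AI support
    assessment = {
        "intensity": -2, "duration": 3, "certainty": 2, "propinquity": 1,
        "fecundity": 2, "purity": -1, "extent": 1,
    }
    print(hedonic_score(assessment), morally_justified(assessment))  # 6 True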

Bolton W, Badea C, Georgiou P, Holmes A, Rawson T, et al., 2022, Developing Moral AI to Support Antimicrobial Decision Making, Nature Machine Intelligence, ISSN: 2522-5839

Journal article

Post B, Badea C, Faisal A, Brett S, et al., 2022, Breaking Bad News in the Era of Artificial Intelligence and Algorithmic Medicine: An Exploration of Disclosure and its Ethical Justification using the Hedonic Calculus

An appropriate ethical framework around the use of Artificial Intelligence (AI) in healthcare has become a key desirable with the increasingly widespread deployment of this technology. Advances in AI hold the promise of improving the precision of outcome prediction at the level of the individual. However, the addition of these technologies to patient-clinician interactions, as with any complex human interaction, has potential pitfalls. While physicians have always had to carefully consider the ethical background and implications of their actions, detailed deliberations around fast-moving technological progress may not have kept up. We use a common but key challenge in healthcare interactions, the disclosure of bad news (likely imminent death), to illustrate how the philosophical framework of the 'Felicific Calculus' developed in the 18th century by Jeremy Bentham, may have a timely quasi-quantitative application in the age of AI. We show how this ethical algorithm can be used to assess, across seven mutually exclusive and exhaustive domains, whether an AI-supported action can be morally justified.

Working paper

Badea C, 2022, Have a Break from Making Decisions, Have a MARS: The Multi-valued Action Reasoning System, Artificial Intelligence XXXIX, AI 2022, Vol: 13652, Pages: 359-366, ISSN: 0302-9743

Journal article

Badea C, Artus G, 2022, Morality, Machines, and the Interpretation Problem: A Value-based, Wittgensteinian Approach to Building Moral Agents, Artificial Intelligence XXXIX, AI 2022, Vol: 13652, Pages: 124-137, ISSN: 0302-9743

Journal article

Badea C, Gilpin L, 2021, Establishing Meta-Decision-Making for AI: An Ontology of Relevance, Representation and Reasoning, The AAAI-21 Fall Symposium on Cognitive Systems for Anticipatory Thinking - 3rd Wave Autonomy

Conference paper

Hindocha S, Badea C, 2021, Moral exemplars for the virtuous machine: the clinician’s role in ethical artificial intelligence for healthcare, AI and Ethics, ISSN: 2730-5953

Artificial Intelligence (AI) continues to pervade several aspects of healthcare with pace and scale. The need for an ethical framework in AI to address this has long been recognized, but to date most efforts have delivered only high-level principles and value statements. Herein, we explain the need for an ethical framework in healthcare AI, the different moral theories that may serve as its basis, the rationale for why we believe this should be built around virtue ethics, and explore this in the context of five key ethical concerns for the introduction of AI in healthcare. Some existing work has suggested that AI may replace clinicians. We argue to the contrary, that the clinician will not be replaced, nor their role attenuated. Rather, they will be integral to the responsible design, deployment, and regulation of AI in healthcare, acting as the moral exemplar for the virtuous machine. We collate relevant points from the literature and formulate our own to present a coherent argument for the central role of clinicians in ethical AI and propose ideas to help advance efforts to employ ML-based solutions within healthcare. Finally, we highlight the responsibility of not only clinicians, but also data scientists, tech companies, ethicists, and regulators to act virtuously in realising the vision of ethical and accountable AI in healthcare.

Journal article

This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.
