3 results found
Hindocha S, Badea C, 2021, Moral exemplars for the virtuous machine: the clinician’s role in ethical artificial intelligence for healthcare, AI and Ethics, ISSN: 2730-5953
Abstract: Artificial Intelligence (AI) continues to pervade several aspects of healthcare with pace and scale. The need for an ethical framework in AI to address this has long been recognized, but to date most efforts have delivered only high-level principles and value statements. Herein, we explain the need for an ethical framework in healthcare AI, the different moral theories that may serve as its basis, and the rationale for why we believe this should be built around virtue ethics, and explore this in the context of five key ethical concerns for the introduction of AI in healthcare. Some existing work has suggested that AI may replace clinicians. We argue to the contrary, that the clinician will not be replaced, nor their role attenuated. Rather, they will be integral to the responsible design, deployment, and regulation of AI in healthcare, acting as the moral exemplar for the virtuous machine. We collate relevant points from the literature and formulate our own to present a coherent argument for the central role of clinicians in ethical AI and propose ideas to help advance efforts to employ ML-based solutions within healthcare. Finally, we highlight the responsibility of not only clinicians, but also data scientists, tech companies, ethicists, and regulators to act virtuously in realising the vision of ethical and accountable AI in healthcare.
Badea C, Artus G, Morality, Machines and the Interpretation Problem: A value-based, Wittgensteinian approach to building Moral Agents
We argue that the attempt to build morality into machines is subject to what we call the Interpretation Problem, whereby any rule we give the machine is open to infinite interpretation in ways that we might morally disapprove of, and that the interpretation problem in Artificial Intelligence is an illustration of Wittgenstein's general claim that no rule can contain the criteria for its own application. Using games as an example, we attempt to define the structure of normative spaces and argue that any rule-following within a normative space is guided by values that are external to that space and which cannot themselves be represented as rules. In light of this problem, we analyse the types of mistakes an artificial moral agent could make and we make suggestions about how to build morality into machines by getting them to interpret the rules we give in accordance with these external values, through explicit moral reasoning and the presence of structured values, the adjustment of causal power assigned to the agent, and interaction with human agents, such that the machine develops a virtuous character and the impact of the interpretation problem is minimised.
Badea C, Have a break from making decisions, have a MARS: The Multi-valued Action Reasoning System
The Multi-valued Action Reasoning System (MARS) is an automated value-based ethical decision-making model for artificial agents (AI). Given a set of available actions and an underlying moral paradigm, by employing MARS one can identify the ethically preferred action. It can be used to implement and model different ethical theories, different moral paradigms, as well as combinations of such, in the context of automated practical reasoning and normative decision analysis. It can also be used to model moral dilemmas and discover the moral paradigms that result in the desired outcomes therein. In this paper, we give a condensed description of MARS, explain its uses, and comparatively place it in the existing literature.
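To make the "actions plus moral paradigm" idea concrete, here is a minimal toy sketch of value-based action selection in the spirit of what the abstract describes. The value names, numeric scores, and max-weighted-sum rule are illustrative assumptions only; they are not drawn from the MARS paper itself.

```python
# Toy value-based action selection. All names, weights, and the
# weighted-sum aggregation rule are hypothetical assumptions for
# illustration, not the actual MARS model.

# A "moral paradigm" represented as weights over named values (assumed).
paradigm = {"honesty": 2.0, "non_maleficence": 3.0, "autonomy": 1.0}

# Candidate actions scored against each value (assumed toy data).
actions = {
    "disclose_diagnosis": {"honesty": 1.0, "non_maleficence": -0.2, "autonomy": 0.8},
    "withhold_diagnosis": {"honesty": -1.0, "non_maleficence": 0.5, "autonomy": -0.6},
}

def preferred_action(actions, paradigm):
    """Return the action maximising the paradigm-weighted sum of value scores."""
    def score(value_scores):
        return sum(paradigm.get(v, 0.0) * s for v, s in value_scores.items())
    return max(actions, key=lambda a: score(actions[a]))

print(preferred_action(actions, paradigm))  # → disclose_diagnosis
```

Swapping in a different `paradigm` (e.g. weighting non-maleficence far above honesty) can flip the preferred action, which is the sense in which such a model lets one explore how different moral paradigms resolve the same dilemma.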
This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.