My research spans three main directions:
- making good decisions (automated reasoning, multiple-criteria decision-making, preference and priorities, and rule-based/symbolic AI)
- foundational issues (philosophical issues around possible failure modes of AI, ethics, meaning, and philosophy of language and of mind) and
- the theory behind practical applications (building ethical ML for healthcare, ethical frameworks for automated medical decision-making).
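As a generic illustration of the first direction, a minimal sketch of multiple-criteria decision-making using a simple weighted-sum model. The criteria, options, and weights below are invented for this example and are not taken from MARS or any of the papers listed; they only show the basic shape of aggregating several valued criteria into one choice.

```python
# Illustrative weighted-sum model for multiple-criteria decision-making.
# All names and numbers here are hypothetical examples.

from typing import Dict

def weighted_score(option_scores: Dict[str, float],
                   weights: Dict[str, float]) -> float:
    """Aggregate per-criterion scores into a single value via fixed weights."""
    return sum(weights[c] * option_scores[c] for c in weights)

def best_option(options: Dict[str, Dict[str, float]],
                weights: Dict[str, float]) -> str:
    """Return the option whose weighted score is highest."""
    return max(options, key=lambda o: weighted_score(options[o], weights))

# Hypothetical example: score each candidate action on three criteria
# (higher is better on every criterion, including "risk" read as safety).
weights = {"patient_benefit": 0.5, "risk": 0.3, "autonomy": 0.2}
options = {
    "treat_now": {"patient_benefit": 0.9, "risk": 0.4, "autonomy": 0.6},
    "wait":      {"patient_benefit": 0.5, "risk": 0.9, "autonomy": 0.8},
    "refer":     {"patient_benefit": 0.7, "risk": 0.7, "autonomy": 0.7},
}
print(best_option(options, weights))  # the action with the highest total
```

Real frameworks in this space go well beyond a linear aggregation (e.g. handling incomparable values or priorities between criteria); this sketch only fixes the vocabulary.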
2021 Badea, C., and Gilpin, L.H. Establishing Meta-Decision-Making for AI: An Ontology of Relevance, Representation and Reasoning.
2021 Hindocha, S., and Badea, C. Moral exemplars for the virtuous machine: the clinician’s role in ethical artificial intelligence for healthcare.
2021 Badea, C., and Artus, G. Morality, machines and the interpretation problem: A value-based, Wittgensteinian approach to building moral agents.
2021 Post, B., Badea, C., and Brett, S. Can Breaking Bad News be Justified? An Investigation Using the Felicific Calculus.
2020 Seeamber, R., and Badea, C. A three-tier hierarchy for building morality into artificial agents.
2020 Badea, C. Have a break from making decisions, have a MARS: The multi-valued action reasoning system.
2017 Badea, C., and Kuhn, L. The Multi-valued Action Reasoning System (MARS) - A value-based decision-making framework for practical reasoning in Machine Ethics.