A select group of thought leaders, experts and industry specialists gathered at Hub Culture ICEhouse at Davos 2022 to discuss delivering AI ethics. The event was held in partnership with Swiss Cognitive. Here are the key findings from the conversation.
We discussed some of the burning questions of artificial intelligence (AI) ethics. In particular, the group was asked to reach conclusions around three key questions:
- How do we decide on which values we embed into AI?
- What can we do to bring about global norms?
- What concrete steps do we intend to take to foster ethical AI?
The answers that we reached were:
Universality: AI ethics was identified as universal, because it is an issue that involves, and is shared by, all people and things in the world.
Transparency: As a tool for collaboration, transparency is critical as AI evolves. Multiple experts agreed it is more important to be working towards progress openly than it is to be "getting everything right" privately.
Evolutionary not deterministic systems: Ethical AI needs to evolve with the changing demands and priorities of society, rather than being a set of rigid rules that are placed at one single point in time.
Literacy and education: Existing education around AI ethics requires reform and growth so that more of the population becomes knowledgeable about the subject. This education must include frameworks for thinking about AI ethics and skills across industries to support the endeavour, both of which are crucial for effective discourse. The need is not limited to AI; it is vital for the future of technology as a whole.
Human-centricity: We are not inventing the technology for abstract purposes; the goal is to empower human society. Human-centricity is therefore a critical aspect for consideration.
Inclusive representation and bias: Time and monetary investment are required to better understand people who are less powerful and under-represented, both in terms of their active voices on this subject and their presence in the data sets fuelling AI. Virtual reality was raised as a promising tool for reducing bias.
Disruption (break the system): Becoming entrenched in a particular approach to AI, or in the lack of one, makes altering course later slow and costly; breaking out of that entrenchment is nonetheless critical for long-term progress. We may have to accept that some things will not work as well, at least initially, until we get through a transition point to a new plateau of innovation.
Accountability: Measurement and enforcement of ethics remain open issues, as does the question of whether the people making these decisions are the right people to be doing so. Regardless of whether an umbrella set of ethics is ultimately adopted, laws are required to ensure its application. In some instances, the application and structuring of ethical codes might sit better under local legal systems than under a universal set of rules.
Unification: Coherency is needed. A siloed approach to AI ethics was not deemed promising, because each aspect of AI ethics intersects with many others.