Ethics, Privacy, AI in Society
As AI becomes more capable and is adopted more widely in industry and across society, the ethical, political, legal and philosophical issues it raises become more pressing, and practitioners of AI should be aware of them. The module divides into three parts. The first, on the ethics of AI, concerns ethical and philosophical problems raised by AI, such as the alignment problem, the technological singularity, and the attribution of responsibility for autonomous agents. The second, on fairness and bias in ML, concerns conceptions of algorithmic fairness and bias, the fairness/accuracy trade-off, and practical approaches to both. The third, on law, presents the GDPR and its impact on AI involving personal data, as a key illustrative example of an important law affecting AI/ML; related laws and regulations may also be presented.
Upon completion of the module, students will be able to:
1. Evaluate the ethical implications of developments in AI with respect to underlying philosophical ideas.
2. Engage critically, in an informed fashion, with debates on AI safety and existential risk mitigation.
3. Detect algorithmic bias in machine learning decisions and measure it using several common metrics.
4. Select appropriate algorithmic fairness measures for the task at hand, choose among pre-, in-, and post-processing mitigation methods, and perform empirical analysis using appropriate libraries.
5. Analyze the social, ethical and legal (particularly data protection) barriers to the take-up of AI/ML technologies, including under the GDPR.
6. Assess the issues relevant to GDPR-compliant ML technology design and the consequences of non-compliance with legislation such as the GDPR.
The module consists of three parts.
1. Ethics in philosophy: utilitarianism and other approaches. Agency, moral agency, artificial agency. AGI. The singularity. The simulation hypothesis. The alignment problem. Existential risk. Autonomous agents and responsibility. Ethical issues in AI and regulation.
2. Definitions of fairness in ML. The evaluation of fairness metrics. Ways to enforce fairness in ML models. Representation learning: traditional and adversarial approaches. The analysis of bias in ML datasets, including use of fairness metrics for the same.
3. Outline of laws in practice, with focus on GDPR. Roles, terms, and risks under GDPR. Core principles of lawfulness, fairness and transparency. Data minimisation, purpose limitation, accuracy, integrity, storage limitation, confidentiality; the requirement for a “legal basis”. The application of these rules to AI/ML in practice. GDPR applied to different phases of ML use. Overview of selected other key laws relevant to ML.
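As an illustrative sketch of the kind of fairness-metric computation covered in part 2 (this example is not taken from the module materials; the function names, toy data, and assumption of a binary protected attribute and binary predictions are for illustration only), two common metrics can be computed directly with NumPy:

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates between the two groups."""
    rate_0 = y_pred[group == 0].mean()
    rate_1 = y_pred[group == 1].mean()
    return abs(rate_0 - rate_1)

def equal_opportunity_difference(y_true, y_pred, group):
    """Absolute gap in true-positive rates (recall) between the two groups."""
    tpr = lambda g: y_pred[(group == g) & (y_true == 1)].mean()
    return abs(tpr(0) - tpr(1))

# Toy data: predictions for eight individuals with a binary protected attribute.
y_true = np.array([1, 1, 0, 0, 1, 1, 0, 0])
y_pred = np.array([1, 0, 0, 0, 1, 1, 1, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])

print(demographic_parity_difference(y_pred, group))          # 0.5
print(equal_opportunity_difference(y_true, y_pred, group))   # 0.5
```

Libraries such as Fairlearn and AIF360 provide these and other metrics, along with the pre-, in-, and post-processing mitigation methods mentioned above; the hand-rolled versions here simply make the definitions explicit.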
The module is taught through a combination of lectures, tutorials, lab sessions, and an in-class test. All three parts will have lectures. The part on ethics and AI will be a mixture of lectures and tutorials; the tutorials will involve responding to readings and lecture content. The part on fairness in ML will have lectures and tutorials, as well as unassessed lab sessions with Python exercises. The part on law will have lectures and associated tutorials. Teaching will be supported by Q&A on an online forum, and TAs will assist with tutorials and marking.
The module is assessed solely by coursework (CW); there is no exam. There are three pieces of CW, one for each part of the module (ethics; fairness in ML; law). The ethics CW consists of written questions and answers on themes or readings discussed in the lectures and tutorials. The fairness in ML CW is a programming exercise on algorithmic fairness and bias, with an accompanying report. The law CW is an in-class test requiring the analysis of example situations involving privacy, the GDPR, and AI. The weighting of the three pieces of CW matches the weighting of the parts within the module.
All three pieces of CW will receive individual feedback. Marks and feedback will be returned within 2 weeks.
Module leaders
Dr Kuan Hon
Dr Robert Craven
Dr Novi Quadrianto
Professor Michael Huth