Imperial College London

Cosmin Badea

Faculty of Engineering, Department of Computing

Casual - Student demonstrator - lower rate

Contact

 

cosmin.badea10

Location

 

306 Huxley Building, South Kensington Campus

Summary

 

Ethics, Privacy, AI in Society

Role

Course Leader


Aims


"Overall, to give students the tools needed to reason and make decisions about the ethical, social and legal aspects of Artificial Intelligence.

More specifically, the module is taught in three parts: (i) ethics in AI, (ii) algorithmic fairness in ML, and (iii) law and AI.

  • (i) To present the basic ethical frameworks used in current approaches to ethics in AI and the methods used in designing artificial agents that conform to instances of such frameworks, and to equip students with the analytical skills needed to reason about ethical dilemmas in AI.
  • (ii) To present ways of measuring and preventing biased decision-making by ML models, along with the accuracy/fairness trade-off, and to give students the practical tools to define and measure the fairness of ML algorithms.
  • (iii) To present the EU General Data Protection Regulation (GDPR) and its impact on AI involving personal data, as a key illustrative example of an important law affecting AI/ML, given the risk of large potential fines and compensation claims under the GDPR in practice; other selected laws (e.g., on anti-discrimination) will also be highlighted."


Learning Outcomes

"Upon completion of this module students will be able to: 

  • Evaluate the ethical and social implications of developments in machine learning and artificial intelligence and critique the technology of autonomous systems. 
  • Incorporate ethical principles of the key ethical frameworks into the design of artificial agents, according to standard methodologies. 
  • Analyse the social, ethical and legal (particularly data protection) barriers to the take-up of AI/ML technologies, including under the GDPR. 
  • Assess the issues relevant to GDPR-compliant ML technology design and the consequences of non-compliance with legislation such as the GDPR. 
  • Detect algorithmic bias in machine learning decisions and measure it based on several common metrics. 
  • Reason about and apply the accuracy-fairness trade-off of machine learning models. 
  • Evaluate appropriate algorithmic fairness measures to address the bias depending on the task, choose among pre-, in-, or post-processing methods, and perform empirical analysis using appropriate libraries."
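
The outcomes above refer to measuring bias with common metrics and reasoning about the accuracy/fairness trade-off. As a minimal sketch of what such a measurement looks like, the snippet below computes two of the standard metrics, demographic parity difference and the equalized-odds gap, in plain NumPy. The function names and toy data are my own, not course material; libraries such as Fairlearn or AIF360 provide maintained implementations of these and many related metrics.

    import numpy as np

    def demographic_parity_difference(y_pred, group):
        """Absolute gap in positive-prediction rate between the two groups."""
        return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

    def equalized_odds_difference(y_true, y_pred, group):
        """Largest gap across groups in TPR (y_true == 1) or FPR (y_true == 0)."""
        gaps = []
        for label in (0, 1):
            mask = y_true == label
            rate_a = y_pred[mask & (group == 0)].mean()
            rate_b = y_pred[mask & (group == 1)].mean()
            gaps.append(abs(rate_a - rate_b))
        return max(gaps)

    # Toy predictions for individuals from two demographic groups.
    y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
    y_pred = np.array([1, 0, 1, 0, 1, 0, 1, 1])
    group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])

    print(demographic_parity_difference(y_pred, group))      # 0.25
    print(equalized_odds_difference(y_true, y_pred, group))  # ~0.67

Driving either gap toward zero, whether by pre-, in-, or post-processing, generally costs some predictive accuracy; that tension is the trade-off the outcomes ask students to reason about.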

Module Syllabus


I lecture the first part of the course, covering the following content:

"Introduction:

  • aims, outcomes, requirements, expectations
  • what will be covered, structure of the course

PART 1: Ethics and AI (Spring Term weeks 1-6; 12 hrs total)

  • Motivating examples: self-driving cars, drones, data storage and usage, bias in ML algorithms.
  • Moral dilemmas (inc. the Trolley problem, Plato’s knife, the Samaritan Machine).
  • Background and brief history.
  • Ethical paradigms of relevance to AI: virtue ethics, consequentialism (inc. utilitarianism), deontology.
  • Practical reasoning and “doing the right thing”; engineering vs ethics.
  • Artificial agents and responsibility.
  • Types of artificial moral agents (amoral, implicit, explicit).
  • Explicit moral agents, rule-based approaches to ethics in AI, logic-based approaches.
  • Approaches to building moral agents; top-down vs bottom-up; explainability.
  • Building ethical paradigms into AI (selection from Anderson & Anderson, Pereira, Asimov’s rules in a football-playing robot, and the “Moral dilemmas for self-driving cars” study by the MIT Media Lab)."
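
As a toy illustration of the rule-based, top-down style of explicit moral agent covered in the last bullets above: the Python sketch below vetoes candidate actions with hard deontological rules, then ranks the survivors by a consequentialist score. Everything in it (the Action fields, the single rule, the numbers) is hypothetical, invented for the sketch rather than taken from the course.

    from dataclasses import dataclass

    @dataclass
    class Action:
        name: str
        harms_human: bool        # hypothetical flag from the agent's world model
        expected_benefit: float  # hypothetical consequentialist utility estimate

    # Top-down, rule-based layer: deontological constraints veto actions outright.
    RULES = [
        ("do no harm to humans", lambda a: not a.harms_human),
    ]

    def choose_action(candidates):
        """Filter actions through the rules, then maximise expected benefit."""
        permitted = [a for a in candidates
                     if all(check(a) for _, check in RULES)]
        if not permitted:
            return None  # no ethically permissible action remains
        return max(permitted, key=lambda a: a.expected_benefit)

    options = [
        Action("swerve onto the pavement", harms_human=True, expected_benefit=0.9),
        Action("brake hard in lane", harms_human=False, expected_benefit=0.6),
    ]
    chosen = choose_action(options)
    print(chosen.name if chosen else "no permissible action")  # brake hard in lane

One virtue of this top-down style, versus a bottom-up agent that learns its constraints from examples, is explainability: the agent can cite the exact rule that vetoed an action.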