Course description

Cause and effect relationships play a central role in how we understand the world around us and how we act upon it. Causal claims are intended to be more than descriptions or summaries of the observed data; they relate to the underlying data-generating processes. When recognised and mastered, such a mechanistic understanding allows one to infer what would happen under various hypothetical (counter-to-fact) scenarios, going beyond the limited observations (i.e., data) afforded up to that point. For example, physicians can infer the likely effect of different medications on their patient cohort, and legislators can predict the consequences of new laws and regulations. This set of lectures formally introduces the study of causality from data through the lens of Pearl’s Causal Hierarchy (Pearl, 2009), in which the distinctions between observations, interventions, and counterfactuals emerge from an underlying structural causal model (SCM). In practice, we can expect researchers to have only partial knowledge of the underlying mechanisms, which motivates the problem of causal inference: making causal statements based on observations and partial knowledge of the SCM, for example encoded in a causal graph. The back-door criterion and the do-calculus are examples of graphical criteria for identifying causal effects. More generally, this inferential problem of combining a causal query, data, and structural assumptions can be shown to extend to several modern tasks, including transfer learning (also known as transportability theory) and reinforcement learning. This course will go into substantial detail on the foundations of causal inference and on algorithms and graphical criteria for the identification of causal effects, ultimately presenting a general (causal) theory for data science and machine learning.
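As a brief illustration of the kind of result covered (an example included here for orientation, not taken from the lecture notes): if a set of covariates Z satisfies the back-door criterion relative to a treatment X and an outcome Y, then the causal effect of X on Y is identifiable from observational data via the adjustment formula

P(y \mid do(x)) = \sum_{z} P(y \mid x, z)\, P(z),

a special case of what the do-calculus establishes in full generality.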

Course dates

Fridays May 5, 12, 19, 26 and June 2, 16:00-18:00 in IX5 (floor 5), Translation and Innovation Hub (I-HUB), White City Campus. Dr Alexis Bellot will speak for approximately 90 minutes in each session, followed by time for discussion.

Please contact Cris Salvi (c.salvi@imperial.ac.uk) if you are interested in attending the workshop.

Alexis Bellot is a research scientist at DeepMind in London, UK. He was previously a postdoctoral scholar at Columbia University, sponsored by Professor Elias Bareinboim. Before Columbia, he obtained a PhD in Applied Mathematics from the University of Cambridge under the supervision of Professor Mihaela van der Schaar. Alexis works on the study of causality from data and its applications, with an emphasis on methods and theory that combine causality and machine learning, both to improve the robustness of machine learning algorithms and to improve causal discovery and causal inference methods.

Course content

1. Foundations of Causal Inference (Lecture 1)
A model-based approach to data science, structural causal models, Pearl’s Causal Hierarchy, causal graphs.
2. Structure Learning (Lecture 2, Recording 2)
Statistical constraints implied by causal models and how to exploit them to recover (parts of) causal models.
3. Identification of Causal Effects (Lecture 3, Recording 3)
Graphical conditions and assumptions for uniquely computing causal effects, do-calculus, partial identification.
4. Transportability (Lecture 4, Recording 4)
Exploiting causal structure for transfer learning.
5. Reinforcement Learning (Lecture 5)
Exploiting causal structure for decision-making, soft interventions, policies.
