
The lack of explainability in AI, e.g. in machine learning or recommender systems, has become one of the field's most pressing issues, especially given the ever-increasing integration of AI techniques into everyday systems used by experts and non-experts alike. The need for explainability arises for a number of reasons: an expert may require more transparency to justify the outputs of an AI system, especially in safety-critical settings such as self-driving cars, while a non-expert may place more trust in an AI system that provides basic explanations, for example for the movies a recommender system suggests.

This workshop will bring together doctoral, early-stage and experienced researchers working in all areas of AI where there is either a need for explainability or potential for providing it. The workshop will consist of invited talks, presentations from members of Imperial College London’s scientific community, and discussions covering, among other topics, the format, purpose and identification of explanations in various AI settings. We welcome presentations of two kinds:

  • short presentations (circa 15 minutes) on ongoing work
  • long presentations (circa 30 minutes) on consolidated (e.g. published) work

For more information, visit:
https://www.doc.ic.ac.uk/~afr114/explainAI/index.html