Imperial College London

Dr Nico Potyka

Faculty of Engineering, Department of Computing

Honorary Research Fellow

Contact

n.potyka

Location

Huxley Building, South Kensington Campus


Summary

Overview

My research revolves around the questions of how we can represent and reason about knowledge, how we can handle uncertainty and inconsistency, how we can guarantee the robustness and correctness of autonomous systems, and how we can explain their decisions both faithfully and comprehensibly to the user. I believe that the answer requires hybrid systems that combine the transparency and verifiability of symbolic methods with the flexibility and efficiency of machine-learning methods.

Computational Argumentation


Computational argumentation studies methods to represent and reason about arguments as they naturally occur in online discussions, political debates or general decision problems. In contrast to the assumptions of classical logic, argumentation problems are naturally filled with contradicting arguments, so that arguments often cannot be declared definitely true or definitely false, but only acceptable or unacceptable. Argumentation formalisms can be roughly divided into structured approaches, which take the logical structure of arguments into account, and abstract approaches, which abstract from the content of arguments and focus on their relationships. Furthermore, we can distinguish qualitative approaches, which identify acceptable sets of arguments, and quantitative approaches, which quantify the acceptability of individual arguments.
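
To make the abstract, qualitative view concrete, here is a minimal sketch of Dung-style grounded semantics in Python: an argument is accepted if every one of its attackers is counter-attacked by an already accepted argument. The three-argument framework is made up for illustration.

```python
# Minimal sketch of abstract argumentation with grounded semantics.
# The example framework (a attacks b, b attacks c) is illustrative only.

def grounded_extension(arguments, attacks):
    """Iterate the characteristic function from the empty set: an argument is
    accepted if each of its attackers is attacked by an already accepted one."""
    accepted = set()
    while True:
        defended = {
            a for a in arguments
            if all(any((c, b) in attacks for c in accepted)
                   for b in arguments if (b, a) in attacks)
        }
        if defended == accepted:
            return accepted
        accepted = defended

args = {"a", "b", "c"}
atts = {("a", "b"), ("b", "c")}
print(grounded_extension(args, atts))  # {'a', 'c'} (set order may vary)
```

Here "a" is unattacked and therefore accepted, which in turn defends "c" against "b"; "b" is rejected.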

Probabilistic Reasoning


Among the most popular probabilistic reasoning approaches are probabilistic graphical models, which represent random variables and their relationships in a graphical structure and exploit independencies to make learning and inference more efficient. Another interesting family are probabilistic logics, which combine classical logic and probability theory to allow automated reasoning with the uncertain information that naturally occurs in applications like medical diagnosis or legal reasoning. Combining the two allows describing reasoning problems more naturally than in pure probability theory (using logical formulas rather than abstract events or random variables) and more accurately than in pure logic (replacing the truth values 0 and 1 with the probability interval from 0 to 1). The area of statistical relational artificial intelligence brings together ideas from probabilistic graphical models, probabilistic logics and logic programming.
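
As a small illustration of how a graphical factorization is exploited, the sketch below computes a marginal and a posterior in a toy two-parent network by summing out variables; the Rain/Sprinkler/Wet structure and all numbers are invented for the example.

```python
# Toy Bayesian network: Rain and Sprinkler are independent parents of Wet.
# The joint factorizes as P(Rain, Sprinkler, Wet) = P(Rain) P(Sprinkler) P(Wet | Rain, Sprinkler).
P_rain = {True: 0.2, False: 0.8}
P_sprinkler = {True: 0.1, False: 0.9}
P_wet_given = {  # P(Wet=True | Rain, Sprinkler)
    (True, True): 0.99, (True, False): 0.90,
    (False, True): 0.80, (False, False): 0.0,
}

def p_wet():
    """Marginal P(Wet=True): sum out Rain and Sprinkler using the factorization."""
    return sum(P_rain[r] * P_sprinkler[s] * P_wet_given[(r, s)]
               for r in (True, False) for s in (True, False))

def p_rain_given_wet():
    """Posterior P(Rain=True | Wet=True) via Bayes' rule."""
    numerator = sum(P_rain[True] * P_sprinkler[s] * P_wet_given[(True, s)]
                    for s in (True, False))
    return numerator / p_wet()

print(round(p_wet(), 3), round(p_rain_given_wet(), 3))  # 0.246 0.74
```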

Inconsistency Tolerance


Both classical and probabilistic logic are brittle in the sense that contradictory information can render the consequences meaningless. There are various ways to overcome this problem. Repair operators try to repair an inconsistent knowledge base while maintaining as much of the consistent information as possible. Another approach is to design inconsistency-tolerant reasoning methods that can derive non-trivial results even if the knowledge base is inconsistent. Inconsistency measures quantify the degree of inconsistency and thus help make a more informed choice about the right tool.
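
One common measure counts the minimal inconsistent subsets of the knowledge base. The sketch below computes it by brute force for a toy propositional knowledge base; the formulas and the naive model enumeration are purely illustrative.

```python
# Toy inconsistency measurement: count minimal inconsistent subsets (MI measure).
# Formulas are encoded as functions over a truth assignment; example KB is illustrative.
from itertools import product, combinations

ATOMS = ("a", "b")
KB = {
    "a":      lambda v: v["a"],
    "not_a":  lambda v: not v["a"],
    "a_or_b": lambda v: v["a"] or v["b"],
}

def consistent(formulas):
    """True if some truth assignment satisfies all formulas simultaneously."""
    return any(
        all(f(dict(zip(ATOMS, values))) for f in formulas)
        for values in product((True, False), repeat=len(ATOMS))
    )

def minimal_inconsistent_subsets(kb):
    """All inconsistent subsets of the KB whose proper subsets are consistent."""
    mis = []
    for size in range(1, len(kb) + 1):
        for names in combinations(kb, size):
            subset = [kb[n] for n in names]
            if not consistent(subset) and not any(set(m) < set(names) for m in mis):
                mis.append(names)
    return mis

mis = minimal_inconsistent_subsets(KB)
print(mis, "MI measure:", len(mis))  # [('a', 'not_a')] MI measure: 1
```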

Knowledge Graphs and Description Logics


Knowledge graphs can be seen as simple databases that represent data in the form of (subject, predicate, object) triples. Popular examples include DBpedia and YAGO, which contain hundreds of millions of facts that can be used in intelligent systems. Description logics are formalisms that allow reasoning about the data in the knowledge graph in order to infer new information that is not explicitly stored. Knowledge graph embeddings aim at representing knowledge graphs as vectors (similar to how word embeddings represent words) and can be used as standalone tools for plausible reasoning or to inject background knowledge into machine learning models. Ontology and rule embeddings refine standard knowledge graph embeddings by taking logical relationships into account, which can improve the overall embedding.
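
As a small illustration of the embedding idea, the sketch below scores triples in the TransE style, where a triple (subject, relation, object) is considered plausible if the subject vector plus the relation vector lands close to the object vector; the entities and the random, untrained vectors are toy examples, not a trained model.

```python
# TransE-style scoring sketch. In a real system the vectors would be learned
# from the triples of a graph such as DBpedia; here they are random placeholders.
import numpy as np

rng = np.random.default_rng(0)
dim = 16
entities = {e: rng.normal(size=dim) for e in ("Berlin", "Germany", "Paris", "France")}
relations = {"capital_of": rng.normal(size=dim)}

def transe_score(subj, pred, obj):
    """Plausibility of a triple: small distance ||s + r - o|| means more plausible."""
    return -np.linalg.norm(entities[subj] + relations[pred] - entities[obj])

# Plausible reasoning: rank candidate objects for (Berlin, capital_of, ?).
candidates = ["Germany", "France", "Paris"]
ranked = sorted(candidates, key=lambda o: -transe_score("Berlin", "capital_of", o))
print(ranked)  # ordering is arbitrary here because the vectors are untrained
```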

Explainable AI


Automatic decision making is increasingly driven by black-box machine learning models. However, the opaqueness of these models raises questions about fairness, reliability and safety. For example, research in adversarial machine learning has demonstrated that black-box models can be brittle: minor changes in the inputs can result in catastrophically different outputs, which is a severe risk in safety-critical applications like autonomous driving. Similarly, the black-box nature of many models makes it impossible to guarantee that a model did not learn sexist, racist or other undesirable biases. Explainable AI aims at making autonomous systems more transparent, for example by designing systems that are interpretable or by making the mechanics of black-box models more transparent.
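
To make the brittleness point concrete, the sketch below applies a fast-gradient-style perturbation to a toy logistic classifier: a small, targeted change to every feature flips the prediction. The weights, input and step size are invented for the example.

```python
# Adversarial perturbation sketch for a linear (logistic) classifier.
import numpy as np

w = np.array([1.0, -2.0, 0.5])   # toy model weights
b = 0.1                          # toy bias
x = np.array([0.3, 0.1, 0.4])    # an input classified as positive

def predict(x):
    """Probability of the positive class under the logistic model."""
    return 1 / (1 + np.exp(-(w @ x + b)))

# For a linear model the gradient of the score w.r.t. the input is just w;
# stepping each feature slightly against the prediction flips the class.
epsilon = 0.2
x_adv = x - epsilon * np.sign(w)

print(predict(x), predict(x_adv))  # ~0.60 vs ~0.43: prediction crosses the 0.5 boundary
```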