Algorithm bias warning as Vice-Provost advises parliamentary committee

Vice-Provost (Research and Enterprise) Professor Nick Jennings giving evidence to the Commons Science and Technology Committee

Growing use of algorithms in decision-making risks disproportionate bias against certain groups, a new parliamentary report has warned.

Imperial’s Vice-Provost (Research and Enterprise) Professor Nick Jennings, a leading authority on artificial intelligence and machine learning, has been cited in the report, Algorithms in decision-making, published by the House of Commons Science and Technology Committee.

The report, published yesterday, highlights how the growth of big data and machine learning has hugely increased the role algorithmic decision-making plays in all sectors of the economy and every part of public life, from financial transactions to the criminal justice system. This has profound implications for transparency, privacy and citizens’ rights.

The report calls for the government’s Centre for Data Ethics and Innovation to examine algorithmic biases, to ensure that the biases that can influence human decision-making are not embedded in automated decision-making processes.


Professor Nick Jennings, who is a Fellow of the Royal Academy of Engineering, gave oral evidence to a meeting of the committee in November 2017.

He explained to the inquiry that algorithms have been used to assist decision-making for centuries, and that their use pre-dates computers. He further pointed out that “biased results” can be produced by “poorly trained algorithms”, which can arise when inappropriate “training data” – the data used to teach the algorithm to identify patterns and apply statistical rules – is used.
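To make the mechanism concrete, the sketch below (a minimal illustration with invented data, not an example from the report or the evidence session) trains a simple classifier on synthetic “historical” decisions that were biased against one group, and shows the model reproducing that bias for equally qualified applicants:

```python
# Illustrative only: synthetic data showing how human bias in historical
# decisions can be learned by a model trained on them.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

qualification = rng.normal(0, 1, n)   # a genuine merit signal
group = rng.integers(0, 2, n)         # a protected attribute (0 or 1)

# Historical labels: equally qualified applicants from group 1 were
# approved less often -- this is the bias embedded in the training data
approved = (qualification - 0.8 * group + rng.normal(0, 0.5, n)) > 0

X = np.column_stack([qualification, group])
model = LogisticRegression().fit(X, approved)

# For identical qualification scores, the trained model now assigns
# group 1 a lower approval probability: the human bias has been learned
same_score = np.array([[0.5, 0], [0.5, 1]])
print(model.predict_proba(same_score)[:, 1])
```

Nothing in the code singles out the group deliberately; the skew comes entirely from the labels the model was trained on, which is exactly the risk posed by inappropriate training data.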

The report also outlines the need for transparency to ensure algorithm accountability, or “the ability to challenge and scrutinise the decisions reached using algorithms”. It argues that by making the code behind algorithms more widely available, errors in algorithms are more likely to be corrected, leading to better automated decision-making.

This is a particular problem, according to Dr Pavel Klimov of the Law Society’s Technology and the Law Group, because the use of algorithms can remove humans from the decision-making process, leaving us unable to understand why a wrong decision has been taken.

Professor Jennings noted that transparency can raise issues too, with bad actors able to take advantage of “adversarial machine learning”, using their knowledge of how an algorithm works to “dupe it” and exploit its flaws.
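A minimal sketch of the idea (a toy linear model with invented numbers, not drawn from the evidence session): an attacker who knows a model’s weights can craft a small, targeted change to an input that flips the model’s decision.

```python
# Illustrative only: a fast-gradient-style attack on a toy linear classifier.
import numpy as np

# The model: approve when w . x + b > 0
w = np.array([1.5, -2.0, 0.5])
b = -0.2

x = np.array([0.1, 0.4, 0.3])     # an input the model rejects
print(w @ x + b)                  # -0.7 -> rejected

# Knowing w, nudge each feature in the direction that most increases
# the score (the sign of the corresponding weight)
epsilon = 0.3
x_adv = x + epsilon * np.sign(w)

print(w @ x_adv + b)              # 0.5 -> the model is duped into approving
```

In practice attackers are rarely handed the weights, but repeated probing of a deployed model can approximate them, which is why transparency cuts both ways.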

Digital Secretary Matt Hancock at the AI Sector Deal launch at the College in April 2018

The report comes after the government launched its £1bn AI Sector Deal at the College last month, where Business Secretary Greg Clark and Digital Secretary Matt Hancock announced funding for 1,000 new AI PhDs to keep the UK at the forefront of innovation.

The publication of the report marks the latest in a series of influential parliamentary reports related to artificial intelligence to feature contributions and citations from the Imperial community.

Last month the House of Lords Select Committee on Artificial Intelligence published its report, citing Professor Chris Hankin, Co-Director of the Institute for Security Science and Technology, and Maja Pantic, Professor of Affective and Behavioural Computing.

The College also responded to the government’s consultation on its Industrial Strategy Green Paper in April 2017.

Reporter

Tom Rutland
Communications and Public Affairs

Contact details

Email: press.office@imperial.ac.uk

Tags:

Strategy-collaboration, 4IR, Research, Big-data, Artificial-intelligence, Strategy-decision-makers, The-Forum