Anonymising personal data ‘not enough to protect privacy’, shows new study

Image: Could you be identified through 'anonymous' data?

Current methods for anonymising data leave individuals at risk of being re-identified, according to new UCLouvain and Imperial research.


With the first large fines for breaching the EU General Data Protection Regulation (GDPR) upon us, and the UK government about to review GDPR guidelines, researchers have shown how even anonymised datasets can be traced back to individuals using machine learning, a type of artificial intelligence.

The researchers say their paper, published in Nature Communications, demonstrates that allowing data to be used – to train AI algorithms, for example – while preserving people’s privacy requires much more than simply adding noise, sampling datasets, or applying other de-identification techniques.

They have also published a demonstration tool that allows people to understand just how likely they are to be traced, even if the dataset they are in is anonymised and just a small fraction of it shared.

The researchers say their findings should be a wake-up call for policymakers on tightening the rules for what constitutes truly anonymous data.

Wake-up call


Companies and governments both routinely collect and use our personal data. The way our data is used is protected under laws such as the EU’s GDPR and the US’s California Consumer Privacy Act (CCPA).

Data is ‘sampled’ and anonymised, which includes stripping it of identifying characteristics like names and email addresses, so that individuals cannot, in theory, be identified. After this process, the data is no longer subject to data protection regulations, so it can be freely used and sold to third parties like advertising companies and data brokers.
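To make the process concrete, here is a minimal sketch of that kind of de-identification pipeline in Python with pandas. The records, column names, and sampling fraction are all invented for illustration:

```python
import pandas as pd

# Hypothetical customer records; names, emails, and values are made up.
df = pd.DataFrame({
    "name":   ["Alice Smith", "Bob Jones", "Carol White"],
    "email":  ["alice@example.com", "bob@example.com", "carol@example.com"],
    "zip3":   ["100", "112", "100"],            # first digits of ZIP code
    "dob":    ["1985-01-05", "1990-07-12", "1985-01-05"],
    "gender": ["F", "M", "F"],
})

# Step 1: strip direct identifiers such as names and email addresses.
deidentified = df.drop(columns=["name", "email"])

# Step 2: 'sample' the dataset, releasing only a fraction of the rows.
released = deidentified.sample(frac=0.5, random_state=0)

print(released)
```

Note that the columns that survive – ZIP prefix, date of birth, gender – are exactly the kind of quasi-identifiers the study shows can still single people out.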

The new research shows that once bought, the data can often be reverse engineered using machine learning to re-identify individuals, despite the anonymisation techniques.

In the paper, 99.98 per cent of Americans would be correctly re-identified in any available ‘anonymised’ dataset using just 15 characteristics, including age, gender, and marital status.

Co-author Dr Luc Rocher of UCLouvain said: “While there might be a lot of people who are in their thirties, male, and living in New York City, far fewer of them were also born on 5 January, are driving a red sports car, and live with two kids (both girls) and one dog.”
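A minimal sketch of what that matching step can look like, assuming an attacker who already knows a handful of the target’s attributes (all records and attributes below are invented):

```python
import pandas as pd

# A hypothetical 'anonymised' release: direct identifiers are gone,
# but quasi-identifiers and one sensitive column remain.
released = pd.DataFrame({
    "zip3":   ["100", "112", "100", "100"],
    "dob":    ["1985-01-05", "1990-07-12", "1985-01-05", "1972-03-30"],
    "gender": ["F", "M", "F", "M"],
    "cars":   [1, 2, 2, 0],
    "salary": [52000, 61000, 58000, 47000],   # sensitive attribute
})

# Attributes an attacker might already know about a target,
# e.g. from social media or public records.
target = {"zip3": "100", "dob": "1985-01-05", "gender": "F", "cars": 1}

candidates = released
for column, value in target.items():
    candidates = candidates[candidates[column] == value]
    print(f"after matching on {column!r}: {len(candidates)} candidate(s)")

# Once a single candidate remains, its 'anonymous' salary is linked
# back to the target.
print(candidates)
```

Each extra known attribute shrinks the candidate set, which is why a combination as mundane as postcode, birthday, and gender can be enough.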

This could expose sensitive information about re-identified individuals, and allow buyers to build increasingly comprehensive personal profiles.

For example, re-identifying anonymised data is how New York Times journalists exposed Donald Trump’s 1985-94 tax returns in May 2019.

The research demonstrates for the first time how easily and accurately this can be done – even with incomplete datasets.

A demonstration

Alongside the paper, the researchers published a machine learning tool that evaluates the likelihood that a given combination of characteristics is precise enough to describe only one person in a population of billions.

They also developed an online tool, which doesn’t save data and is for demonstration purposes only, to help people see which characteristics make them unique in datasets.

Image: the demonstration tool explaining how sampling can help anonymise data

The tool first asks you to enter the first part of your post (UK) or ZIP (US) code, your gender, and your date of birth, before giving you the probability that your profile could be re-identified in any anonymised dataset.

It then asks for your marital status, number of vehicles, home ownership status, and employment status, before recalculating. With each characteristic you add, the likelihood that a match is correct increases dramatically.
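The numbers below are a rough back-of-the-envelope illustration of why this happens, not the paper’s generative model: it assumes made-up, independent attribute frequencies, whereas the actual method models correlated attributes.

```python
# Illustrative only: if a fraction p of a population of size n shares
# all of your known attributes, about n*p records match, and a single
# match is you with probability roughly 1/(n*p), capped at 1.

n = 66_000_000  # assumed population size, roughly the UK

# Made-up marginal frequencies, assumed independent purely for
# illustration; real attributes are correlated.
attributes = {
    "postcode/ZIP prefix": 1 / 3_000,
    "gender":              1 / 2,
    "date of birth":       1 / 36_500,  # ~100 years of possible birthdays
    "marital status":      1 / 4,
    "number of vehicles":  1 / 3,
    "employment status":   1 / 5,
}

p = 1.0
for name, freq in attributes.items():
    p *= freq
    matches = n * p
    correct = min(1.0, 1.0 / matches)
    print(f"+ {name:<20} expected matches: {matches:12.2f}  "
          f"P(match is you): {correct:7.2%}")
```

Each added attribute divides the expected number of matching people, so the probability that a single match is correct climbs quickly towards certainty.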

Senior author Dr Yves-Alexandre de Montjoye, of Imperial’s Department of Computing and Data Science Institute, said: “This is pretty standard information for companies to ask for. Although they are bound by GDPR guidelines, they’re free to sell the data to anyone once it’s anonymised. Our research shows just how easily – and how accurately – individuals can be traced once this happens.”

He added: “Companies and governments downplay the risk of re-identification by arguing that the datasets they sell are always incomplete. Our findings show this might not help.

“The results demonstrate that an attacker could easily and accurately estimate the likelihood that the record they found belongs to the person they are looking for.”

Co-author Professor Julien Hendrickx from UCLouvain said: “We’re often assured that anonymisation will keep our personal information safe. Our paper shows that de-identification is nowhere near enough to protect the privacy of people’s data.”

The researchers say policymakers must do more to protect individuals from such attacks, which could have serious ramifications for careers as well as personal and financial lives.

Professor Hendrickx added: “It is essential for anonymisation standards to be robust and account for new threats like the one demonstrated in this paper.”

Dr de Montjoye said: “The goal of anonymisation is to help use data to benefit society. This is extremely important but should not and does not have to happen at the expense of people’s privacy.”

DISCLAIMER: The online demonstration tool does not save personal data and is for demonstration purposes only.

‘Estimating the success of re-identifications in incomplete datasets using generative models’, by Luc Rocher, Julien M. Hendrickx and Yves-Alexandre de Montjoye. Published 23 July 2019 in Nature Communications.


Reporter

Caroline Brogan
Communications Division

Contact details

Tel: +44 (0)20 7594 3415
Email: caroline.brogan@imperial.ac.uk


Tags:

Comms-strategy-Wider-society, Research, International, Security-science, Industry, Europe, 4IR, Artificial-intelligence, Big-data, Global-challenges-Data
