

De-identification is the masking, manipulation or removal of personal data with the aim of making individuals in a dataset less easy to identify. It is especially important when you want to share, publish or archive your dataset. Before sharing, publishing or archiving your data, you should determine whether it is possible to de-identify your dataset, while also keeping in mind its usability.

Pseudonymization

Pseudonymization is a de-identification procedure during which personally identifiable information is replaced by a unique alias or code (pseudonym). In general, the names and/or contact details of data subjects are stored together with this pseudonym in a so-called keyfile. The keyfile enables the re-identification of individuals in the dataset. Keyfiles are stored separately from the rest of the data and access to them should be restricted. In contrast to an anonymized dataset, a pseudonymized dataset in principle still allows for the re-identification of data subjects.
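As a minimal sketch of this procedure (assuming a simple list of records with a hypothetical "name" field; real implementations need secure storage for the keyfile):

```python
import uuid

def pseudonymize(records, id_field="name"):
    """Replace the identifying field in each record with a random pseudonym.

    Returns the pseudonymized records plus a keyfile mapping
    pseudonym -> original value. The keyfile must be stored separately
    from the data, with restricted access.
    """
    keyfile = {}
    out = []
    for rec in records:
        pseudonym = uuid.uuid4().hex[:8]  # random 8-character code
        keyfile[pseudonym] = rec[id_field]
        out.append({**rec, id_field: pseudonym})
    return out, keyfile

data = [{"name": "A. Jansen", "score": 7}, {"name": "B. de Vries", "score": 9}]
pseudo, key = pseudonymize(data)
```

Re-identification is then only possible for whoever holds the keyfile, by looking the pseudonym up again.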

Refer to our page on pseudonymization for practical advice on its implementation.

Anonymization

Anonymization is a de-identification procedure during which “personal data is altered in such a way that a data subject can no longer be identified directly or indirectly, either by the data controller alone or in collaboration with any other party.” (ISO 25237:2017 Health informatics -- Pseudonymization. ISO. 2017. p. 7.). In contrast to a pseudonymized dataset, an anonymized dataset does not allow for the re-identification of data subjects and is therefore no longer considered personal data.

There are several techniques that can make your dataset less identifiable. Check out possible techniques to de-identify your data below, but be aware that these techniques often affect its analytical value.

Removing or suppressing

Consider whether you can remove or suppress sensitive elements.

  • Remove variables that reveal rare personal attributes.
  • Remove direct identifiers, such as patient ID.
  • Use restricted access to your data and only provide those variables to researchers that are necessary to answer their research question.
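A minimal sketch of dropping direct identifiers, assuming records are simple dictionaries (the field names are hypothetical):

```python
def remove_identifiers(records, identifiers=("patient_id", "name")):
    """Drop direct-identifier fields from each record."""
    return [
        {k: v for k, v in rec.items() if k not in identifiers}
        for rec in records
    ]

rows = [{"patient_id": "P001", "name": "A. Jansen", "age": 34}]
cleaned = remove_identifiers(rows)
```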

Replacing or masking

A practice in which you replace sensitive personal data with values or codes that are not sensitive:

  • Replace direct identifiers (‘name’) with a pseudonym (‘X’).
  • Make numerical values less precise.
  • Replace identifiable text with ‘[redacted]’.

Masking is typically partial, i.e. applied only to some of the characters in an attribute. For example, keep only the first two characters of a postal code: change 9746DC into 97.

Aggregation & generalization

Reduce the level of detail of your dataset by generalizing variables, which makes it harder to identify individual subjects. This can be applied to both quantitative and qualitative datasets. For example, replace addresses with the neighborhood or city, and replace birth date or exact age with an age group.
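Generalizing an exact age into an age group, for instance, can be sketched as follows (the band width of ten years is an illustrative choice):

```python
def age_to_group(age, width=10):
    """Generalize an exact age into a band, e.g. 34 -> '30-39'."""
    low = (age // width) * width
    return f"{low}-{low + width - 1}"
```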

Bottom- and top-coding

Bottom- and top-coding can be applied to datasets with unique extreme values. Set a minimum or maximum threshold and recode all values below or above it to that threshold. For instance, top-code the variable ‘income’ by setting all incomes over €100.000 to €100.000. This distorts the tails of the distribution, yet leaves a large part of the data intact.

Adding noise

Noise addition is usually combined with other anonymization techniques and is mostly (but not always) applied to quantitative datasets:

  • Add half a standard deviation to a variable.
  • Multiply a variable by a random number.
  • Blur photos and videos or alter voices in audio recordings.
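The first bullet can be read as adding random noise whose spread is tied to the variable's own standard deviation; a toy sketch under that interpretation (the scale factor of one half is illustrative):

```python
import random
import statistics

def add_noise(values, scale=0.5, seed=None):
    """Add zero-mean Gaussian noise with a standard deviation equal to
    `scale` times the variable's own standard deviation."""
    rng = random.Random(seed)
    sd = statistics.stdev(values)
    return [v + rng.gauss(0, scale * sd) for v in values]

noisy = add_noise([10.0, 12.0, 9.0, 11.0], seed=42)
```

A fixed seed is used here only to make the example reproducible; in practice the noise should not be reproducible by others.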

Permutation

Permutation is applied to quantitative datasets. Shuffle the values of an attribute across the records in a table, so that they become artificially linked to different data subjects. The exact distribution of each attribute is thereby retained, but identification of data subjects is made more difficult.
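A minimal sketch of shuffling one attribute while leaving the rest of each record in place (the "salary" field is a hypothetical example):

```python
import random

def permute_column(records, field, seed=None):
    """Shuffle one attribute across records: the column's distribution
    is unchanged, but values no longer line up with the right subjects."""
    rng = random.Random(seed)
    values = [rec[field] for rec in records]
    rng.shuffle(values)
    return [{**rec, field: v} for rec, v in zip(records, values)]

rows = [{"id": i, "salary": s} for i, s in enumerate([30, 45, 60, 75])]
shuffled = permute_column(rows, "salary", seed=1)
```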

Synthetic data

Synthetic data are artificially generated rather than collected from real-world events (e.g., flight simulators or audio synthesizers). In research, synthetic datasets can be designed to replicate the statistical patterns of real datasets that are too sensitive to share openly. Creating a synthetic version of your dataset allows researchers to:

  • Access relevant data without compromising the privacy or safety of data subjects.
  • Evaluate whether the dataset suits their research needs and begin developing code, refining models, and testing hypotheses.
  • Educate students on how to preprocess and analyze sensitive data, without exposing information about real individuals.
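As a deliberately simplified illustration of the idea, the sketch below draws synthetic values from a normal distribution fitted to one real variable. This toy model only reproduces a single marginal distribution; real synthetic-data generators also preserve the relationships between variables.

```python
import random
import statistics

def synthesize(values, n, seed=None):
    """Draw n synthetic values from a normal distribution fitted to the
    mean and standard deviation of the real data (marginal model only)."""
    rng = random.Random(seed)
    mu = statistics.mean(values)
    sd = statistics.stdev(values)
    return [rng.gauss(mu, sd) for _ in range(n)]

real_ages = [23, 35, 41, 29, 52, 38]
fake_ages = synthesize(real_ages, n=100, seed=0)
```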

Watch this video for an accessible introduction to synthetic data

For more in-depth information on these techniques, including guarantees, common mistakes, and potential failures, please refer to Chapter 3 of Opinion 05/2014 on Anonymisation Techniques (Article 29 Data Protection Working Party, 2014).