Uncovering discrimination in public fraud detection systems
In recent years, algorithmic systems used by the Dutch government to detect fraud in welfare benefits, allowances, and student loans were found to be discriminatory, causing serious harm to citizens. The Childcare Benefits Scandal brought these issues to the fore, sparking political and societal debate, investigations, and reforms. This session will explore the causes of discriminatory outcomes, why they went undetected for so long, the red flags in such systems, and the steps governments and society can take to ensure the fair use of public algorithms. I will also share lessons learned as an AI expert within the Dutch government.
In recent years, several examples have come to light in the Netherlands of algorithmic systems developed and deployed by the government that were later found to be discriminatory. These systems, used to detect fraud in welfare benefits, allowances, and student loans, caused severe financial and emotional harm to citizens. The most devastating example of this was the Childcare Benefits Scandal.
Thanks to the efforts of investigative journalists, civil society organizations, auditors, and determined individuals, these injustices came to light. The systems became a focal point of political and societal debate, leading to investigations and the introduction of new legislation, policies, and tools to address the issues.
In this session, I would like to share some of the lessons I have learned as an AI expert within the Dutch government. The following topics will be discussed:
- Causes of discriminatory outcomes: What are the main causes of discriminatory outcomes in public algorithmic fraud detection systems?
- Lack of early detection: How was it possible for these issues to remain unnoticed for so long?
- Red flags: What recurring patterns can be observed in these systems, and what signals indicate potential risks?
- Measures and actions: What steps should governments take to prevent discrimination and other harms caused by public fraud detection algorithms? And what can we, as a digital society, do to ensure fairer use of public algorithmic systems?
It is becoming increasingly clear that similar public fraud detection systems are causing harm not only in the Netherlands, but also in countries such as Australia, the United Kingdom, Denmark, and Sweden. The lessons shared in this presentation are therefore broadly applicable.