On January 15, the Dutch government was forced to resign amid a scandal around its child-care benefits scheme. Systems meant to detect misuse of the scheme mistakenly labelled over 20,000 parents as fraudsters. Crucially, a disproportionate number of those labelled as fraudsters had an immigration background.
Amid the upheaval, little attention was paid to the fact that the tax authority had been using algorithms to guide its decision-making. A report by the Dutch Data Protection Authority made clear that a ‘self-learning’ algorithm was used to classify benefit claims. Its role was to learn which claims carried the highest risk of being false. The risk-classification model served as a first filter; officials then scrutinized the claims with the highest risk label. As it turns out, certain claims by parents with dual citizenship were systematically identified by the algorithm as high-risk, and officials then hastily marked those claims as fraudulent.
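To make the mechanism concrete, the sketch below shows, in Python with invented data and features, how a risk classifier trained on skewed historical labels can end up concentrating one group of claimants in the manual-review queue. It is a minimal illustration of the kind of pipeline the report describes, not the actual model, features, or data used by the tax authority.

```python
# Hypothetical sketch of a risk-classification filter. The features, weights,
# and data are invented for illustration and do not reflect the real system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

# Synthetic claims: [claimed_amount_normalised, dual_citizenship_flag]
claims = np.column_stack([
    rng.uniform(0, 1, n),    # size of the claimed benefit (normalised)
    rng.integers(0, 2, n),   # 1 if the claimant holds dual citizenship
])

# Historical "fraud" labels that are themselves skewed: if past investigations
# targeted dual-citizenship claimants more often, their claims end up labelled
# fraudulent more frequently, regardless of the claims themselves.
labels = (rng.uniform(0, 1, n) < 0.05 + 0.15 * claims[:, 1]).astype(int)

# The "self-learning" step: fit a classifier on the biased historical labels.
model = LogisticRegression().fit(claims, labels)

# First-filter step: score all incoming claims and send the highest-risk
# fraction to manual review, mirroring the high-risk label officials acted on.
risk_scores = model.predict_proba(claims)[:, 1]
review_queue = np.argsort(risk_scores)[::-1][: n // 20]  # top 5%

share_dual = claims[review_queue, 1].mean()
print(f"Share of dual-citizenship claims in the review queue: {share_dual:.0%}")
# The learned model reproduces the historical skew: dual-citizenship claims
# dominate the review queue even though citizenship says nothing about fraud.
```

The point of the sketch is that no one needs to program discrimination explicitly; a model that faithfully learns from skewed enforcement history will reproduce that skew in the claims it flags for scrutiny.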
It is difficult to identify what led the algorithm to such a biased output, and that is precisely one of the core problems. This blog post argues that the Dutch scandal should serve as a cautionary lesson for agencies that want to use algorithmic enforcement tools, and stresses the need for dedicated governance structures within such agencies to prevent missteps.