Landmark judgment from the Netherlands on digital welfare states and human rights

A landmark judgment in the Netherlands shows how technology used by governments to stop welfare fraud and improve “efficiency” may be leading to unjustified exclusion, discrimination, and stigma.

On February 5, 2020, the District Court of The Hague stopped a government welfare fraud detection system in the Netherlands on human rights grounds. The demise of the system in question, called “System Risk Indication” (SyRI), is a clear victory for the plaintiffs and, especially, for the residents of the targeted poor and marginalized neighborhoods. But this court decision is also likely to resonate far beyond Dutch borders.

In 2019, I set up a new project on digital welfare states and human rights at the Center for Human Rights and Global Justice at NYU Law. The explicit aim was not only to undertake original research on how the use of digital information and new technologies was transforming systems of social protection around the globe, but also to shine a spotlight on these developments and their human rights impact. Although others have pointed out that poorer and marginalized groups are very much at the frontlines of government digitization, this fact is often overlooked. The digital welfare state project thus aims to bridge the gap between the digital rights field and groups focusing on poverty, welfare rights, and economic and social rights.

The project also worked closely with the UN Special Rapporteur on extreme poverty and human rights on a recent report on digital welfare states to the UN General Assembly. The report provided an overview of global developments in this area and concluded that welfare fraud detection is one of the most prominent justifications invoked by governments for rapid digital innovation. Many of these technological developments are driven by neoliberal thinking that emphasizes the goals of reducing government welfare expenditure and enhancing “efficiency”.

SyRI is a textbook example of such a strategy. A 2012 report by the Ministry of Social Affairs and Employment characterized an earlier incarnation of SyRI as contributing to a “reduction of administrative costs”, enhancing the “smart and effective detection of possible welfare fraud”, and having a “preventative effect if it is known that the government is monitoring behind the scenes”.

Indeed, the 2014 SyRI legislation, now declared non-binding by the Dutch court, allows a potentially open-ended number of central and local public authorities to cooperate in concrete SyRI projects aimed at countering fraud in the areas of social benefits, taxation and labor law. This legislation allowed cooperating authorities to share up to 17 broadly defined categories of previously siloed data to predict who is likely to commit fraud. Projects are approved by the Minister who delegates the process of bringing together, pseudonymizing and analyzing the data with an algorithmic risk model to a private foundation, which is quite unfortunately called the Intelligence Agency (Stichting Inlichtingenbureau). After the removal of obvious errors, the Minister reports “high-risk individuals” to the cooperating authorities and includes their names in a “risk notification register”.

This is what we know about SyRI based on the relevant legislation, but the government has remained extremely secretive about its inner workings. Neither the risk model, the relevant risk indicators, nor the exact data used have been made available to the public or the court. The State is also not required to inform “low- and high-risk” individuals whose data is analyzed of that fact, nor is it required to actively inform those “high-risk” individuals whose names have been included in the risk register (para. 6.50-6.53).

After nearly a decade of pilot projects that operated without a specific legal basis, and despite harsh criticism from the data protection authority, legislation formalizing SyRI sailed through Parliament in 2014 without any meaningful debate. When recently asked by a newspaper, several legislators had trouble remembering their involvement at all. This lack of scrutiny may have persuaded the court to undertake a comprehensive review of the SyRI legislation in light of the human right to privacy in Article 8 of the European Convention on Human Rights (ECHR) (para. 6.43).

Somewhat disappointingly, but in accordance with the margin of appreciation permitted under the ECHR, the court accepted the State’s claim that fraud detection is a “pressing social need”. It nevertheless concluded that the SyRI legislation violates the right to privacy because it lacks a “fair balance” between its objectives and the infringement of that right. The court noted the government’s special responsibility for striking this balance because the case involves new technologies, introduced at lightning speed and with potentially very serious consequences for privacy rights.

The lack of transparency surrounding SyRI weighed especially heavily in the court’s analysis because it hampered the court’s ability to perform its judicial role and eroded the individual’s right to challenge the system. This lack of transparency is all the more problematic, according to the court, because of the potentially discriminatory effects of SyRI (para. 6.91). SyRI has been used exclusively in so-called “problem neighborhoods” (para. 6.93). Referring to an amicus brief by the Special Rapporteur, prepared in close cooperation with my project, the court concludes that such targeting may lead to unjustified exclusion, discrimination and stigma (para. 6.91-6.92). In marked contrast, as pointed out in the amicus brief, comparable systems aimed at detecting tax fraud by middle- and higher-income individuals in the Netherlands faced fierce opposition and a rapid demise.

The judgment is a pioneering example of resistance to digital welfare systems on human rights grounds, sets a strong legal precedent, and shows how effective human rights advocacy and litigation can bring concrete change. The original complaint in early 2018 was a relatively mainstream privacy action, led primarily by privacy groups and two prominent individuals who spoke out against the wider risks of government surveillance. But in late 2018 the Netherlands’ biggest union got involved in the litigation and underlined the ever-harsher attitudes towards welfare beneficiaries. Subsequently, the October 2019 amicus brief prepared by the UN Special Rapporteur and the project I direct at NYU highlighted concerns about targeting, surveillance and discrimination in poor neighborhoods, and the implications for the rights to social security and privacy.

The involvement of a big union and the UN changed not only the focus but also the profile of the litigation. A local campaign by the union against SyRI in two poorer neighborhoods of Rotterdam led to a surge in media attention in the summer of 2019, and another spike occurred after the publication of the amicus brief. An interesting dynamic ensued between local and global media and advocacy attention; recent statements by Human Rights Watch and Privacy International, as well as reporting in the New York Times, the Guardian and Wired, are evidence of that change.

SyRI shows that strategic litigation, a savvy campaign, cooperation between digital rights and welfare rights groups, the involvement of a UN mechanism, and interaction between local and international activists and media can make a real difference. This should give hope to those who work on digital government and human rights in other countries, since the Netherlands is not unique in this area. In the wave of advocacy on digital welfare states and human rights that clearly lies ahead, this clear win is a good start.


ORIGINALLY PUBLISHED: March 19, 2020

Christiaan van Veen is the director of the Digital Welfare State and Human Rights Project & Special Advisor on new technologies and human rights to the UN Special Rapporteur on Extreme Poverty and Human Rights. He is based at the Center for Human Rights and Global Justice, New York University School of Law.

Twitter: @cpjvanveen
