Putting human rights law at the core of debates on online political campaigning

To date, it’s been left to the tech companies to set limits on online political campaigning. Governments need to step in and to use human rights law as a framework for regulation.


By: Kate Jones
December 13, 2019




Regulators have clearly struggled to keep up with the pace of change in online political campaigning. As technology has developed, the problems have become more evident: we are seeing disinformation and divisive content; exploitation of algorithms and use of bots, cyborgs and fake accounts; micro-targeting on the basis of detailed personal data and psychological profiling; and the polarisation of debates through harnessing emotions such as anger and disgust and playing to identity politics.

We don’t yet know where the boundaries of legitimacy should lie in this field, but it’s clear that we need to set them. Otherwise the wild west of online political campaigning threatens to damage our democracies irreparably.

We should be turning to international human rights law to set those boundaries. Human rights law includes careful calibrations designed to protect individuals from the abuse of power by authority. Moreover, it provides a normative framework that can enable us to draw clear boundaries for legitimate digital campaigning, based on 70 years of considered reflection on how best to weigh competing considerations. And although human rights law is not formally binding on companies, all businesses have a responsibility to respect it.

Freedom of expression is vital, and it’s clearly not being respected in countries where people are not free to express their political opinions online. It is absolutely unacceptable for governments or others to shut down the internet or to censor points of view. But freedom of expression doesn’t legitimise unbounded hate speech, disinformation spread with the intention of causing harm, or micro-targeting designed to mislead and misinform. We need to consider the full range of affected rights, including:

  • The right to participate in public affairs and to vote.  This right includes the right to engage in public debate. If politicians feel forced to stand down because of the scale of online threats and abuse they are facing, this right is not being respected. States and digital platforms should ensure an environment in which all can feel free to participate and to vote without undue fear.
  • The right to privacy.  This right includes the right to choose not to divulge your personal information, and a right to opt out of trading in, and profiling on the basis of, your personal data. At the moment we have no real alternative but to allow political parties to gather extensive information about each one of us and to use it to ‘micro-target’ us with messages designed to appeal to each of us individually. To make the right to privacy a real rather than a notional right, we need significant changes in how data is collected, shared and used, and we need to be able to check easily what data is held on us. Notional ‘consents’ are not enough.
  • The rights to freedom of thought and opinion.  These rights are critical to delimiting the appropriate boundary between legitimate influence and illegitimate manipulation. Little attention has been paid to these rights until now, but we’ve never before seen attempts to manipulate our political views on the scale now possible online. We need to review how digital platforms operate, to ensure that methods of online political discourse respect personal agency and prevent the use of sophisticated manipulative techniques.
  • The right to freedom of expression.  The rules on the boundaries of permissible content online should be set by governments, and should be consistent with this right. Platforms should be far more transparent in their content regulation policies and decision-making, and should develop frameworks enabling efficient, fair, consistent internal complaints and content monitoring processes with proper regard to international human rights law.

It’s legislators, not companies, who should determine regulation and policy in the public interest. Up to now, digital platforms have led responses themselves, for example by deciding to filter content, to make ads transparent and imprinted with funder details, or to ban political ads altogether. But digital platforms are not well placed to discern the public interest, and are necessarily guided by commercial considerations of their own. Regulators and policy-makers are now beginning to consider these issues. To enable them to do so, digital platforms and political campaigners must be far more transparent about what is happening at the moment.

At the international level, the UN Secretary-General’s High-Level Panel on Digital Cooperation called for urgent examination of how human rights should guide digital technology and cooperation. The Advisory Committee of the United Nations Human Rights Council is now pursuing this topic, and it deserves our full support in this endeavour. As these issues are cross-jurisdictional, governments should work on them together as well as separately. Overall, we need to make human rights real rather than notional in the domain of online political discourse.



Kate Jones is a member of the University of Oxford’s Faculty of Law, and Director of its Diplomatic Studies Programme. She wrote the report Online Disinformation and Political Discourse: Applying a human rights framework, published by Chatham House.
