Tech companies’ inability to control fake news exacerbates violent acts


The technological advances of the last decade have fundamentally changed the world in which we live: we are more connected and have greater access to information than ever before. But from the role of fake news in the outcome of the 2016 US presidential election to the genocide in Myanmar, we now know that new technologies come with grave risks.

The exponential growth of the ICT industry has come at a stark cost in human lives and livelihoods, usually borne by the world's most vulnerable and marginalized populations, calling into question the industry's "growth at all costs" approach to business. The Sri Lankan example offers a vivid and tragic depiction of this phenomenon.

Sri Lanka has a long and complex history of inter-communal violence. Nearly a decade after the end of its 25-year civil war, many root causes of the conflict remain unaddressed. This dynamic is exacerbated by the spread of false information on social networks like Twitter, Facebook, and WhatsApp.

Social media is being weaponized by extremists and inadvertently used as a megaphone for hate speech by everyday people, many of whom seem not to understand the offline effects of their online posts, message forwards, and retweets. In a country like Sri Lanka, where adult literacy is high but digital literacy is extremely low, people often trust, share, and act upon what they see on social media.

For example, earlier this year, Sri Lanka again descended into violence as online rumors spurred deadly attacks by members of the Buddhist majority against Muslims. Extremists used social media to call on people to take up arms against Muslims in response to rumors—also spread through social media—claiming that Muslims were plotting to wipe out the Buddhists.

Over the course of three days in March, mobs burned mosques, Muslim homes, and Muslim-owned shops. One man was burned to death. In response, the government temporarily blocked social media, including Facebook and two other platforms it owns, WhatsApp and Instagram.

Notably, the division between "good guys" and "bad guys" is far from clear-cut in this situation. Yes, there are highly sophisticated extremist groups that have long used social media to spread hate. But the more pressing problem is the cycle of everyday, well-meaning people being provoked into hate-filled and violent acts by what they are shown on social media.

This is compounded by the general lack of timely response by technology corporations. Despite repeated early warnings and flags of violent content, Facebook failed to delete offensive posts or take any sort of ameliorative action. It was only after Facebook’s services were blocked, officials said, that the company took notice. Even then, the company’s initial response was limited to the adoption of a voluntary internal policy whereby it would “downrank” false posts and work with third parties to identify posts for eventual removal.

Understanding the bigger problem

From data mining and spyware to troll bots and artificial intelligence, potentially dangerous uses of technology are growing exponentially, and ICT companies' responses to the resulting harms are late, reactive, and insufficient. As these new technologies spread around the world without adequate corporate responsibility for the negative ramifications of their use, we now face a myriad of unintended consequences from tools as simple as social media apps, including their use to exacerbate, or even create, violent conflict.

In addition to potentially devastating loss of human life, these conflicts also cause economic stagnation and can destabilize entire regions. And when a conflict is traced back to the involvement of a particular company or industry, that company's or industry's brand and reputation suffer among consumers and users. Consequently, it is in the best interest of all affected stakeholders, particularly private industry, to facilitate peace, not enable conflict.

Large multinational corporations, even those that support human rights and corporate social responsibility, don't sufficiently prioritize mitigating these risks or take a conflict-sensitive approach to business.

Today’s technology should not contribute to tomorrow’s conflict

Unfortunately, the kind of conflagration that results when new social media technologies are introduced into complex settings without a conflict-sensitivity analysis is becoming commonplace. Yet the issue is still not being addressed in a systematic way.

Although there are now calls to "#DeleteFacebook" and increasing pressure on large ICT companies to act more ethically, these efforts often overlook users' interests and needs. Existing business and human rights initiatives do not address the specific issue of conflict sensitivity. Moreover, these efforts maintain a top-down perspective, addressing only large multinational corporations; companies and start-ups in emerging markets are largely ignored.

That is why JustPeace Labs (JPL) is working on a multi-stakeholder consortium that brings private industry (both Silicon Valley tech giants and developing-country start-ups and incubators) to the table with government officials, academics, and public-sector activists to ensure that companies are accountable to the communities in which they invest.

While there are a number of initiatives already in place to address human rights practices at ICT companies generally, some fairly robust company-specific CSR and human rights policies at leading ICT companies, and a couple of IGO/NGO initiatives looking at best practices for corporate behavior in high-risk settings, we still lack a collaborative initiative tailored specifically to ICT companies doing business in high-risk settings.

JPL is trying to fill that gap. Heightened ethical standards, and a system for their implementation and enforcement, are uniquely necessary here so that these companies don't inadvertently contribute to a resurgence of violence through the privacy, censorship, security, and hate speech amplification concerns outlined above.

Working collaboratively, the consortium would encourage the adoption of conflict-sensitive approaches to business, including:

  • Understanding conflict dynamics and how business operations might affect conflicts; and
  • Taking proactive steps to avoid reinforcing negative conflict dynamics and to seize opportunities to support peace.

Of course, there are valid critiques of these types of public/private multi-stakeholder initiatives, namely that businesses simply won't volunteer to take proactive measures to operate ethically in the absence of a legal requirement to do so. However, positive examples from other industries show that there are numerous business opportunities for companies willing to be ethical leaders, and these can be leveraged for positive change.

Once they are working together toward an agreed-upon set of ethical principles, public- and private-sector stakeholders can collaboratively devise protocols for addressing the problems highlighted above before entering or operating in high-risk markets. Such protocols would be voluntary but would include a mechanism for monitoring, evaluation, and, ultimately, enforcement. Only then can these actors claim to be facilitating peace rather than enabling conflict.