Online threats, real-world harms: Protecting human rights defenders

Earlier this year, Meta’s Oversight Board (the Board) overturned a decision by Facebook to leave up a post targeting a prominent Peruvian human rights defender (HRD) and member of the Coordinadora Nacional de Derechos Humanos, a national coalition of non-governmental organizations (NGOs) with over 40 years of experience. The image depicted her with blood on her face, suggesting a head wound. The caption accused Peruvian NGOs of “inciting violence” and misusing foreign funds. 

But what Meta originally dismissed as a harmless metaphor for an HRD with “blood on her hands” offered a crucial lesson in why context (the who, what, and how) is key to ensuring that content is interpreted accurately, decisions are informed, and user safety is prioritized.

Details of the case

The post was published by the leader of La Resistencia, an extremist group well known for spreading disinformation and organizing physical and online attacks. Facebook initially decided the post did not violate its Violence and Incitement policy, describing the bloodied face of the HRD as metaphorical and labeling the content “political commentary.”

After the post went live, the Center for Justice and International Law (CEJIL) reported it to Facebook, warning that it posed a veiled threat in a volatile context where online attacks frequently escalate into physical violence. Other HRDs and an international organization that belongs to Meta’s Trusted Partner Program supported the report. When Facebook decided to leave the post up, CEJIL appealed.

The Oversight Board is an independent body tasked with reviewing decisions made across Meta’s services, such as Facebook, Instagram, and Threads, and assessing whether those decisions align with Meta’s policies, values, and human rights commitments. The Board receives millions of appeals but reviews only a small fraction of them, prioritizing cases that raise complex, systemic issues with global implications tied to its strategic priorities. As a result, many serious but localized cases may never be addressed, which makes it all the more crucial that civil society takes action.

If Meta’s internal appeal process doesn’t resolve a complaint about flagged content, people who disagree with the outcome can submit their case to the Board. The Board then reviews the situation and issues a decision that is final and binding on Meta. It can also make broader policy recommendations to improve the Community Standards, the rules governing billions of users, which cover areas such as violence and incitement.

The Board concluded that the post was a veiled threat and should have been removed. Crucially, it emphasized that context is critical: who published the material, what the local dynamics are, and how such imagery functions in environments where threats against HRDs are common and often go unpunished.

This decision is a significant acknowledgment of the dangers HRDs face worldwide and of the role digital platforms play in either protecting or endangering them. The landmark case offers three key lessons: context matters, inaction is not neutral, and independent oversight backed by civil society can drive change.

Understanding the threat: Context is key

Peru is undergoing a profound crisis marked by the erosion of democratic institutions and growing authoritarianism. In this environment, HRDs play a key role in upholding fundamental freedoms and the constitutional order. But their work comes at a cost: intensifying online and physical threats, often from extremist groups emboldened by impunity.

Militant groups like La Resistencia routinely target HRDs with death threats, physical attacks, harassment, doxxing, and smear campaigns, including through the incendiary tactic of terruqueo—the false accusation of terrorism. The individual behind the post at the center of the Board’s decision has a documented record of threatening behavior and has been convicted of defamation against civil society organizations.

However, such attacks are often reinforced rather than punished by state actors. Physical and digital harassment is often only the first step, a danger exacerbated by governments that seek to exercise greater control over civil society. In April 2025, the Peruvian Congress enacted a law allowing the government to arbitrarily sanction organizations that receive international funding. These initiatives have been condemned by UN special rapporteurs and governments for violating international norms. Yet similar measures have gained ground across Latin America, from Paraguay to El Salvador and Venezuela.

Inaction isn’t neutral

Both states and digital platforms have a duty to prevent and respond to threats against HRDs, both online and offline. Under the UN Declaration on Human Rights Defenders and other international human rights standards, they must ensure that HRDs can safely express themselves and organize.

When threats go unchecked—whether by state actors or digital platforms—the result is not neutrality but potential complicity. Allowing threatening content to circulate legitimizes attacks, emboldens aggressors, and silences dissenting voices. It creates a chilling effect, undermining democracy. 

Independent oversight

That is why the Board’s determination is so crucial: it issued guidance to prevent harm to HRDs in future cases and affirmed that digital platforms need to understand political and social dynamics before making enforcement decisions. A common critique questions the Board’s autonomy because Meta funds it, but its independence is not merely a promise; it is structurally and legally protected. The Board is governed through the Oversight Board Trust, an irrevocable trust designed to insulate the Board’s judgment from Meta.

The Board’s decision sets an important precedent. It recommends that Meta update its Community Standards to explicitly prohibit coded threats, in which the method of violence may not be clearly articulated. It also calls for an annual review of how such threats are handled, assessing how accurately Meta identifies potential veiled threats, with specific attention both to threats against HRDs that wrongly remain on the platform and to political speech that is wrongly taken down.

Ultimately, this case underscores how freedom of expression and accountability are deeply interconnected. When oversight is informed by local realities and supported by civil society, it can reshape the rules of engagement online and help protect those most at risk.

The Board’s ruling in this Peruvian case does not resolve all the challenges HRDs face online, but it does highlight a way forward. It shows that a platform’s inaction carries real risks, that context must be central in decisions to remove content, and that independent oversight can make a difference in protecting HRDs against digital threats.