A critical crossroads for civil society in a complex context
The massive expansion of access to large language models (LLMs) since late 2022 is not just another wave. Even though artificial intelligence (AI) had already been around for decades, the public release of ChatGPT by OpenAI triggered an AI arms race between a small number of strategically positioned states and companies. Human rights lawyers and academics took almost a decade to react to the harms that social media platforms inflict on the public, including behavior modification, the dissemination of fake news, and election interference. Despite growing concerns, LLMs have not slowed in shaping the future. While social media platforms took years to reach one million users, new AI tools have done so in a matter of days, and now hours. There is ample evidence that AI has the potential to exacerbate inequalities, and nonprofits face a critical crossroads: ignore AI or find ways to use it. Rather than ignoring this revolution, they must responsibly embrace new technologies, including LLMs, while evaluating and fighting the risks.
In their 2020 report, Jared Sheehan and Nathan Chappell found that 52% of US nonprofit professionals expressed fear of AI. Five years later, AI is everywhere: our phones, cars, and computers all have capabilities that were unknown until only recently. The nonprofit sector is no exception, as a growing number of human rights organizations are already incorporating AI into their work to forecast displacement, track deforestation, monitor online violence against women, and carry out many other initiatives. However, the sector must avoid a digital divide opening up between well-funded organizations that can afford the latest technologies and the rest of the human rights movement. One possible solution is to institute a robust legal framework governing the use of such tools.
The case for trust building
A significant proportion of civil society, particularly in the Global South, operates with limited resources. Historically, nonprofits have lagged behind developments in the private sector, and new technologies, together with a changing political landscape, are only widening this gap. The accelerating integration of new technologies into the private and public sectors presents significant challenges to human rights, justice, and equality. Potential dangers include algorithmic bias, the erosion of privacy, and the amplification of inequality through unchecked technological systems. If the nonprofit sector does not adapt to these changes, it risks losing its agency and leaving vulnerable populations exposed to exclusion or harm.
The international community needs to facilitate this transformation, and nonprofits need to be able to trust these new systems. While legal frameworks for the ethical use of AI can accelerate this change, global AI regulation remains a challenge, as many countries struggle to balance innovation with human rights. In 2024, the European Union moved towards a comprehensive system of regulation with the EU AI Act. However, European countries have not done enough to mandate accessibility of and access to AI platforms, and they have also made exceptions for the use of tools such as emotion recognition and facial recognition in law enforcement, immigration, and national security contexts, all of which threaten human rights protections. The current US administration recently reversed years of regulatory progress with a single executive action, while at the global level, US Vice President JD Vance warned Europeans against continuing with stringent regulation. Meanwhile, Australia faces criticism from the private sector for excessively expanding its AI regulations regarding children’s use of social media. Civil society seems to be left behind in all these decisions.
Regional (lack of) governance in Latin America
NGOs in Latin America face yet more uncertainty, as states in the region are far from a consensus on AI governance. The Organization of American States (OAS) has adopted important principles on neuroscience and human rights and held high-level meetings to move “towards the safe, secure, and trustworthy development and deployment of AI in the Americas.” However, it is unclear whether the United States will follow through on the financial support it promised for these initiatives. In early 2025, the United Nations Economic Commission for Latin America and the Caribbean (ECLAC/CEPAL) continued discussions on AI governance grounded in human rights standards at a summit in Chile. Panelists highlighted the ethical, political, and economic challenges that AI poses for the region, with the lack of tailor-made AI governance for Latin America and the Caribbean at the forefront of the debate. As of the end of 2024, only six countries had adopted a national AI strategy (Argentina, Brazil, Chile, Colombia, Peru, and the Dominican Republic). Ten countries are currently debating new legislation that incorporates the risk-based approach of the EU AI Act, and some of these proposals include ethical and social responsibility frameworks.
While these activities represent a step forward, vulnerable groups and affected populations need to make their voices heard, and civil society organizations must raise awareness of this issue within both the public and private sectors. There is wide acceptance that civil society, vulnerable communities, and nonprofits must step up their participation in AI training, regulation, and usage. We need to build trust and find the right channels of communication between all stakeholders in order to move forward.
Embracing AI in the nonprofit sector
The rapid integration of AI into every facet of the nonprofit sector and the broader human rights world presents both unprecedented opportunities and profound challenges for the protection of human rights, and NGOs cannot avoid navigating this landscape. AI has already affected several UN Sustainable Development Goals (SDGs), such as those concerning the rights to health, education, and a healthy environment. Doctors can predict skin cancer more accurately, students can have a personal tutor on almost any subject, and scientists can better map the melting of glacial lakes. Meanwhile, the Inter-American Court of Human Rights has made an AI digest available to the public. AI can help a weakened civil society ecosystem thrive without abandoning the fight for better regulation and implementation, but to do this, stakeholders need to find a way to balance risk and reward. The challenge is to ensure that nonprofits do not fall behind when governments, private companies, and even organized crime groups are already taking advantage of the possibilities AI presents.
Nonprofits should embrace the positive aspects of AI, such as structuring data, translation, and transcription, to increase their reach, reduce costs, and amplify their impact. AI tools could allow activists to avoid repetitive tasks like rifling through spam mail or filing documents and focus their time instead on the most impactful work before them – for instance, by spending more time in the communities they serve. Similarly, communication teams can use automation to dramatically reduce the time it takes to produce and disseminate informational material to the public, whether by scheduling media posts or sending a weekly newsletter, and laborious tasks like transcription and translation can be sped up to save resources overall. Leaders in the nonprofit sector need to balance the need for efficiency with the mental health and well-being of their teams. On the other side of the equation, funders and international cooperation agencies must also take this challenge seriously. They can promote the secure and responsible use of these systems by their grantees, which in turn will only increase their return on investment. As Heather Ashby points out, “good” and “bad” AI represent two sides of the same coin.