How can AI amplify civic freedoms?
Civil society must improve its knowledge and use of artificial intelligence in order to limit exploitation and protect and promote civic freedoms.
“Artificial intelligence is the future, not only for Russia but for all humankind. Whoever becomes the leader in this sphere will become the ruler of the world.” - Vladimir Putin
The future of democracy is entangled with artificial intelligence (AI). How international, national, and individual actors interact with the development of AI technology and policy will influence whether it restricts or amplifies civic freedom. We hear the promise of “AI for Good,” but at present AI is more often used to undermine civic freedom by restricting the freedom of expression, the freedom of assembly, and the freedom of association.
China, for example, curbs the freedom of expression by using AI to find and block social media posts and websites that support the #MeToo Movement. Similarly, several countries, including Qatar and Kuwait, use the AI-based “Netsweeper” application to scan and block LGBTQIA content.
In terms of freedom of assembly, predictive policing allows police to disrupt peaceful protests before they begin. When demonstrations do occur, facial recognition enables police to identify protesters so that they can be detained and questioned.
AI-powered data analysis is used by governments to process vast amounts of information on civil society organizations and individuals applying to register civil society organizations. Governments can use that information to limit freedom of association by arbitrarily granting or blocking registration based on certain characteristics, like political or religious affiliation. AI can also be used to undermine the operations of civil society organizations by blocking websites of political opposition and human rights groups. In Bahrain, for example, the Bahrain Center for Human Rights’ website was blocked in 2013 after it published a report linking senior government officials with the “Bandargate Scandal” and attempts to unfairly influence parliamentary elections.
AI has the potential to protect civic space
While AI can certainly have a positive role in protecting civic space, civil society’s current role in AI development and use is limited. To address these issues, the International Center for Not-for-Profit Law (ICNL) is building an initiative to ensure that promoting civic freedom is a key consideration in the development of AI technology and policy. The initiative includes: (1) developing international norms; (2) improving domestic policies and laws; (3) enhancing AI fluency; and (4) utilizing AI for good.
Development of International Norms.
Our experience shows that international norms and best practices can guide domestic policymakers in developing AI policies that protect and promote civic freedoms. Many norm development initiatives relating to AI issues are underway, from UN Special Rapporteur David Kaye’s recent report to the Toronto Declaration. The UN and other international multi-stakeholder initiatives, like the Freedom Online Coalition and Internet Governance Forum, should continue to support the development of international norms on digital issues, including AI, so that the technology is not used to restrict civic freedoms. The UN should also enhance the capacity of the Office of the High Commissioner for Human Rights (OHCHR) to provide assistance to Special Rapporteurs on AI and other digital issues. In addition, all initiatives—whether associated with the UN or otherwise—should prioritize engagement with civil society and the public, with a particular focus on marginalized communities.
National and Local Policies and Laws on AI.
Canada, Mexico, India, Finland, Australia, and several other countries have developed or are in the process of developing national strategies on AI. These strategies consider, among other issues, the types of AI projects that will be developed and carried out, and the allocation of resources for AI. Some strategies, like a proposed law in the US, would link AI to national security and defense without adequate consideration of key human rights issues, such as protecting and promoting civic freedom. The neglect of human rights issues in national AI policies can result in policies focused on the rapid development of AI technology in one sector (e.g., the military) while failing to allocate resources to and incentivize the development of AI that bolsters other actors in civic space. Countries should consult with civil society to ensure that national AI policies create an enabling environment for civic life.
At the local level, municipalities like New York City, Santa Clara, and Seattle have adopted ordinances that include civic oversight when AI-based systems are deployed. These ordinances allow the public, through elected officials or government commissions, to know where AI technology is being used and what its effects are, in some instances providing recommendations on its usage. Without public oversight, the transparency normally afforded by access-to-information laws is jeopardized, because the algorithms underpinning AI are proprietary and thus not available for public review. Laws and regulations in other areas, like data protection, public procurement, tort/liability, anti-discrimination, and public hearings, should also be re-examined to account for the emergence of AI.
Enhancing AI fluency.
Our partners at the frontlines of civic space frequently tell us that they lack the knowledge of AI needed to engage with policymakers. Although there are many international conferences for CSOs on AI and digital rights, easily accessible workshops for grassroots leaders will help them understand what AI is, how it works, and how it affects their work. For example, education components that ICNL has led in other areas have allowed grassroots activists to engage constructively on other issues, including cyber-crime and counter-terrorism legislation. Once CSO leaders understand the AI universe, they will be able to participate in the creation of AI policies that promote civic space, and recognize how to utilize AI in their work.
Additionally, creating independent agencies to assist lawmakers will help policymakers make informed decisions on AI policy, its monetary costs, and social impact. Similar to the UK’s Parliamentary Office of Science and Technology (POST) or the independent budget offices in the US, Sweden, and Australia, these new offices would provide independent analyses of AI technologies and policies, and provide recommendations based on real costs and benefits to governments, society, communities and human rights.
Utilizing AI for Good.
CSOs and technologists need to have more conversations about how CSOs can utilize AI for good. Not only can it be used at the operational level to streamline internal processes, it can also be used to provide better, faster services to constituents. For example, a Russian CSO developed a bot that provides real-time legal assistance to protestors. Donors are also already using robo-advisors to construct their philanthropic portfolios, and it will not be long before people can use Siri, Alexa, or other AI-powered “personal assistants” to make donations to their favorite charity.
For these advances to be realized, however, CSOs need access to AI technology, which is expensive. Several companies, including Microsoft, Google, and IBM, are already assisting CSOs in utilizing AI, but the reality is that CSOs have limited resources to invest in emerging technologies. We need additional ways to provide low-cost and pro-bono expertise to CSOs on AI issues, perhaps through government-subsidized knowledge transfers or through initiatives where tech employees work at a CSO for several weeks. Such exchanges would foster organizational literacy in AI, while highlighting unknown pitfalls in adopting emerging technologies.
The race is on
This is a critical time. AI is developing rapidly, but its potential to promote civic freedoms is just beginning to emerge. It is crucial for civil society to engage at the global, national, local and organizational levels to ensure that no one, including marginalized communities and individuals, is left out of this new era. AI is a robust tool that needs to be thoughtfully developed and regulated to limit exploitation, and harnessed to empower civil society.
Zach Lampell leads the global freedom of expression program at the International Center for Not-for-Profit Law (ICNL), which focuses on protecting and promoting freedom of speech online as well as digital rights affecting civil society. Zach is also responsible for developing ICNL’s programming related to artificial intelligence.
Lily Liu works on a range of civic space issues at the International Center for Not-for-Profit Law (ICNL), including protecting and promoting civic freedom in different regions.