Human rights and the smarter city


Human rights obligations do not apply to national governments alone. There are dozens of self-identified “human rights cities” in the United States—from Mountain View, California, to Jackson, Mississippi, to our home base of Boston, Massachusetts. Because of their municipal commitment to human rights, these cities are well-positioned to take the lead in addressing challenges presented by the rapidly expanding interface between humans and technology.

Concerns about the relationship between humans and machines are as old as the wheel and the lever. The widespread adoption of the World Wide Web and the smartphone added digital and social dynamics to humanity’s relationship with machines, raising concerns about how increasingly ubiquitous connectivity might facilitate, govern, and limit our everyday lives. As the role of artificial intelligence expands, the issues raised by this relationship become both more complex and more urgent. The potential harms of artificial intelligence, such as data bias or misinformation, are intertwined with its benefits, and without well-defined, enforceable norms, it is difficult to control the spread and use of these technologies.

Boston: An example of a smart city

Boston is a leader among US cities in its exploration of what a “smart city” might be. It created the nation’s first local government innovation office, was among the first cities to set up a Code for America brigade, and has generally taken the measured risk of learning about technological deployment by doing it. Through its early partnerships, which developed a citywide 311 system, collaborated on autonomous vehicle testing, and facilitated community-led air quality sensor deployment, Boston has emerged as a city known not only for its openness to technology but also for its dedication to ensuring that technologies deployed in the public realm are, in fact, serving the public interest.

Our team of researchers and practitioners, with expertise in tech policy, community-engaged research, and human rights, is working with city officials to map out the steps that Boston can take to make sure it becomes a smart human rights city, where human rights norms inform tech policy. We have identified six core areas of human rights concern that often accompany smart city campaigns: (1) shrinking civic rights, (2) expansion of private power in the public sphere, (3) de-prioritization of justice and inclusivity, (4) lack of sustainability, (5) impingement on rights to privacy, and (6) trade-offs in technology implementation that reinforce existing power dynamics.

We found that smart city projects generally fit into either top-down or bottom-up frameworks. Top-down projects include Singapore’s Tengah eco-town and corporate collaborations like Alphabet’s Quayside Project in Toronto, IBM’s contract to improve Digital On-Ramps, and Verizon’s collaboration with the City of Boston to expand broadband and high-speed internet access. These projects were defined by bold, far-reaching initiatives, often focused on scaling and replicability. Typical issues, however, included flattening the needs of affected populations and misaligned goals among stakeholders, particularly in corporate collaborations with public governance structures. For example, while Verizon’s incentive to work with the City of Boston was the project’s cost-cutting potential, the city’s interest was the potential to increase both access and service quality for a wider swath of Boston’s citizens.

A better approach to ensure human rights in smart cities

We found that bottom-up frameworks often engaged more thoroughly with the potential pitfalls of smart cities. This approach led to more decentralized projects with less conspicuous hierarchies of power, which allowed them to better return agency to affected populations. Because these projects were not imposed from above, we found more place-focused initiatives that upended the expert paradigm and concentrated power with residents and affected populations. These projects were often carried out on a smaller, hyper-local scale and focused on solving specific problems or fitting into specific ecosystems. Examples include town halls in Boston’s Allston neighborhood conducted through Second Life, a virtual world platform, as well as “digital democracy” models of community participation such as Decide Madrid, Taipei’s Alignment Assemblies, and the Participate Melbourne project, along with efforts to reimagine AI tools, shifting from resource-intensive large language models to community-driven small language models.

These examples demonstrate that human rights challenges are not insurmountable. Indeed, recognizing and addressing these challenges can enhance municipal governance and the well-being of city residents. This requires three key components. 

First, local policymakers must be prepared to identify the potential human rights impacts of expanded technology use. Support from the top of the city hierarchy—including a public commitment to human rights—will encourage municipal employees to see this as part of their work. Specialized training to highlight potential issues may also be helpful. In Boston, we are planning several workshops with local actors to begin these conversations and to explore how municipal tech experts and their peers experience the interface between their work and human rights; the first of these workshops saw high engagement from city workers.

Second, local governments must provide human rights–informed processes for public involvement and feedback, as participation of those most affected by a policy is itself an important human right recognized by international human rights bodies. Several cities have pioneered effective ways to ensure public participation in tech policy development. For example, digital democracy platforms like Decidim Barcelona facilitate large-scale participatory decision-making and community policy deliberation. Closer to home, cities such as Rochester, Minnesota, have employed co-design concepts that make affected communities full participants in decision-making regarding initiatives that will affect them. This past year, Boston held its first citywide participatory budgeting process, expanding on a youth initiative that has been in place for several years. These human-centered approaches to local governance are adaptable to the tech sector and, at the same time, honor the human right to participation. Going forward, our team will host conversations in diverse Boston communities to learn more about their experiences, challenges, excitement, and interest when it comes to the city’s use and deployment of AI. Our focus this summer will be on young people’s experience of public spaces in the city.

Third, local governments must ensure that private contractors involved in a tech transition identify and prioritize human rights obligations. Procurement policies that are aligned with human rights parameters are key. The United Nations Guiding Principles on Business and Human Rights provide a starting place for such local policies. As larger cities adopt this approach, tech contractors will adjust their practices, with benefits for smaller municipalities that hold less sway in the marketplace. The City of Boston is currently refining its technology procurement standards, particularly in the fields of data collection and constituent experience. Although these standards are not yet formally organized around a human rights framework, the city hopes to learn from our project how it can shift its approach across numerous departments.

As we explore this issue in our hometown of Boston, we are aware that each community is different. Boston is dense with universities, but it is also one of the more racially segregated cities in the United States. Those factors will need to inform our city’s efforts to become a smarter human rights city. We hope, however, that what we learn through this process will prove valuable as other municipal governments around the country address the interface between humans and machines and step up to ensure that human rights are honored in everyday interactions within local communities.

The authors' work is a collaboration between the NuLawLab and the School of Public Policy and Urban Affairs at Northeastern University.