Our post-COVID future should be as much about welfare as it is about tech

Surveillance thrives in unequal environments, and the pandemic has increased inequality. We need a welfare state for our digital information economy.

[Photo: Automobiles pass the iconic Facebook 'Like' thumbs-up sign at the entrance to Facebook headquarters in Menlo Park, California, 05 May 2020, displayed in support of health care workers battling the COVID-19 pandemic. EFE/EPA/JOHN G. MABANGL]


Even as states and countries start opening up from the pandemic lockdown, it will remain the case that many work, education, and commercial activities are facilitated by platforms. Naomi Klein has warned this is an opportunity for the most powerful actors to further entrench in our societies their logic of data-exploitation and intense inequality. But if we build social safety nets that give individuals bargaining power, and allow them to trust and collaborate with wider public initiatives to keep the virus at bay, much of this dystopian future can be avoided.

First, platforms are crucial today to facilitating our communications and how we go about life, and their business models are expanding: Google and Apple are at the forefront of the contact-tracing applications now being deployed in some European countries; mayors and governors everywhere are in talks with technology companies to track the disease and identify people who might be infected. These companies are being called upon to rebuild our cities’ infrastructure—now in a way that is “pandemic-proof”—and companies of all kinds are already buying software to monitor their remote workers. This shift is accelerating a trend that was already thriving in gig work and warehouses, where companies digitally monitor employee performance, efficiency, and overall on-the-job conduct.

Second, we know that the pandemic is increasing the already staggering levels of inequality at global and national levels. The poor already bear a disproportionate burden of morbidity and mortality; they are more exposed and have worse underlying health conditions associated with poverty, such as malnutrition, psychological stress, high blood pressure, diabetes, and heart disease. Additionally, low-wage workers, including many women and members of racial and ethnic minorities, have been hit especially hard by the job losses caused by the economic slowdown. Experts say it’s the worst devastation since the Great Depression.

The growing appeal of technology as a means of controlling and preventing societal risks, combined with rising inequality, is a dangerous mix: economic and pandemic-related stress will make it easier for companies to push surveillance technologies and practices into the workplace, institutions, and public spaces. People under economic pressure, worried about making ends meet, have fewer options to opt out and will “have to consent”—a contradiction in terms—to surveillance practices in their workplaces and public spaces; not because they are free to do so, but because they are in need.

Being somewhat free of surveillance will become even more of a luxury. Where individuals have access to welfare, unemployment insurance, and so on, not only may the economy recover more quickly—because the demand side will have been taken care of—but individuals and groups will feel less pressure to consent to practices they disagree with. A recent case in France shows that workers’ bargaining power may even help make going back to work safer: after a labor union’s complaint, a judge ruled that Amazon could not deliver non-essential goods until it instituted sufficient safety precautions to protect its workers against the virus. More protected workers and citizens could, perhaps, help contain surveillance too.

The bottom line is that companies can only instill their logic of data-exploitation and intense inequality in our societies if we let them. The bargaining power of different actors in a given society is shaped by laws, rules, and institutions. Today, however, the balance is tilted heavily towards corporate power and against individuals, families, and workers: social security programs that don’t create an alternative if we lose our jobs, laws that tax the middle class heavily and the super-rich far less, making it hard to save for retirement or for hard times, laws that make health insurance an expensive luxury, laws that set incredibly low minimum wages, laws that enable companies to reap the benefits of the data we ourselves produce with them, and so on. In sum, the benefits generated in our digitally enabled capitalism are distributed in a way that, right now, leaves too many far behind.

Better safety nets are important not only to check corporate power but, especially, to enable a positive deployment of technology. With COVID-19, it may be the case that people who have access to better social safety nets are more able and willing to collaborate with various public-health strategies, including technology-enabled ones, like the adoption of voluntary contact-tracing apps. The Apple and Google protocol, for example, now being adopted in Germany and Switzerland, is voluntary, and no personally identifiable information ever leaves an individual’s phone—keeping it safe from both governments and the companies themselves.

In their functioning, these applications resemble a collaborative and voluntary network in which individuals can participate to help their communities and themselves keep track of the virus. People may be more able to self-isolate when they feel they should if they have some form of insurance and can trust, for example, that they won’t lose their jobs. It is unclear whether such a strategy would work in the US, where many workers fear that if they stay at home—even when they are sick or have symptoms—they might lose their jobs.

Technology doesn’t operate in a vacuum. How it is deployed and used depends greatly on the institutional frameworks of our societies. What if the early days of the platform collaborative economy—when it was still more about meeting strangers on Couchsurfing than renting out a bedroom to make ends meet, or about spending some time working on Wikipedia or Linux for the fun of it—were possible because there was a little more abundance (at least for some)? What if part of what makes a sharing economy possible is that we need spare time and mental bandwidth to actually share, because we’re not worrying about the next paycheck or the next health bill?

Avoiding a digitally enabled dystopia requires not only checking tech itself, but checking the institutional backgrounds where tech is deployed. If we don’t want surveillance-dystopian futures, but rather societies where non-exploitative and more cooperative models can thrive, we also need to think about how we position users and workers in the imaginary bargain that happens in all commercial and non-commercial transactions. In the early 20th century, when other mega-corporations started playing crucial roles in society—like railroad and telegraph companies—many were regulated like infrastructure, and rules were enacted that forced them to, for example, carry the content or loads of potential competitors, serve all communities, and pay minimum wages. In the US, it was the New Deal.

It’s a cliché by now—and in many places, it may sound impossible—but the NHS and the welfare state in the UK were created after World War II, and Sweden’s welfare system was created after the Spanish flu devastated the country. Crises can be times of change and opportunity.

Many of us are hoping the worst is behind us, but the truth is that we don’t know. It would be truly unfortunate if we missed this chance to make things really better.

Previous versions of this piece appeared in the Medium Collection of Berkman Klein Center for Internet & Society and ICT4Peace.

ORIGINALLY PUBLISHED: July 22, 2020

Beatriz Botero Arcila is a fellow at the Berkman Klein Center for Internet and Society at Harvard University and doctoral candidate at Harvard Law School.
