
Digital Care and Cruelty: Social Provisioning and Deprivation in the Era of Big Data 

Webinar video | Summary report | Biographies | Resources

Featuring: Virginia Eubanks (University at Albany, SUNY), Sasha Costanza-Chock (MIT), Joanna Redden (Western). Hosted by: Alissa Centivany (Western).

28 January 2021, 7PM

What good can big data, automation and artificial intelligence do for individuals in need of social assistance and what harms can it perpetuate?

The first in our Big Data at the Margins series explores the impacts of artificial intelligence, big data and digital technologies on those in need of social supports and resources in smaller towns and cities across Canada. Increasingly, cash-strapped city governments are outsourcing decisions about who can receive social benefits, such as housing, health care, or other social services, to privately owned software providers. While these outsourced, algorithmically determined decisions may expedite access, they are not transparent and can contain unacknowledged biases. As a result, people in need can find themselves on the wrong end of an opaque decision they are unable to challenge.

This event is funded with the generous assistance of the Faculty of Information & Media Studies at Western University, Western Research, and the Social Sciences and Humanities Research Council of Canada. 


Panel discussion summary report

Panelist Dr. Virginia Eubanks describes the consequences of automated decision-making in public service programs and the advent of the “digital welfare state”. Panelist Dr. Sasha Costanza-Chock discusses the emergence of algorithmic necropolitics during the first wave of the COVID-19 pandemic. Panelist Dr. Joanna Redden addresses the question: what do care and cruelty look like in datafied societies when we observe the project of automating (and eventually cancelling) public service systems? Key themes that emerged include:
The abstract nature of the discourse around technology use and social justice
The importance of clear definitions in the design and use of AI and data systems
The importance of a respectful, nurtured, and mutually productive relationship between academic researchers and community partners
The distinct contexts in which data scientists and system developers operate
The importance of empathy in the development, deployment, and use of AI and data systems

Read the full summary report (PDF).


Biographies

Virginia Eubanks is an Associate Professor of Political Science at the University at Albany, SUNY. She is the author of Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor; Digital Dead End: Fighting for Social Justice in the Information Age; and co-editor, with Alethia Jones, of Ain’t Gonna Let Nobody Turn Me Around: Forty Years of Movement Building with Barbara Smith. Her writing about technology and social justice has appeared in Scientific American, The Nation, Harper’s, and Wired. For two decades, Eubanks has worked in community technology and economic justice movements. She was a founding member of the Our Data Bodies Project and a 2016-2017 Fellow at New America. She lives in Troy, NY.

Sasha Costanza-Chock (pronouns: they/them or she/her) is a scholar, activist, and media-maker, and currently Associate Professor of Civic Media at MIT. They are a Faculty Associate at the Berkman Klein Center for Internet & Society at Harvard University, Faculty Affiliate with the MIT Open Documentary Lab and the MIT Center for Civic Media, and creator of the MIT Codesign Studio (codesign.mit.edu). Their work focuses on social movements, transformative media organizing, and design justice. Sasha’s first book, Out of the Shadows, Into the Streets: Transmedia Organizing and the Immigrant Rights Movement, was published by the MIT Press in 2014; their second, Design Justice: Community-Led Practices to Build the Worlds We Need, followed from the MIT Press in 2020. They are a board member of Allied Media Projects (AMP); AMP convenes the annual Allied Media Conference and cultivates media strategies for a more just, creative and collaborative world (alliedmedia.org).

Joanna Redden is the co-founder of the Data Justice Lab and assistant professor in the Faculty of Information and Media Studies at The University of Western Ontario. Her research focuses on algorithmic governance, particularly how government bodies are making use of changing data systems, and the social justice implications of these changes. This work has involved mapping and assessing government uses of data systems as well as documenting data harms and learning from those trying to redress these harms in policy, community and activist contexts. Joanna’s work has been published in such journals as the Canadian Journal of Communication; Policy Studies; Information, Communication and Society; and Scientific American. She has previously held fellowships at the Infoscape Centre for the Study of Social Media at Toronto Metropolitan University, and the Goldsmiths Leverhulme Media Research Centre.


Resources

Community Reporting of Algorithmic Systems Harms Project (Algorithmic Justice League)

The Algorithmic Justice League’s CRASH (Community Reporting of Algorithmic Systems Harms) Project “brings together key stakeholders for discovery, scoping, and iterative prototyping of tools to enable broader participation in the creation of more accountable, equitable, and less harmful AI systems.”

Data Justice Lab at Cardiff University School of Journalism, Media and Culture

The Data Justice Lab engages in research that “examines the implications of institutional and organizational uses of data and provides critical responses to potential data harms and misuses.”

“Automating Poverty” (The Guardian)

“Automating Poverty” is a collection of seven articles published by The Guardian that look at how governments use AI to target vulnerable populations.

Design Justice: Community-Led Practices to Build the Worlds We Need by Sasha Costanza-Chock

Costanza-Chock’s book Design Justice is “an exploration of how design might be led by marginalized communities, dismantle structural inequality, and advance collective liberation and ecological survival.” The full text of this book is open access via the link above.

“Bots at the Gate: A Human Rights Analysis of Automated Decision-Making in Canada’s Immigration and Refugee System” (PDF) (University of Toronto Citizen Lab)

Bots at the Gate is a report by the University of Toronto’s Citizen Lab that “focuses on the impacts of automated decision-making in Canada’s immigration and refugee system from a human rights perspective”.

“Government data practices as necropolitics and racial arithmetic” by Rashida Richardson (Global Data Justice)

Richardson “interrogates the way in which the collection and use of data on people of colour by US authorities both follow and amplify racial logics of control and oppression. Health data practices invisibilize racial inequity by under-reporting differences, whereas policing data practices create inequity and harm through a forensic focus on communities of colour.” Part of Global Data Justice’s Data and Pandemic Politics series of essays.

“What the ‘digital welfare state’ really means for human rights” by Philip Alston (OpenGlobalRights)

In this article for OpenGlobalRights, Alston contends that “the digitalization of welfare is presented as an altruistic and noble enterprise designed to ensure that citizens benefit from new technologies. In reality, it often leads to reduced budgets, restricted eligibility, and fewer services.”

“A city plagued by homelessness builds AI tool to predict who’s at risk” (CBC News)

The City of London, Ontario, has begun using an artificial intelligence tool, the Chronic Homelessness Artificial Intelligence (CHAI) model, to identify people who may be at risk of becoming chronically homeless.

“Why the digitization of welfare states is a pressing human rights issue” by Christiaan van Veen (Oxford Human Rights Hub)

In this post for the Oxford Human Rights Hub, van Veen “discusses a report presented by United Nations Special Rapporteur on extreme poverty and human rights, Philip Alston, to the UN General Assembly” which van Veen hopes will “help to convince human rights, digital rights, welfare rights advocates and other groups that digitization in the welfare state poses a serious threat to a range of social, economic, civil and political rights, and that it deserves a more forceful response.”

“If AI is going to help us in a crisis, we need a new kind of ethics” with Jess Whittlestone (MIT Technology Review)

An interview with Jess Whittlestone of the Leverhulme Centre for the Future of Intelligence at the University of Cambridge, in which she argues for “a new, faster way of doing AI ethics” and contends that “ethics needs to become simply a part of how AI is made and used, rather than an add-on or afterthought.”

“Tech giants are using this crisis to colonize the welfare system” by João Carlos Magalhães and Nick Couldry (Jacobin)

João Carlos Magalhães and Nick Couldry write in Jacobin that tech giants “like Google and Facebook have used the Global South as a test bed for new and unregulated forms of data collection. Faced with coronavirus, the same mechanisms are being rolled out across the world — with for-profit data collection becoming increasingly central to states’ management of their welfare systems.”

“Ten steps to ethics-based governance of AI in health care” by Satish Gattadahalli (STAT)

Satish Gattadahalli, writing for STAT, contends that the use of AI for physical and mental health care “raises ethical issues that are paramount and fundamental in order to avoid harming patients, creating liability for health care providers, and undermining public trust in these technologies”. Gattadahalli proposes ten actions for developing an ethics-based approach to the use of AI in health care.

“The coming war on the hidden algorithms that trap people in poverty” by Karen Hao (MIT Technology Review)

Karen Hao writes in MIT Technology Review that, as governments increase their use of AI to help administer social programs such as welfare, unemployment, and housing, it is low-income people who are hit hardest by the effects. But she notes that “[a] growing group of lawyers are uncovering, navigating, and fighting the automated systems that deny the poor housing, jobs, and basic services.”
