Resources

On Digital Care and Cruelty

Algorithmic Justice League’s CRASH Project (Community Reporting of Algorithmic System Harms)

Data Justice Lab (out of Cardiff University’s School of Journalism, Media and Culture)

The Guardian UK’s ‘Automating Poverty’ global series

Open-access version of Sasha Costanza-Chock’s 2020 book Design Justice: Community-Led Practices to Build the Worlds We Need (MIT Press)

University of Toronto’s Citizen Lab 2019 report “Bots at the Gate: A Human Rights Analysis of Automated Decision-Making in Canada’s Immigration and Refugee System”

From Global Data Justice’s ‘Data and Pandemic Politics’ series – “Government Data Practices as Necropolitics and Racial Arithmetic” by Rashida Richardson (October 2020)

Philip Alston for OpenGlobalRights: “What the ‘digital welfare state’ really means for human rights” (January 8th, 2020)

The Library of Congress’s Global Legal Monitor – “Netherlands: Court Prohibits Government’s Use of AI Software to Detect Welfare Fraud” (March 13th, 2020)

Via CBC News: “A city plagued by homelessness built AI tool to predict who’s at risk” (describes the Chronic Homelessness Artificial Intelligence project being developed out of London, ON, Canada)

Via Toronto Star: “Ontario’s welfare computer glitches are not the first” (looks at issues with the Government of Ontario’s implementation of the Social Assistance Management System (SAMS), dated January 25th, 2015)

Christiaan van Veen for the Oxford Human Rights Hub: Why the Digitization of Welfare States is a Pressing Human Rights Issue (Dec. 2019)

The UNHRC has the emergence of the ‘digital welfare state’ on its radar (via the UN Special Rapporteur on extreme poverty and human rights)

MIT Tech. Review – Jessica Whittlestone: “If AI is going to help us in a crisis, we need a new kind of ethics” (June 2020)

“Tech Giants are Using This Crisis to Colonize the Welfare System” – João Carlos Magalhães and Nick Couldry for Jacobin (April 27th, 2020)

Via Stat: “Ten steps to ethics-based governance of AI in health care”, by Satish Gattadahalli (Nov. 3rd, 2020)

MIT Tech. Review – Karen Hao: “The coming war on the hidden algorithms that trap people in poverty” (December 2020)

Via ProPublica: “IRS: Sorry, but It’s Just Easier and Cheaper to Audit the Poor” – Paul Kiel (Oct 2nd, 2019)

From Data & Society – Report: “Poverty Lawgorithms: A Poverty Lawyer’s Guide to Fighting Automated Decision-Making Harms on Low-Income Communities” by Michele Gilman (Sept. 2020)

On Digital Policing

“With AI and Criminal Justice, the Devil is in the Data” – Vincent Southerland for the American Civil Liberties Union (April 9th, 2018)

Mass surveillance technology in Canadian policing (Clearview AI and the RCMP), Jamie Duncan and Alex Luscombe, Feb. 2021

Via ProPublica: “Machine Bias: There’s software used across the country to predict future criminals. And it’s biased against blacks” – Julia Angwin, Jeff Larson, Surya Mattu, and Lauren Kirchner (May 23, 2016)

“ICE just signed a contract with facial recognition company Clearview AI” – The Verge, Kim Lyons (Aug. 14th, 2020)

University of Toronto’s Citizen Lab 2020 report “To Surveil and Predict: A Human Rights Analysis of Algorithmic Policing in Canada”

“Use of facial recognition technology by police growing in Canada, as privacy laws lag”, David Burke via CBC News, Feb. 2020

Kashmir Hill for the New York Times: “Wrongfully Accused by an Algorithm” (June 24th, 2020/August 3rd, 2020)

Clare Garvie for Georgetown Law’s Center on Privacy and Technology: Garbage In, Garbage Out: Face Recognition on Flawed Data (May 16, 2019)

Collated information on racial discrimination in face recognition technology, blog post by Alex Najibi for Harvard’s ‘Science in the News’ initiative (Oct. 2020)

MIT Media Lab’s “Gender Shades” Project Page (The Gender Shades project evaluates the accuracy of AI-powered gender classification products)

MIT Tech. Review – Tate Ryan-Mosley: “There is a crisis of face recognition and policing in the US” (August 2020)

Via the Toronto Star: “From facial recognition, to predictive technologies, big data policing is rife with technical, ethical and political landmines”, John Lorinc (Jan 12, 2021)

Kashmir Hill for the New York Times: “Your Face Is Not Your Own” (On Clearview AI and Privacy in America, March 18th, 2021)

“Bias in facial recognition isn’t hard to discover, but it’s hard to get rid of”, interview with Joy Buolamwini, MIT Media Lab – via Marketplace.org (March 18th, 2021)

Arwa Mahdawi for The Guardian: “Facebook doesn’t seem to mind that facial recognition glasses would endanger women” (Feb 27, 2021)

University of Pennsylvania Law School’s The Regulatory Review: Facing Bias in Facial Recognition Technology, B. Rauenzahn, J. Chung, and A. Kaufman (March 2021)

Mutale Nkonde for the Harvard Kennedy School Journal of African American Policy: Automated Anti-Blackness: Facial Recognition in Brooklyn, New York (.pdf, published in the 2019–2020 volume of the journal)

Via CBC News: “Canadians can now opt out of Clearview AI facial recognition, with a catch: Controversial U.S. firm requests picture to identify other images of any user in database” (July 10, 2020)

Jane Bailey, Jacquelyn Burkell and Valerie Steeves for The Conversation: AI technologies – like police facial recognition – discriminate against people of colour (August 24, 2020)

General

AlgorithmWatch’s AI Ethics Guidelines Global Inventory