The Political Roots of AI Policy: Or Why Debiasing Is A Misleading Solution

Abstract

Debiasing, explainability research, and benchmarking are the three active research areas in AI that have become the mainstay of AI safety research. This talk aims to demonstrate why each suffers from fundamental limitations and is unlikely to do more than scratch the surface of the "wicked problems" that abound in policymaking around AI. The talk will begin by explaining why machine learning systems, the dominant tendency in AI research, are social systems in addition to being technical systems: they impact and are impacted by social reality, and their impacts create political problems. The talk will argue that these impacts, arising from fundamental features of sophisticated ML such as stochasticity and built-in opaqueness, go beyond characterisations like "bias", so that both debiasing and explainability research become solutionist attempts and are hence misleading. Finally, the talk will examine the hard limits on benchmarking and why attempts to quantify truth confuse precision with reality, leading to problems like the McNamara fallacy.

About the speaker

Anupam Guha

Anupam Guha is an Assistant Professor at the Ashank Desai Centre for Policy Studies at IIT Bombay who primarily works on AI, AI policy, and AI and labour. He received his PhD in computer science from the University of Maryland in 2017, where he worked on multimodal language-and-vision AI systems, and subsequently worked in industry as an AI researcher on NLP systems from 2017 to 2019. His current work in AI policy is informed by a technical understanding of AI and its relationship with labour and capital. He works to expand the critical lens on AI from the current instrumental and normative frameworks to one informed by an immanent critique of the political economy of techno-social systems.
