The Political Roots of AI Policy: Or Why Debiasing Is A Misleading Solution

Debiasing, explainability research, and benchmarks are three active research areas that have become the mainstay of AI safety research. This talk aims to demonstrate why each suffers from fundamental limitations and is unlikely to do more than scratch the surface of the "wicked problems" that abound in policymaking around AI. The talk will begin by explaining why machine learning systems, the dominant tendency in AI research, are social systems in addition to being technical systems: they impact and are impacted by social reality, and their impacts create political problems. It will then argue that these impacts, arising from fundamental properties of sophisticated ML such as stochasticity and built-in opaqueness, go beyond characterisations like "bias", which makes both debiasing and explainability research solutionist attempts and hence misleading. Finally, the talk will examine the hard limits of benchmarking and why attempting to quantify truth confuses precision for reality, leading to problems such as the McNamara fallacy.
