Policy

What Do We Mean When We Say Open Source AI?


Abstract

The rhetoric of open source AI has risen alongside the conversation around generative AI, despite obvious challenges to the reproducibility and extension of AI systems, such as the cost of compute, the inscrutability of the dominant machine learning models, and the need to preserve data privacy. These constraints stand in the way of using, studying, modifying, and distributing AI models, the central tenets of open source software. What, then, do we mean by open source AI?

In this talk we will critically analyze the different descriptions of openness deployed by and for AI products. Which aspects of openness do they highlight, and why? By recognizing upfront the material challenges to reproducibility, transparency, and extension in machine learning, we will surface strategies that can carry the values of open source software into machine learning development.

About the speaker

Tarunima Prabhakar

Tarunima is the research lead and co-founder of Tattle, an open source, civic tech project building solutions to respond to inaccurate and harmful content in India. Tattle's work includes AI models for analyzing multilingual and multimodal content, as well as moderation of gendered abuse in Indian languages. Her broader research interests lie at the intersection of technology, policy, and global development. As a practitioner, she has worked on ICTD and data-driven development projects with non-profits and tech companies in Asia and the United States.
