Proposed by: Sandhya Baskaran
Have an AI/ML Model? Here is an Assessment to Identify Security Threats for It
Much like cars, AI offerings need to undergo crash tests to ensure their safety and reliability, and they need to be equipped with a digital version of seat belts, brakes, and air bags. However, just as a 16-wheel truck's brakes differ from those of a standard hatchback, AI models may need distinct analyses based on their risk, size, application domain, and other factors. Prior research has attempted to address this need by identifying areas of concern for AI applications and tools to understand the effects of adversarial actors. However, a variety of frameworks currently exist, each with its own areas of focus, classification scheme, and terminology. In this talk, we discuss initial findings from our meta-analysis of 14 AI threat modeling frameworks and offer a simplified roadmap for scaling AI threat analysis. The resulting threat library tracks threats across the different frameworks and provides a single model for performing manual threat analysis.
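To illustrate the cross-framework mapping the abstract describes, below is a minimal sketch of how one entry in such a threat library might be represented. The data structure, helper function, and framework terms are illustrative assumptions, not the actual schema used by ProjectGuardRail.

```python
from dataclasses import dataclass, field

@dataclass
class ThreatEntry:
    """One threat, mapped to the term each surveyed framework uses for it."""
    canonical_name: str                 # unified name used in the library
    description: str                    # short summary of the threat
    framework_mappings: dict[str, str] = field(default_factory=dict)  # framework -> local term

# Example entry; the framework names and terms below are illustrative only.
model_evasion = ThreatEntry(
    canonical_name="Model Evasion",
    description="Adversarial inputs crafted to cause misclassification at inference time.",
    framework_mappings={
        "MITRE ATLAS": "Evade ML Model",
        "OWASP ML Top 10": "ML01: Input Manipulation Attack",
    },
)

def frameworks_covering(entries: list[ThreatEntry], threat: str) -> list[str]:
    """Return the frameworks that name a given canonical threat."""
    return [fw for e in entries if e.canonical_name == threat
            for fw in e.framework_mappings]

print(frameworks_covering([model_evasion], "Model Evasion"))
# ['MITRE ATLAS', 'OWASP ML Top 10']
```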
Source code/Reference: https://github.com/Comcast/ProjectGuardRail
Talk duration: