Can Internationally Accepted Principles Yield Trustworthy AI?

June 4, 2020
Washington D.C.

Can internationally accepted principles yield trustworthy AI? That is the question that The Future Society's Nicolas Miailhe, the Berkman Klein Center's Ryan Budish, the State Department's Adam Murray, and Microsoft's Carolyn Nguyen will discuss at a virtual event organized by George Washington University's DataGovHub and its partners.

Indeed, to limit the harms and seize the opportunities of AI, the 37 members of the OECD (and 7 non-members) approved the Principles on Artificial Intelligence in 2019, the first internationally accepted principles for AI. The principles include recommendations for policymakers and for all stakeholders. The OECD is not the only body working on such principles: the members of the G-7 are also working on mutually agreed principles to govern trustworthy, explainable AI.

This webinar will explore these principles, focusing in particular on the OECD principles, which our speakers helped design. Participants will discuss whether these principles can help all stakeholders. Moreover, we will examine whether such principles should evolve into an internationally shared, rules-based system, given the wide diversity in national capacity to produce and govern AI.

Picture by Casey Horner