This article was published by The Future Society in Advisory Services, The AI Initiative on February 20, 2020

Briefing: A List of AI Governance Levers



Over the past few years, AI governance and policy actors, including companies, regulators, academics, and nonprofits, have put forward many different initiatives to ensure that AI development and applications have appropriate ethical and safety precautions, enabling 'trustworthy' or 'responsible' AI. This briefing lists these AI governance approaches rather than analyzing them, in order to lay out all possible options in a 'toolbox'. In practice, a combination of AI governance approaches is needed, since each brings strengths and weaknesses and is relevant for different objectives and contexts. This briefing is based on a live list that will be continuously updated; please comment with any new suggestions.

Government Policies

  • Regulation (e.g. the EU's GDPR, the California Consumer Privacy Act (CCPA), San Francisco and Boston restrictions or bans on government use of facial recognition technologies, and California's bot disclosure requirement)
  • Funding research grants or projects for safe, explainable, or otherwise ethical AI (e.g. U.S. DARPA’s program for explainable AI)
  • Public procurement requirements that include ethical criteria (e.g. Canada's list of pre-qualified responsible AI suppliers, the AI-RFX Procurement Framework)
  • Sector-specific policy guidance or frameworks (e.g. U.S. Draft Memorandum for the Heads of Departments and Agencies: Guidance for Regulation of Artificial Intelligence Applications)

Industry self-governance

  • Endorsing internal ethical principles (e.g. Microsoft’s AI Principles)
  • Endorsing external ethical principles (e.g. the Asilomar Principles, the Montreal Declaration for Responsible AI, the OECD AI Principles, or the Ethics Guidelines for Trustworthy AI from the European Commission's High-Level Expert Group on AI)
  • Voluntary labeling of AI products and services as complying with such principles (e.g. as proposed in the EU Ethics Guidelines for Trustworthy AI)
  • Internal 'ethics officers' or a Chief Ethics Officer
  • Technology advisory bodies/councils/committees composed of in-house or external advisors (e.g. Microsoft's AETHER Committee)
  • Reference materials for board of directors or C-suite oversight (e.g. the WEF Oversight Toolkit for Boards of Directors)
  • Mandatory in-house ethics courses
  • Ethics built into the design of engineering projects
  • Technical tools to simplify risk mitigation (e.g. InterpretML, the AI Fairness 360 Toolkit, the TensorFlow Privacy library); see the sketch after this list
  • Documentation of AI systems that lists characteristics or benchmark evaluations of ML models, e.g. 'model cards' (Google), 'nutrition labels', or Partnership on AI's ABOUT ML; and documentation of datasets, e.g. 'datasheets' explaining data collection, processing, and composition (a structured-data sketch follows this list)
  • Reference to a checklist (e.g. the Assessment List of the Ethics Guidelines for Trustworthy AI) or models (e.g. the a3i Trust-in-AI framework) for trustworthy AI systems
  • Red-teaming exercises or bounties for safety, bias, and security issues
  • Legal status (e.g. nonprofit, for-profit, 'capped-profit')
  • Publication norms: responsible publication (see Partnership on AI), open publication, or non-disclosure (see MIRI)
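
To make the technical-tools and documentation levers above more concrete, here is a minimal sketch of the kind of disparity check that fairness toolkits such as the AI Fairness 360 Toolkit automate. The metric and the 0.8 rule of thumb are standard illustrations; the function and toy data are written from scratch for this briefing and are not any toolkit's API.

```python
import numpy as np

def disparate_impact(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Ratio of favorable-outcome rates: unprivileged / privileged.

    y_pred: 1 = favorable prediction, 0 = unfavorable.
    group:  1 = privileged group, 0 = unprivileged group.
    A common rule of thumb flags ratios below 0.8 for review.
    """
    rate_unprivileged = y_pred[group == 0].mean()
    rate_privileged = y_pred[group == 1].mean()
    return rate_unprivileged / rate_privileged

# Toy data: the model approves 1 of 3 unprivileged applicants
# and 2 of 3 privileged applicants.
y_pred = np.array([1, 0, 0, 1, 1, 0])
group = np.array([0, 0, 0, 1, 1, 1])
print(f"Disparate impact: {disparate_impact(y_pred, group):.2f}")  # 0.50
```

Along the same lines, a 'model card' can be captured as structured data so that the documentation travels with the model. The schema and field names below are hypothetical, loosely echoing the section headings in Google's model cards proposal rather than reproducing any official format.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class ModelCard:
    # Hypothetical schema for illustration; field names are assumptions.
    model_details: dict
    intended_use: str
    out_of_scope_uses: list
    metrics: dict
    ethical_considerations: str

card = ModelCard(
    model_details={"name": "loan-approval-model", "version": "2.1"},
    intended_use="Pre-screening of consumer loan applications.",
    out_of_scope_uses=["employment decisions", "insurance pricing"],
    metrics={"accuracy": 0.91, "disparate_impact": 0.84},
    ethical_considerations="Reviewed quarterly for demographic disparities.",
)
print(json.dumps(asdict(card), indent=2))
```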

Third parties & standards associations

  • Technical standards or guidelines (e.g. the IEEE P7000™ Standards Series or ISO standards)
  • Third party audits (e.g. independent audits of algorithmic systems)
  • Certifications for staff operationalizing AI systems (mandatory or voluntary) (e.g. IEEE Ethics Certification Program for Autonomous and Intelligent Systems (ECPAIS))
  • Technology review boards (external)
  • Toolkit or resources for advocates to audit and support transparency and accountability (e.g. AI Now Institute Algorithmic Accountability Policy Toolkit)
  • Insurance policies

Public accountability

  • Courses that educate the public about AI so that people can better hold AI developers accountable (e.g. Finland's Elements of AI course)
  • Fear of public backlash, loss of consumer trust, and negative press or social media coverage
  • Positive press and praise for companies that behave ethically
  • Ethics coursework in computer science curricula
  • Processes for employees to blow the whistle on, or decline to work on, unethical projects

Decentralized and distributed technology solutions

  • Incentive mechanisms, including mechanisms based on cryptoeconomics (see the toy sketch below)
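
As a toy illustration of a cryptoeconomic incentive, the sketch below has developers stake a deposit against a compliance commitment, with the stake slashed if a third-party audit finds a violation. All names, amounts, and rules here are hypothetical assumptions, not a description of any deployed system.

```python
class ComplianceStake:
    """Toy registry: stakes are bonds for ethical-use commitments."""

    def __init__(self):
        self.stakes: dict[str, float] = {}

    def deposit(self, developer: str, amount: float) -> None:
        """Developer locks funds against a compliance commitment."""
        self.stakes[developer] = self.stakes.get(developer, 0.0) + amount

    def report_audit(self, developer: str, passed: bool) -> float:
        """Release the stake on a passed audit; slash it on a failure."""
        stake = self.stakes.pop(developer, 0.0)
        if passed:
            return stake  # returned to the developer
        return 0.0        # forfeited (e.g. to an enforcement fund)

registry = ComplianceStake()
registry.deposit("acme-ai", 100.0)
print(registry.report_audit("acme-ai", passed=False))  # 0.0: stake slashed
```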

References

Compiled by Yolanda Lannquist, based on presentations and publications by the author as well as Nicolas Miailhe, Jessica Cussins Newman, Professor Wendell Wallach, Professor Allan Dafoe, Professor Francesca Rossi, the AI Now Institute, and others.

Special acknowledgement for the forthcoming publication from which I’ve gathered several examples: Jessica Cussins Newman, 2020, AI Decision Points: Three Case Studies Explore Efforts to Operationalize AI Principles. UC Berkeley Center for Long-Term Cybersecurity White Paper Series.

For a list of AI ethics principles and guidelines by companies, governments, industry associations, civil society, and more, see AlgorithmWatch's AI Ethics Guidelines Global Inventory.

Brundage et al., 2020, Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims.

Cover photo from William Santo on Unsplash