
Briefing: A List of AI Governance Levers

February 20, 2020

Over the past few years, AI governance and policy actors, including companies, regulators, academics, and nonprofits, have put forward many different initiatives to ensure that AI development and applications carry appropriate ethical and safety precautions, enabling ‘trustworthy’ or ‘responsible’ AI. This briefing lists these AI governance approaches rather than analyzing them, in order to lay out all the possible options in a ‘toolbox’. In practice, a combination of AI governance approaches is needed, since each brings strengths and weaknesses and each is relevant for different objectives and contexts. This briefing is based on a live list that will be continuously updated; please comment with any new suggestions.

Government policies

  • Regulation (e.g. the EU’s GDPR, the California Consumer Privacy Act (CCPA), San Francisco and Boston restrictions or bans on government use of facial recognition technologies, and California’s bot disclosure requirement)
  • Funding research grants or projects for safe, explainable, or otherwise ethical AI (e.g. U.S. DARPA’s program for explainable AI)
  • Public procurement requirements that include ethical criteria (e.g. Canada’s list of pre-qualified responsible AI suppliers, the AI-RFX Procurement Framework)
  • Sector-specific policy guidance or frameworks (e.g. U.S. Draft Memorandum for the Heads of Departments and Agencies: Guidance for Regulation of Artificial Intelligence Applications)

Industry self-governance

  • Endorsing internal ethical principles (e.g. Microsoft’s AI Principles)
  • Endorsing external ethical principles (e.g. the Asilomar Principles, the Montreal Declaration for Responsible AI, the OECD AI Principles, the Ethics Guidelines for Trustworthy AI from the European Commission’s High-Level Expert Group on AI, etc.)
  • Voluntary labeling of AI products and services as complying with the above principles (e.g. the EU Ethics Guidelines for Trustworthy AI)
  • Internal ‘Ethics officers’ or Chief Ethics Officer
  • Technology advisory bodies, councils, or committees composed of in-house or external advisors (e.g. Microsoft’s AETHER Committee)
  • Guidance for oversight by boards of directors or the C-suite (e.g. the WEF Oversight Toolkit for Boards of Directors)
  • Mandatory in-house ethics courses
  • Ethics built into the design of engineering projects
  • Technical tools that simplify risk mitigation (e.g. InterpretML, the AI Fairness 360 Toolkit, the TensorFlow Privacy library); a minimal sketch of one such fairness metric follows this list
  • Documentation about AI systems, covering the characteristics or benchmark evaluations of ML models, e.g. ‘model cards’ (Google), ‘nutrition labels’, and the Partnership on AI’s ABOUT ML, or of datasets, e.g. ‘datasheets’ explaining a dataset’s collection, processing, and composition; a model-card sketch also follows this list
  • Reference to a checklist (e.g. the Assessment List of the Ethics Guidelines for Trustworthy AI) or framework (e.g. the a3i Trust-in-AI framework) for trustworthy AI systems
  • Red teaming exercises or bounties for safety, bias, and security issues
  • Legal status (e.g. nonprofit, for-profit, ‘capped-profit’)
  • Publication norms, whether responsible (see the Partnership on AI), open, or non-disclosure (see MIRI)
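
To make the technical-tools bullet concrete, here is a minimal sketch, in plain Python, of one fairness metric that toolkits such as the AI Fairness 360 Toolkit compute among many others: disparate impact, the ratio of favorable-outcome rates between groups. The data, group labels, and function name below are hypothetical and for illustration only.

```python
# Minimal sketch of the disparate impact fairness metric, one of the
# measures automated by toolkits such as AI Fairness 360. All data and
# names below are hypothetical and for illustration only.

def disparate_impact(outcomes, groups, privileged):
    """Ratio of favorable-outcome rates: unprivileged / privileged.

    A common rule of thumb (the 'four-fifths rule') flags values
    below 0.8 as a potential adverse impact.
    """
    priv = [o for o, g in zip(outcomes, groups) if g == privileged]
    unpriv = [o for o, g in zip(outcomes, groups) if g != privileged]
    rate_priv = sum(priv) / len(priv)
    rate_unpriv = sum(unpriv) / len(unpriv)
    return rate_unpriv / rate_priv

# Hypothetical loan-approval decisions (1 = approved) for two groups.
outcomes = [1, 1, 0, 1, 1, 0, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

print(f"Disparate impact: {disparate_impact(outcomes, groups, 'A'):.2f}")
# Prints 0.25: group B is approved at a quarter of group A's rate.
```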
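
Similarly, the documentation bullet can be illustrated with a minimal, hypothetical model card. The field names loosely follow the sections proposed in Mitchell et al. (2019), “Model Cards for Model Reporting”; all values are invented for illustration.

```python
# Minimal, hypothetical sketch of the fields a 'model card' records,
# loosely following the sections in Mitchell et al. (2019),
# "Model Cards for Model Reporting". All values are illustrative.

model_card = {
    "model_details": {
        "name": "loan-approval-classifier",  # hypothetical model
        "version": "0.1",
        "type": "gradient-boosted trees",
    },
    "intended_use": "Pre-screening of consumer loan applications; "
                    "not for final credit decisions.",
    "training_data": "Hypothetical 2015-2019 application records.",
    "evaluation_metrics": {
        "accuracy": 0.91,          # illustrative numbers
        "disparate_impact": 0.25,  # see the sketch above
    },
    "ethical_considerations": "Approval rates differ across groups; "
                              "human review is required before denial.",
    "caveats": "Not validated outside the original jurisdiction.",
}

for section, content in model_card.items():
    print(f"{section}: {content}")
```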

Third parties & standards associations

  • Technical standards or guidelines (e.g. the IEEE P7000™ Standards Series or ISO standards)
  • Third party audits (e.g. independent audits of algorithmic systems)
  • Mandatory or voluntary certifications for staff operationalizing AI systems (e.g. the IEEE Ethics Certification Program for Autonomous and Intelligent Systems (ECPAIS))
  • Technology review boards (external)
  • Toolkits or resources for advocates to audit AI systems and support transparency and accountability (e.g. the AI Now Institute’s Algorithmic Accountability Policy Toolkit)
  • Insurance policies

Public accountability

  • Courses to educate the public about AI to better hold AI developers accountable (e.g. Finland’s Elements of AI course)
  • Brand reputation, fear of public backlash, loss of consumer trust, and negative press or social media coverage
  • Positive press and praise for companies that act ethically
  • Ethics coursework in computer science courses
  • Processes for employees to blow the whistle on, or decline to work on, unethical projects

Decentralized and distributed technology solutions

  • Incentive mechanisms, including mechanisms based on cryptoeconomics

References

Compiled by Yolanda Lannquist based on presentations and publications by the author, Nicolas Miailhe, Jessica Cussins Newman, Professor Wendell Wallach, Professor Allan Dafoe, Professor Francesca Rossi, the AI Now Institute, among others.

Special acknowledgement for the forthcoming publication from which I’ve gathered several examples: Jessica Cussins Newman, 2020, AI Decision Points: Three Case Studies Explore Efforts to Operationalize AI Principles. UC Berkeley Center for Long-Term Cybersecurity White Paper Series.

For a list of AI ethics principles and guidelines by companies, governments, industry associations, civil society and more see Algorithm Watch’s AI Ethics Guidelines Global Inventory.

Brundage et al., 2020, Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims.

Cover photo from William Santo on Unsplash
