
Main Insight

The scattershot development of independent auditing and certification for AI systems and organizations has jeopardized precisely what these schemes were intended to ensure: confidence and trust. It is time to consider what a unified "end-state" ecosystem might look like and to build support for this vision across key players.

Project launch: Independent Auditing & Certification Ecosystem Design

March 25, 2021

The rapid development of AI is prompting acute ethical and consumer protection issues, from potential bias in algorithmic recruiting decisions to the privacy implications of health monitoring applications. As with other high-risk technologies and industries, such as aerospace, medical devices, and finance, independent auditing and certification (hereafter "IA&C") is considered a critical component of the governance of AI. Qualified third parties capable of assessing the design and operation of AI systems could assure consumers and governments that AI systems and organizations operate in accordance with laws, ethical norms, and AI principles, establishing an "infrastructure of trust" between AI developers, governments, and the general public. Much like physical infrastructure, a robust IA&C ecosystem would play a fundamental and transformative role by instituting mechanisms to hold AI-developing organizations accountable for the social, ethical, and ecological risks associated with their technology.

Towards this end, corporations, consulting agencies, civil society organizations, and independent researchers alike have been studiously developing AI IA&C schemes to standardize safety, fairness, accountability, and transparency (among other aspects) across AI systems and organizations. As AI IA&C dialogues and development have progressed, The Future Society (TFS) has engaged with stakeholders across the ecosystem, working directly with relevant multilateral organizations, standardization bodies, auditing firms, certification firms, non-profits, and technical experts to both steer and gauge the ecosystem's development.

Through these various engagements, it has become evident that the present stagnation of AI IA&C uptake is not due to a lack of theoretical auditing and certification schemes, but rather reflects systemic uncertainty about the direction in which the ecosystem is heading. This scattershot evolution has produced a discordant ecosystem, lacking a unified vision of what a functional and responsible AI IA&C ecosystem (its third-party institutions, key value drivers, and governance) will look like.

Decision-makers in the EU, the US, the OECD, and GPAI are now working to resolve some of this ambiguity through their governance mechanisms (regulations, policies, standards, and guidelines). The forthcoming EU Regulation on Artificial Intelligence, for instance, will clarify key regulatory elements. At this nascent juncture, stakeholders have an opportunity to shape the ecosystem's maturation.

AGENDA

The Future Society intends to launch a multi-phase participatory ecosystem design project involving in-depth landscape mapping, vision-building workshops with stakeholders, and the development of actionable recommendations for stakeholders to co-shape a desirable IA&C ecosystem.

We believe this research would give stakeholders a clearer understanding of the fault lines between IA&C theory and adoption, allowing them to focus on bridging these gaps and eliminating gridlock.

As a whole, a more intentional and collectively designed ecosystem will help preempt the risks associated with scattershot IA&C development and reconcile differences between stakeholders' interests. When successful, it will ensure that IA&C schemes provide safety to consumers and the general public, guidance and credibility to AI-developing organizations, and oversight of auditing and certification standards to governments.

This initiative commenced on March 25th, 2021, with an ideation session held under the Chatham House Rule with industry, government, academic, and civil society leaders.

We are currently seeking interested collaborators and sponsors. These partners will serve as key advisors throughout the project, helping us select stakeholders and steer the scope of the research. They will be featured in our social media posts about the project and in any published external-facing documents (such as the research article and workshop summaries), which we expect to be influential in AI auditing and certification research and policymaking. Beyond these benefits, by taking part in our discussions and drawing on our research, partners will be better positioned to respond to the characteristics and dynamics of the auditing and certification landscape. Depending on the entity and the nature of our partnership, we may also be able to co-develop pilot projects to test and validate our findings.

To express interest, please contact samuel.curtis@thefuturesociety.org.

Related resources

National AI Strategies for Inclusive & Sustainable Development

From 2020 to 2022, TFS supported the development of 3 National AI Strategies in Africa with public sector partners, GIZ, and Smart Africa. These programs build capacity through AI policy, regulatory, and governance frameworks to support countries’ efforts to harness AI responsibly, to achieve national objectives and inclusive and sustainable...

TFS champions Regulatory Sandboxes in the EU AI Act

The Future Society has been advocating for regulatory sandboxes to be implemented via the EU AI Act and designed a three-phase roll-out program.

Stakeholder consultation workshops drive insights for National AI Strategies in Tunisia and Ghana

In May 2022, The Future Society (TFS) co-led stakeholder consultation workshops in Tunis and Accra to support the development of Tunisia’s and Ghana’s national AI strategies. 

TFS refocuses vision, mission, and operational model for deeper impact

The Future Society convened in Lisbon, Portugal to mark the conclusion of an in-depth “strategic refocus” to determine what the organization will focus on for the next 3 years.

2021 Edition of the Athens Roundtable on Artificial Intelligence and the Rule of Law

An international, cross-organizational dialogue on how to uphold the rule of law in the age of AI.

Launch of Global Online Course on Artificial Intelligence and the Rule of Law

The Future Society and UNESCO are pleased to announce open registration for a global course on AI's application and impact on the rule of law.