The scattershot development of independent auditing and certification for AI systems and organizations has jeopardized precisely what these schemes were intended to ensure: confidence and trust. It is time to consider what a unified “end-state” ecosystem might look like, and to build support among key players toward this vision.
Project launch: Independent Auditing & Certification Ecosystem Design
March 25, 2021
The rapid development of AI is prompting acute ethical and consumer protection issues, from potential bias in algorithmic recruiting decisions to the privacy implications of health monitoring applications. As with other high-risk technologies and industries, such as aerospace, medical devices, and finance, independent auditing and certification (hereafter “IA&C”) is considered a critical component of the governance of AI. Qualified third parties capable of assessing the design and operation of AI systems could assure consumers and governments that AI systems and organizations operate in accordance with laws, ethical norms, and AI principles—establishing an “infrastructure of trust” between AI developers, governments, and the general public. Much like physical infrastructure, a robust IA&C ecosystem will serve a fundamental and transformative role by instituting mechanisms to hold AI-developing organizations accountable for the social, ethical, and ecological risks associated with their technology.
Toward this end, corporations, consulting agencies, civil society organizations, and independent researchers alike have been studiously developing AI IA&C schemes to standardize the assessment of safety, fairness, accountability, and transparency (among other aspects) across AI systems and organizations. As AI IA&C dialogues and development have progressed, The Future Society (TFS) has engaged with stakeholders across this landscape, working directly with relevant multilateral organizations, standardization bodies, auditing firms, certification firms, non-profits, and technical experts to both steer and gauge the ecosystem’s development.
Through these engagements, it has become evident that the present stagnation of AI IA&C uptake stems not from a lack of theoretical auditing and certification schemes, but from systemic uncertainty about the direction in which the ecosystem is heading. This scattershot evolution has produced an ecosystem that is altogether discordant, lacking a unified vision of what a functional and responsible AI IA&C ecosystem—its third-party institutions, key value drivers, and governance—will look like.
Decision-makers in the EU, the US, the OECD, and GPAI are now trying to resolve some of this ambiguity through their governance mechanisms (regulations, policies, standards, and guidelines). The forthcoming EU Regulation on Artificial Intelligence, for instance, will elucidate key regulatory elements. At this nascent juncture, stakeholders have an opportunity to shape the ecosystem’s maturation.
The Future Society intends to launch a multi-phase participatory ecosystem design project, which will involve in-depth landscape mapping, vision-building workshops with stakeholders, and developing actionable recommendations for stakeholders to co-shape a desirable IA&C ecosystem.
We believe this research endeavor will provide stakeholders with a clearer understanding of the fault lines between IA&C theory and adoption, allowing them to focus on bridging these gaps and eliminating gridlock.
As a whole, a more intentional and collectively designed ecosystem will serve to preempt the risks associated with haphazard IA&C development. It will reconcile differences between stakeholders’ interests. When successful, it will ensure that IA&C schemes provide safety to consumers and the general public, guidance and credibility to AI-developing organizations, and oversight of auditing and certification standards to governments.
This initiative commenced on March 25, 2021 with an ideation session, held under the Chatham House Rule, with industry, government, academic, and civil society leaders.
At present, we are seeking to identify interested collaborators and sponsors. These partners will serve as key advisors throughout the project, helping us select stakeholders and steer the scope of the research. They will be featured in our social media posts related to the project and in any published external-facing documents (such as the research article and workshop summaries), which we expect to be influential in AI auditing and certification research and policymaking. Beyond these benefits, by taking part in our discussions and drawing on our research, partners will be better positioned to respond to the characteristics and dynamics of the auditing and certification landscape. Depending on the entity and the nature of our partnership, we may also be able to co-develop pilot projects to test and validate our findings.
To express interest, please contact email@example.com.