
Report Launch: Bridging AI’s trust gaps: Aligning policymakers and companies

July 22, 2020

The COVID-19 global pandemic has highlighted complex health, economic, and ethical trade-offs, and inspired unprecedented coordination between policymakers and companies. Several AI-enabled solutions have been deployed to deal with the crisis, ranging from innovative algorithms to accelerate the discovery of vaccines and treatments, to contact tracing applications and temperature scanning drones. However, without an effective governance framework and alignment between companies and policymakers, these technologies have the potential to undermine our fundamental human rights and civil liberties. We therefore need to bridge AI’s trust gaps and build a common understanding among policy and industry leaders about the ethical risks raised by AI applications during the pandemic and beyond. 

Bridging AI’s trust gaps: Aligning policymakers and companies is a new global survey conducted by The Future Society in collaboration with EYQ (EY’s Think Tank) and Thought Leadership Consulting, a Euromoney Institutional Investor company. Over the course of a year, our team analyzed AI ethical guidelines and surveyed leading companies and policymakers about their perceptions of AI applications across key sectors such as healthcare, aviation, law, retail, and financial services.

The report was co-authored by The Future Society team: Nicolas Miailhe, Sacha Alanoca, Adriana Bora, Yolanda Lannquist and Arohi Jain in collaboration with the EYQ (Ernst & Young’s Think Tank) team: Gil Forer, Gautam Jaggi, Prianka Srinivasan and Ben Falk.

We asked policymakers and companies to identify the most important ethical principles when regulating a range of AI applications, and found divergent priorities between them across use cases. Ethical misalignments generally concentrate in four areas: fairness and avoiding bias, innovation, data access, and privacy and data rights.

Top insight 1: Policymakers have a clear vision of AI ethical risks 

We are at a critical transition point in AI governance, as the locus of activity shifts from articulating ethical principles to implementing them. Policymakers have achieved consensus on the ethical principles they intend to prioritize, and there are signs of increasing collaboration across jurisdictions. Policymakers’ alignment is evident in the survey data. On the use of AI for facial recognition check-ins in airports, hotels, and banks, for example, policymakers show a clear ethical vision. They consistently rate “fairness and avoiding bias” and “privacy and data rights” as the two most important principles, and do so by a wide margin (Exhibit 1). Their consensus on the key ethical issues reveals a thorough understanding of the risks raised by AI applications.

Top insight 2: Companies and policymakers have different ethical priorities

On the other hand, companies showed weaker consensus. Their responses were fairly evenly distributed across all ethical principles. Overall, companies seem to focus on the principles prioritized by existing regulations such as GDPR (e.g., privacy and cybersecurity) rather than on emerging issues that will become critical in the age of AI (e.g., explainability, fairness, and non-discrimination) (Exhibit 2).

Exhibit 2: 

Top insight 3: Varying expectations about who will lead AI governance

In addition to ethical misalignments, the survey reveals a large expectation gap when respondents are asked who will lead AI governance efforts. 84% of policymakers expect state actors to fulfill this role, while 38% of companies believe the private sector will lead the rule-making process (Exhibit 3).

Exhibit 3:

Interested in other key insights? You can read our full report here.
