This article was published by The Future Society in Advisory Services, Policy Research, The AI Initiative on July 22, 2020


Report Launch: Bridging AI’s trust gaps: Aligning policymakers and companies

Main insight

Bridging AI’s trust gaps: Aligning policymakers and companies is a new global survey conducted by The Future Society in collaboration with EYQ (EY’s Think Tank). Common understanding between companies and policymakers is key to building a governance framework and protecting citizens’ rights. Our survey reveals ethical gaps between company and policy leaders that need to be addressed for the trustworthy adoption of AI across sectors.


The COVID-19 global pandemic has highlighted complex health, economic, and ethical trade-offs, and inspired unprecedented coordination between policymakers and companies. Several AI-enabled solutions have been deployed to deal with the crisis, ranging from innovative algorithms to accelerate the discovery of vaccines and treatments, to contact tracing applications and temperature scanning drones. However, without an effective governance framework and alignment between companies and policymakers, these technologies have the potential to undermine our fundamental human rights and civil liberties. We therefore need to bridge AI’s trust gaps and build a common understanding among policy and industry leaders about the ethical risks raised by AI applications during the pandemic and beyond. 

Bridging AI’s trust gaps: Aligning policymakers and companies is a new global survey conducted by The Future Society in collaboration with EYQ (EY’s Think Tank) and Thought Leadership Consulting, a Euromoney Institutional Investor company. Over the course of a year, our team analyzed AI ethical guidelines and surveyed leading companies’ and policymakers’ perceptions of AI applications across key sectors such as healthcare, aviation, law, retail, and financial services.

The report was co-authored by The Future Society team: Nicolas Miailhe, Sacha Alanoca, Adriana Bora, Yolanda Lannquist and Arohi Jain in collaboration with the EYQ (Ernst & Young’s Think Tank) team: Gil Forer, Gautam Jaggi, Prianka Srinivasan and Ben Falk.

We asked policymakers and companies to identify the most important ethical principles when regulating a range of AI applications, and found divergent priorities between them across use cases. Ethical misalignments generally concentrate in four areas: fairness and avoiding bias, innovation, data access, and privacy and data rights.

Top insight 1: Policymakers have a clear vision of AI ethical risks 

We are at a critical transition point in AI governance, as the locus of activity shifts from articulating ethical principles to implementing them. Policymakers have achieved consensus on the ethical principles they intend to prioritize, and there are signs of increasing collaboration across jurisdictions. Policymakers’ alignment is evident in the survey data. On the use of AI for facial recognition check-ins in airports, hotels, and banks, for example, policymakers show a clear ethical vision. They consistently rate “fairness and avoiding bias” and “privacy and data rights” as the two most important principles, and do so by a wide margin (Exhibit 1). Their consensus on the key ethical issues reveals a thorough understanding of the risks raised by AI applications.

Top insight 2: Companies and policymakers have different ethical priorities

On the other hand, companies showed weaker consensus. Their responses were fairly evenly distributed across all ethical principles. Overall, companies seem to focus on the principles prioritized by existing regulations such as GDPR (e.g., privacy and cybersecurity) rather than on emerging issues that will become critical in the age of AI (e.g., explainability, fairness, and non-discrimination) (Exhibit 2).

Exhibit 2: 

Top insight 3: Varying expectations about who will lead AI governance

In addition to ethical misalignments, the survey reveals a large expectation gap over who will lead AI governance efforts: 84% of policymakers expect state actors to fulfill this role, while 38% of companies believe the private sector will lead any rule-making process (Exhibit 3).

Exhibit 3:

Interested in other key insights? You can read our full report here.