Report Launch: Bridging AI’s trust gaps: Aligning policymakers and companies

July 22, 2020

The COVID-19 global pandemic has highlighted complex health, economic, and ethical trade-offs, and inspired unprecedented coordination between policymakers and companies. Several AI-enabled solutions have been deployed to deal with the crisis, ranging from innovative algorithms to accelerate the discovery of vaccines and treatments, to contact tracing applications and temperature scanning drones. However, without an effective governance framework and alignment between companies and policymakers, these technologies have the potential to undermine our fundamental human rights and civil liberties. We therefore need to bridge AI’s trust gaps and build a common understanding among policy and industry leaders about the ethical risks raised by AI applications during the pandemic and beyond. 

Bridging AI’s trust gaps: Aligning policymakers and companies is a new global survey conducted by The Future Society in collaboration with EYQ (EY’s Think Tank) and Thought Leadership Consulting, A Euromoney Institutional Investor company. Over the course of a year, our team analyzed AI ethical guidelines and surveyed leading companies and policymakers’ perceptions of AI applications across key sectors such as healthcare, aviation, law, retail, and financial services. 

The report was co-authored by The Future Society team: Nicolas Miailhe, Sacha Alanoca, Adriana Bora, Yolanda Lannquist and Arohi Jain in collaboration with the EYQ (Ernst & Young’s Think Tank) team: Gil Forer, Gautam Jaggi, Prianka Srinivasan and Ben Falk.

We asked policymakers and companies to identify the most important ethical principles when regulating a range of AI applications, and found divergent priorities between them across use cases. Ethical misalignments generally concentrate in four areas: fairness and avoiding bias, innovation, data access, and privacy and data rights.

Top insight 1: Policymakers have a clear vision of AI ethical risks 

We are at a critical transition point in AI governance, as the locus of activity shifts from articulating ethical principles to implementing them. Policymakers have achieved consensus on the ethical principles they intend to prioritize, and there are signs of increasing collaboration across jurisdictions. Policymakers’ alignment is evident in the survey data. On the use of AI for facial recognition check-ins in airports, hotels, and banks, for example, policymakers show a clear ethical vision. They consistently rate “fairness and avoiding bias” and “privacy and data rights” as the two most important principles, and do so by a wide margin (Exhibit 1). Their consensus on the key ethical issues reveals a thorough understanding of the risks raised by AI applications.

Top insight 2: Companies and policymakers have different ethical priorities

On the other hand, companies showed weaker consensus. Their responses were fairly evenly distributed across all ethical principles. Overall, companies seem to focus on the principles prioritized by existing regulations such as GDPR (e.g., privacy and cybersecurity) rather than on emerging issues that will become critical in the age of AI (e.g., explainability, fairness, and non-discrimination) (Exhibit 2).

Exhibit 2: 

Top insight 3: Varying expectations about who will lead AI governance

In addition to ethical misalignments, the survey reveals a large expectation gap over who will lead AI governance efforts: 84% of policymakers expect state actors to fulfill this role, while 38% of companies believe the private sector will lead any rule-making process (Exhibit 3).

Exhibit 3:

Interested in other key insights? You can read our full report here.

Related resources

TFS supports the approval of the EU AI Act

The Future Society urges European Union Member States and the European Parliament to approve the EU AI Act.

Contribution to the G7 Hiroshima AI Process Code of Conduct

TFS contributed to the United States and European Union’s public consultations on the G7 Hiroshima Artificial Intelligence Process Guiding Principles and Code of Conduct.

Our 2023 Highlights

We recount our achievements and impact in 2023, spanning policy research, convenings, workshops, advocacy, community-building, report production, interviews, and expert group, conference, and workshop participation.

The Fifth Edition of The Athens Roundtable to take place in Washington, D.C.

The Fifth Edition of The Athens Roundtable on AI and the Rule of Law will convene in Washington D.C. on November 30th and December 1st, 2023, to examine the risks associated with foundation models and generative AI, and explore governance mechanisms that could serve to reduce these risks.

A Blueprint for the European AI Office

We are releasing a blueprint for the proposed European AI Office, which puts forward design features that would enable the Office to implement and enforce the EU AI Act, with a focus on addressing transnational issues like general-purpose AI.

Model Protocol for Electronically Stored Information (ESI)

The Future Society, with support from IEEE, has developed a model protocol to assist parties seeking to establish the trustworthiness of advanced tools used to review electronically stored information (ESI) in legal discovery.