Main Insight

A TFS article published in Biometric Technology Today analyzes a survey of policymakers' and company representatives' preferences regarding ethical principles across a range of biometric technology use cases.

Article published: Bridging the trust gaps in biometrics

April 27, 2021

Building on The Future Society’s earlier research with EY on Bridging AI’s trust gaps, TFS researchers Samuel Curtis, Delfina Belli, Sacha Alanoca, Adriana Bora, Nicolas Miailhe and Yolanda Lannquist collaborated on a feature article published in Biometric Technology Today, titled “Bridging the trust gaps in biometrics.”

This research analyzes a survey of policymakers' and company representatives' preferences regarding ethical principles (e.g., fairness and avoiding bias, privacy and data rights, and transparency) across a range of biometric technology use cases: behavioral modification, facial recognition check-ins, home virtual voice assistants, human emotion analysis, and law enforcement surveillance.

The survey results suggest that policymakers and companies have markedly different views about the main ethical issues pertaining to biometric applications. Policymakers, for example, demonstrate agreement regarding which ethical principles they see as most relevant when deploying biometric tools. In the case of facial recognition technology, they consistently (and by a wide margin) rank ‘fairness and avoiding bias’ and ‘privacy and data rights’ as the two most important ethical principles (Figure 1a). This reflects widespread concerns about privacy, as well as the potential risk of bias when using facial ID, given its differential performance across races and genders (Buolamwini et al.).

In contrast to policymakers, there seems to be little consensus among companies about the most important ethical concerns across the different types of biometric tools. For example, in relation to both facial recognition check-ins and behavioral modification technologies, these respondents demonstrated only a marginal preference for focusing on the issues of ‘privacy and data rights’ and ‘safety and security’ (Figure 1b).

Aligning on these priorities will itself be challenging because of another widening gap—a ‘trust deficit’ between companies and policymakers. As evidenced by our survey, policymakers don’t trust the intentions of companies: almost six in 10 company respondents (59%) agreed that ‘self-regulation by industry is better than government regulation of AI’ while a similar proportion (63%) of policymakers disagreed (Figure 2). Likewise, while 59% of company respondents agreed that ‘companies invest in ethical AI even if it reduces profits’, only 21% of policymakers shared that view (and 49% disagreed). Meanwhile, 72% of companies felt that ‘companies use AI to benefit consumers and society’, but only 44% of policymakers agreed.

Bridging this gap will not be easy—trust is easy to lose and much more difficult and time-consuming to regain. To address the fears that AI-enabled biometric tools are generating, we argue that firms should take proactive measures demonstrating to consumers and policymakers that they are attuned to these groups’ ethical priorities (for more, see our article in the Emerj newsletter on the role of corporate leaders).

The article also lays out in detail how independent third parties, such as interdisciplinary research centers and expert-led algorithmic auditing firms, could serve in bridging these gaps to provide assurances to both companies and policymakers. These entities could also offer assurance to an apprehensive public, by ensuring compliance with regulations and providing audits and certifications in line with ethical considerations and norms. To this end, The Future Society has recently launched a research endeavor: Independent Auditing & Certification Ecosystem Design. Learn more here.

Related resources

National AI Strategies for Inclusive & Sustainable Development

From 2020 to 2022, TFS supported the development of 3 National AI Strategies in Africa with public sector partners, GIZ, and Smart Africa. These programs build capacity through AI policy, regulatory, and governance frameworks to support countries’ efforts to harness AI responsibly, to achieve national objectives and inclusive and sustainable...

Working group publishes A Manifesto on Enforcing Law in the Age of AI

The Transatlantic Reflection Group on Democracy and the Rule of Law in the Age of "Artificial Intelligence" convened for a second year to draft a manifesto that calls for the effective and legitimate enforcement of laws concerning AI systems.

TFS champions Regulatory Sandboxes in the EU AI Act

The Future Society has been advocating for regulatory sandboxes to be implemented via the EU AI Act and has designed a three-phase rollout program.

Stakeholder consultation workshops drive insights for National AI Strategies in Tunisia and Ghana

In May 2022, The Future Society (TFS) co-led stakeholder consultation workshops in Tunis and Accra to support the development of Tunisia’s and Ghana’s national AI strategies. 

TFS refocuses vision, mission, and operational model for deeper impact

The Future Society convened in Lisbon, Portugal to mark the conclusion of an in-depth “strategic refocus” to determine what the organization will focus on for the next 3 years.

Launch of Global Online Course on Artificial Intelligence and the Rule of Law

The Future Society and UNESCO are pleased to announce open registration for a global course on AI’s application to, and impact on, the rule of law.