Article published: Bridging the trust gaps in biometrics
A TFS article published in Biometric Technology Today analyzes a survey of policymakers’ and company representatives’ preferences for ethical principles across a range of biometric technology use cases.
Building on The Future Society’s earlier research with EY on Bridging AI’s trust gaps, TFS researchers Samuel Curtis, Delfina Belli, Sacha Alanoca, Adriana Bora, Nicolas Miailhe and Yolanda Lannquist collaborated on a feature article published in Biometric Technology Today, titled “Bridging the trust gaps in biometrics.”
This research analyzes a survey of policymakers’ and company representatives’ preferences for ethical principles (e.g., fairness and avoiding bias, privacy and data rights, and transparency) across a range of biometric technology use cases: behavioral modification, facial recognition check-ins, home virtual voice assistants, human emotion analysis, and law enforcement surveillance.
The survey results suggest that policymakers and companies hold markedly different views about the main ethical issues pertaining to biometric applications. Policymakers, for instance, show broad agreement about which ethical principles they see as most relevant when deploying biometric tools. In the case of facial recognition technology, they consistently (and by a wide margin) rank ‘fairness and avoiding bias’ and ‘privacy and data rights’ as the two most important principles (Figure 1a). This reflects widespread concerns about privacy and the risk of bias when using facial ID, given its differential performance across races and genders (Buolamwini et al.).
In contrast to policymakers, there appears to be little consensus among companies about the most important ethical concerns across the different types of biometric tools. For both facial recognition check-ins and behavioral modification technologies, for example, company respondents showed only a marginal preference for ‘privacy and data rights’ and ‘safety and security’ (Figure 1b).
Aligning on these priorities will itself be challenging because of another widening gap—a ‘trust deficit’ between companies and policymakers. As our survey shows, policymakers do not trust the intentions of companies: almost six in 10 company respondents (59%) agreed that ‘self-regulation by industry is better than government regulation of AI’, while a similar proportion (63%) of policymakers disagreed (Figure 2). Likewise, while 59% of company respondents agreed that ‘companies invest in ethical AI even if it reduces profits’, only 21% of policymakers shared that view (and 49% disagreed). Meanwhile, 72% of company respondents felt that ‘companies use AI to benefit consumers and society’, but only 44% of policymakers agreed.
Bridging this gap will not be easy—trust is quick to lose and far more difficult and time-consuming to regain. To address the fears that AI-enabled biometric tools are generating, we argue that firms should take proactive measures to demonstrate to consumers and policymakers that they are not out of touch with those stakeholders’ ethical priorities (for more, see our article in the Emerj newsletter on the role of corporate leaders).
The article also lays out in detail how independent third parties, such as interdisciplinary research centers and expert-led algorithmic auditing firms, could help bridge these gaps by providing assurances to both companies and policymakers. These entities could also reassure an apprehensive public by ensuring compliance with regulations and providing audits and certifications in line with ethical considerations and norms. To this end, The Future Society has recently launched a research endeavor: Independent Auditing & Certification Ecosystem Design. Learn more here.