A French-American dialogue on AI & privacy, trust, and human oversight in health and well-being

November 10, 2020
Virtual

This event was organized on November 9 & 10 by the Georgia Institute of Technology, the Consulate General of France in Atlanta, the Emory University Center for Ethics, the Georgia Tech Ethics, Technology, and Human Interaction Center, the University of Nantes “Droit et Changement Social” (Law and Social Change) Research Center and DataSanté Research Program, SKEMA Business School, and French Tech Raleigh – Research Triangle, with the support of the Atlanta Office of the Cultural Services of the Embassy of France in the United States and the Office for Science and Technology of the Embassy of France in the United States.

Artificial intelligence (AI) models are now capable of collecting and analyzing enormous datasets in ways that challenge fundamental values embraced in Europe and the United States. Holding much promise in terms of increased productivity, efficiency, and quality time, AI programs and algorithms could function as an assistant, a peer, a manager, or even a friend. Indeed, they may prove so revolutionary that no one, whether consumer, citizen, patient, operator, or stakeholder, will remain unaffected.

The power of AI is such that it may call into question what it means to be human and whether people retain freedom of choice, and it may reshape the relationship between humans and technology in society. The ethical issues emerging from AI are complex and quickly evolving; as a result, identifying and implementing appropriate solutions can be difficult.

The approaches taken by France, the European Union, and the United States to address these ethical issues are still being defined, and in 2020 governments are still considering options to maximize the potential of AI and big data while mitigating potential ethical harms.

Nicolas Miailhe, Founder and President of TFS, together with Francesca Rossi, IBM Global Ethics Leader, led three short participatory workshops focused on the specific challenges of human oversight. The conversation was based on a practical case study (machine learning software to predict breast cancer recurrences; details enclosed). Relying on a functional, ethical, operational, and legal analysis, the workshops explored concrete questions of liability, competence, accountability, and explainability. They sought to delineate the ethical and legal boundaries of decision-support systems (e.g., vis-à-vis automated decision-making systems) deployed in the field of healthcare.