
A ‘Principled’ Artificial Intelligence could improve justice

October 3, 2017

“To what extent should societies delegate to machines decisions that affect people?” This question permeates all discussions on the sweeping ascent of artificial intelligence. Sometimes, the answer seems self-evident.

If asked whether entirely autonomous, artificially intelligent judges should ever have the power to send humans to jail, most of us would recoil in horror at the idea. Our answer would be a firm “Never!”

But assume that AI judges, devoid of biases or prejudices, could make substantially more equitable, consistent, and fair systemwide decisions than humans could, nearly eliminating errors and inequities. Would (should?) our answer be different? What about entirely autonomous AI public defenders, capable of achieving demonstrably better results for their clients than their overworked and underpaid human counterparts? And finally, what about AI lawmakers, capable of designing optimal laws to meet key public policy objectives? Should we trust those AI caretakers with our well-being?

Fortunately, these questions, formulated as binary choices, are not imminent. However, it would be a mistake to ignore their early manifestations in the increasingly common “hybrid intelligence” systems in use today, which combine human and artificial intelligence. (It is worth noting that there is no unanimously accepted definition of “artificial intelligence,” or even of “intelligence,” for that matter.)

Early uses of machine-made assessments in legal decision-making already serve as an ominous warning. For example, while courts do not yet rely on AI to assign guilt or innocence, several states use “risk assessment” algorithms in sentencing. In Wisconsin, a judge sentenced a man to six years in prison based in part on such a risk profile. The system’s algorithm was kept secret: the defendant was not allowed to examine how it arrived at its assessment, and no independent, scientifically sound evaluations of its efficacy were presented.

On what basis, then, did the judge choose to rely on it? Did he understand its decision-making pathways or its potential for error and bias, or did he unduly trust it by virtue of its ostensible scientific basis or the glitter of its marketing? More broadly, are judges even competent to assess whether such algorithms are reliable in a particular instance? Even if a judge happened to be a computer scientist, could she assess the algorithm without access to its internal workings or to sound scientific evidence establishing its effectiveness in the real-world application in which it is about to be used? And even if the algorithms are in fact effective, and even if the judge is in fact competent to evaluate their effectiveness, is society as a whole equipped with the information and assurances it needs to place its trust in such machine-based systems?

Read the full article by Nicolas Economou here.

Nicolas Economou is the CEO of the electronic discovery and information retrieval firm H5, a senior advisor to the AI Initiative of the Future Society at Harvard Kennedy School, and an advocate of the application of scientific methods to electronic discovery.

 


Related resources

National AI Strategies for Inclusive & Sustainable Development

From 2020 to 2022, TFS supported the development of 3 National AI Strategies in Africa with public sector partners, GIZ, and Smart Africa. These programs build capacity through AI policy, regulatory, and governance frameworks to support countries’ efforts to harness AI responsibly, to achieve national objectives and inclusive and sustainable...

Working group publishes A Manifesto on Enforcing Law in the Age of AI

The Transatlantic Reflection Group on Democracy and the Rule of Law in the Age of "Artificial Intelligence" convened for a second year to draft a manifesto that calls for the effective and legitimate enforcement of laws concerning AI systems.

TFS champions Regulatory Sandboxes in the EU AI Act

The Future Society has advocated for regulatory sandboxes to be implemented via the EU AI Act and has designed a three-phase rollout program.

TFS refocuses vision, mission, and operational model for deeper impact

The Future Society convened in Lisbon, Portugal, to mark the conclusion of an in-depth “strategic refocus” that determined what the organization will focus on for the next three years.

2021 Edition of the Athens Roundtable on Artificial Intelligence and the Rule of Law

An international, cross-organizational dialogue on how to uphold the rule of law in the age of AI.

Launch of Global Online Course on Artificial Intelligence and the Rule of Law

The Future Society and UNESCO are pleased to announce open registration for a global course on the application and impact of AI on the rule of law.