Main Insight
“To what extent should societies delegate to machines decisions that affect people?” This question permeates all discussions on the sweeping ascent of artificial intelligence. Sometimes, the answer seems self-evident.
A ‘Principled’ Artificial Intelligence could improve justice
October 3, 2017
If asked whether entirely autonomous, artificially intelligent judges should ever have the power to send humans to jail, most of us would recoil in horror at the idea. Our answer would be a firm “Never!”
But assume that AI judges, devoid of biases or prejudices, could make substantially more equitable, consistent and fair systemwide decisions than humans could, nearly eliminating errors and inequities. Would (should?) our answer be different? What about entirely autonomous AI public defenders, capable of achieving demonstrably better results for their clients than their overworked and underpaid human counterparts? And finally, what about AI lawmakers, capable of designing optimal laws to meet key public policy objectives? Should we trust those AI caretakers with our well-being?
Fortunately, these questions, formulated as a binary choice, are not imminent. However, it would be a mistake to ignore their early manifestations in the increasingly common “hybrid intelligence” systems being used today that combine human and artificial intelligence. (It is worth noting that there is no unanimously accepted definition of “artificial intelligence”—or even of “intelligence” for that matter.)
The way machine-made assessments are already being inserted into legal decision-making serves as an ominous warning. For example, while courts do not yet rely on AI to assign guilt or innocence, several states use “risk assessment” algorithms in sentencing. In Wisconsin, a judge sentenced a man to six years in prison based in part on such a risk profile. The system’s algorithm remained secret. The defendant was not allowed to examine how the algorithm arrived at its assessment. No independent, scientifically sound evaluations of its efficacy were presented.
On what basis, then, did the judge choose to rely on it? Did he understand its decision-making pathways or its potential for error or bias, or did he unduly trust it by virtue of its ostensible scientific basis or the glitter of its marketing? More broadly, are judges even competent to assess whether such algorithms are reliable in a particular instance? Even if a judge happened to be a computer scientist, could she assess the algorithm without access to its internal workings or to sound scientific evidence establishing its effectiveness in the real-world application in which it is about to be used? And even if the algorithms are in fact effective, and even if the judge is in fact competent to evaluate their effectiveness, is society as a whole equipped with the information and assurances it needs to place its trust in such machine-based systems?
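To make the opacity problem concrete, here is a deliberately simplified sketch, in Python, of what a proprietary risk-scoring tool can look like from the outside. It makes no claim about how the Wisconsin system actually works (its algorithm is secret); the feature names and weights below are invented purely for illustration. The point is that the court sees only a single number, with no visibility into the inputs or weights that produced it.

```python
# A hypothetical, deliberately simplified risk-scoring tool. This is NOT the
# Wisconsin system, whose algorithm is secret; features and weights are invented.

HIDDEN_WEIGHTS = {
    # Proprietary: neither the court nor the defendant can inspect these values.
    "prior_arrests": 0.8,
    "age_at_first_offense": -0.05,
    "unemployed": 0.3,
}

def risk_score(profile: dict) -> float:
    """Return an opaque 0-10 'risk' number from a defendant profile."""
    raw = sum(HIDDEN_WEIGHTS[k] * profile.get(k, 0) for k in HIDDEN_WEIGHTS)
    return round(max(0.0, min(10.0, raw)), 1)

# All the judge ever sees is the single number printed here, with no account
# of which inputs drove it or how much error or bias lies behind the weights.
print(risk_score({"prior_arrests": 6, "age_at_first_offense": 19, "unemployed": 1}))
```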
Read the full article by Nicolas Economou here.
Nicolas Economou is the CEO of the electronic discovery and information retrieval firm H5, a senior advisor to the AI Initiative of the Future Society at Harvard Kennedy School, and an advocate of the application of scientific methods to electronic discovery.