Artificial Intelligence: Problematic in the Criminal Law Sphere?

The writer of a recent media piece focusing on “the implications of algorithmic risk assessments” in the criminal justice realm is addressing an issue that is immediately troubling and carries real-world ramifications.

What most concerns Jason Tashea, the founder of a company focused on the intersection of technology and criminal law outcomes, is a fundamental point he made in the publication Wired earlier this month.

And that is this: Judges across the country, and certainly in California, increasingly rely on software designed to replicate human thinking to help them make faster decisions on matters such as flight risk and sentencing.

Indeed, the ever-evolving algorithms that are perhaps the cornerstones of what is commonly referred to as “artificial intelligence” (AI) are growing progressively more sophisticated. These networks of computer-based processes and formulas, notes Tashea, are “meant to act like the human brain [and] cannot be transparent because of their very nature.”

That is, no judge, and virtually no other person, can meaningfully explain the “reasoning” that some AI algorithms engage in to reach conclusions that can spell the difference, say, between a criminal defendant receiving a probationary term and a lengthy prison sentence.

The process by which an algorithm works (thinks) “is hidden and always changing,” Tashea points out, which heightens the risk “of limiting a judge’s ability to render a fully informed decision.”

And that is troubling at a deep level, Wired asserts, because a loss of transparency undermines judicial oversight and can result in flatly flawed case outcomes.

Use of algorithms in important justice-related matters in the absence of true oversight regarding how they work “risks eroding the rule of law and diminishing individual rights,” contends Tashea.

Should American courts “blindly allow the march of technology to go forward,” asks Tashea, or should they call a time-out, rethink their reliance on AI tools, and take steps to better ensure that humans, not computer-driven processes, remain in control of those algorithms?

As far as Wired is concerned, there is a clear answer: We need to pull back from too-quick reliance on AI assists and fashion reasoned standards and rules for how state-of-the-art tech tools will be employed in the administration of criminal justice.

And judges must fully understand the reasoning behind the conclusions algorithms reach on important criminal law matters.
