Research by Sino Esthappan of Northwestern University reveals the complexities of using algorithmic tools in the criminal justice system, specifically in pretrial risk assessments. Although these algorithms aim to assist judges by offering data-driven guidance on whether defendants should be released or held in jail, Esthappan found that judges use them selectively, shaped by concerns over reputation and public perception. This selective use suggests that algorithms are not replacing human bias or unpredictability; in some cases, they are legitimizing and concealing it.
These tools work by comparing a defendant’s background against a large database of past cases to calculate a “risk” score, which is meant to guide judges in weighing the risks of release. However, Esthappan’s study suggests that judges often use these scores to justify decisions they have already reached, treating them as an added layer of authority for tough calls, especially in high-stakes cases such as those involving intimate partner violence. Judges also lean on the scores to expedite lower-stakes cases, where the reputational cost of a wrong call is smaller.
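To make the mechanism concrete, here is a minimal, purely illustrative sketch of how such a score might be assembled. The factor names, weights, and 1–10 scale below are hypothetical assumptions for illustration and do not correspond to any deployed risk assessment tool.

```python
# Purely illustrative sketch of a pretrial risk score.
# Factors, weights, and scale are hypothetical, not from any real tool.

from dataclasses import dataclass

@dataclass
class DefendantRecord:
    prior_convictions: int
    prior_failures_to_appear: int
    age: int
    pending_charges: int

def risk_score(record: DefendantRecord) -> int:
    """Map a defendant's history to a 1-10 'risk' score via weighted factors."""
    points = 0
    points += min(record.prior_convictions, 5)            # cap so one factor can't dominate
    points += 2 * min(record.prior_failures_to_appear, 3) # missed court dates weigh heavily
    points += 1 if record.pending_charges > 0 else 0
    points += 1 if record.age < 25 else 0                 # youth treated as a risk proxy
    # Rescale raw points (0-13) onto a 1-10 scale, as many tools report.
    return max(1, round(points * 10 / 13))

score = risk_score(DefendantRecord(prior_convictions=2,
                                   prior_failures_to_appear=1,
                                   age=23, pending_charges=0))
print(score)  # 4 -- the judge typically sees only this summary number
```

The point of the sketch is what it leaves out: the single number surfaces a handful of historical proxies and hides every judgment baked into the weights, which is precisely the discretion Esthappan finds judges re-exercising around the score.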
Despite the promise of reducing human bias, Esthappan’s findings indicate that these tools may mask bias rather than eliminate it, since judges often disregard scores that conflict with their own judgment or with societal expectations. In Esthappan’s view, this reflects deeper structural problems in the pretrial system, where limited time and information force judges to make quick, consequential decisions from incomplete data. As University of Pennsylvania criminology professor Richard Berk notes, while these tools aim to improve the accuracy of judicial decisions, their effectiveness is limited by the underlying data, which can be racially skewed and historically biased.
Esthappan’s research highlights critical concerns surrounding the fairness of pretrial detention, particularly in light of how decisions about a defendant’s freedom are made before any trial occurs. Public defenders and scholars argue that the current system, which often relies heavily on risk assessments and judges’ subjective interpretations, may inherently disadvantage certain groups, particularly those from marginalized backgrounds.
The reliance on algorithms and risk scores, while intended to mitigate human bias, can inadvertently perpetuate existing inequities rooted in the criminal justice system. For instance, these tools are built on historical data that may reflect past discriminatory practices, leading to biased outcomes for individuals who are already disproportionately affected by the system.
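The feedback loop is easy to demonstrate in miniature. The simulation below uses entirely hypothetical data, in which two groups have identical underlying behavior but one was historically subject to heavier enforcement, to show how a naive model trained on those records inherits the skew.

```python
# Minimal simulation (hypothetical data) of how historically skewed outcome
# labels propagate into a risk model's scores for otherwise similar people.
import random

random.seed(0)

def past_case(group: str) -> dict:
    """Generate a historical case. 'Rearrested' labels reflect old enforcement
    patterns, not underlying behavior, which is identical across groups."""
    base_rate = 0.20                                        # same true behavior in both groups
    enforcement_bias = 0.15 if group == "over_policed" else 0.0
    return {"group": group,
            "rearrested": random.random() < base_rate + enforcement_bias}

history = [past_case(g) for g in ("over_policed", "other") for _ in range(10_000)]

# A naive "model" that scores each group by its historical rearrest rate.
def learned_score(group: str) -> float:
    cases = [c for c in history if c["group"] == group]
    return sum(c["rearrested"] for c in cases) / len(cases)

print(learned_score("over_policed"))  # ~0.35: inflated by past enforcement
print(learned_score("other"))         # ~0.20: the true base rate
```

Even this toy model assigns the over-policed group a markedly higher score despite identical behavior, which is the core worry about building risk assessments on arrest and conviction records.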
Esthappan suggests that the issues with pretrial practices are more deeply entrenched than just algorithmic flaws or individual biases. The very framework of pretrial detention—where individuals are judged based on predictions rather than evidence of wrongdoing—poses ethical and procedural challenges. The current model can result in unjust detention for individuals who pose little to no risk, disrupting their lives and impacting their families and communities.
To address these systemic issues, comprehensive reforms may be necessary. This could involve reevaluating the criteria for pretrial detention, improving the quality of data used in risk assessments, enhancing transparency in decision-making processes, and exploring alternatives to detention that prioritize fairness and justice over punitive measures. By focusing on the underlying structures that govern pretrial practices, stakeholders can work toward a more equitable system that respects individuals’ rights and freedoms while maintaining public safety.