The criminal legal system is grappling with how to use artificial intelligence (“AI”) to protect public safety while reducing mass incarceration. One controversial use of AI is predicting an individual’s likelihood of reoffending, known as recidivism. These risk assessment tools draw data-driven patterns from historical records and demographic information, and practitioners consult the resulting scores when deciding critical issues like bail or parole. While AI can offer accurate predictions, enhance efficiency, and help practitioners make well-informed decisions, it still raises serious concerns about bias.
AI designed to predict recidivism relies on large volumes of historical criminal record data. Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) is a well-known case management tool that uses AI to assess an individual’s likelihood of recidivating. COMPAS draws on factors such as the individual’s criminal record, signs of delinquent behavior, and age at first arrest to generate risk scores that judges and parole boards can consider in their decision-making. Several jurisdictions, including New York State, Wisconsin, and California, use COMPAS. New York State has deemed the tool successful and released a report emphasizing its effectiveness and accuracy. Some studies even suggest these tools predict recidivism better than humans do: in a 2020 study, researchers found COMPAS was 89% accurate at predicting recidivism, while humans were only 60% accurate. This suggests AI could help streamline sentencing and parole decisions and allocate rehabilitation resources more effectively.
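To make the mechanics concrete, the sketch below shows how an actuarial tool of this general kind might combine such factors into a risk score. COMPAS’s actual model and weights are proprietary and undisclosed; every feature name, coefficient, and threshold here is invented purely for illustration.

```python
# Hypothetical illustration of an actuarial risk score. This is NOT
# COMPAS's algorithm: the features, weights, and cutoffs are invented.
import math

# Invented weights; a real tool would fit these to historical outcome data.
WEIGHTS = {
    "prior_offenses": 0.30,
    "age_at_first_arrest": -0.05,   # younger first arrest -> higher risk
    "history_of_noncompliance": 0.40,
}
INTERCEPT = -1.5

def risk_score(record: dict) -> float:
    """Map an individual's record to a probability-like score in [0, 1]."""
    z = INTERCEPT + sum(WEIGHTS[k] * record[k] for k in WEIGHTS)
    return 1 / (1 + math.exp(-z))  # logistic function

def risk_band(p: float) -> str:
    """Bucket the score into the low/medium/high bands practitioners see."""
    return "high" if p >= 0.7 else "medium" if p >= 0.4 else "low"

example = {"prior_offenses": 3, "age_at_first_arrest": 19,
           "history_of_noncompliance": 1}
p = risk_score(example)
print(f"score={p:.2f}, band={risk_band(p)}")  # score=0.24, band=low
```

The key point of the sketch is that the score is only as good as the historical data the weights were fit to, which is where the bias concerns discussed next enter.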
Bias in AI, however, has been a longstanding concern in the criminal legal system. In a 2016 study, for example, researchers found grave disparities in how COMPAS scored individuals: Black defendants were more likely than White defendants to be erroneously classified as high-risk, even when the two groups were equally likely to reoffend. AI builds its assessments on historical data, and that data often reflects a history of discriminatory practices within the criminal legal system; the resulting models reproduce that bias. So even if AI is more accurate on average, predictions drawn from biased data only perpetuate those biases. When AI overestimates certain groups’ likelihood of recidivating, it directly threatens individuals’ fates in the criminal legal system: a person could face harsher punishment, be denied bail or parole, and/or receive less access to rehabilitative programs. Over-penalizing some groups on the basis of biased predictions compounds injustice and undermines the very goal of reducing recidivism.
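The disparity the 2016 study documented is a difference in error rates: the share of people who did not reoffend but were nonetheless labeled high-risk (the false positive rate) differed by race. Below is a minimal sketch of that kind of audit; the group labels and records are invented toy data, not the study’s dataset.

```python
# Minimal fairness-audit sketch: compare false positive rates
# (labeled high-risk but did not reoffend) across groups.
from collections import defaultdict

def false_positive_rates(records):
    """records: iterable of (group, predicted_high_risk, reoffended)."""
    fp = defaultdict(int)   # labeled high-risk but did not reoffend
    neg = defaultdict(int)  # everyone who did not reoffend
    for group, predicted_high, reoffended in records:
        if not reoffended:
            neg[group] += 1
            if predicted_high:
                fp[group] += 1
    return {g: round(fp[g] / neg[g], 2) for g in neg}

# Invented toy data: (group, predicted_high_risk, reoffended)
data = [
    ("A", True, False), ("A", True, False), ("A", False, False),
    ("A", True, True),
    ("B", True, False), ("B", False, False), ("B", False, False),
    ("B", True, True),
]
print(false_positive_rates(data))  # {'A': 0.67, 'B': 0.33}
```

An audit like this must be run alongside an overall accuracy check, because a model can be accurate in aggregate while its errors fall disproportionately on one group.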
Despite these risks, AI holds significant potential to enhance rehabilitation efforts. Rehabilitation plans can improve an individual’s “behaviors, skills, mental health, social functioning, and access to education and employment,” and improving these aspects of a person’s life reduces the likelihood of recidivating. States such as Iowa, South Carolina, Tennessee, and Virginia have reported lower recidivism rates due in part to rehabilitative programs; Virginia, for example, saw a 12% recidivism rate among incarcerated individuals who participated in career and academic programs. Criminogenic factors, the characteristics associated with an individual’s likelihood of engaging in criminal behavior, vary from person to person. AI can assess each individual’s criminogenic factors and help create a comprehensive rehabilitation plan tailored to the root causes of that individual’s criminal behavior. Aligning individuals with the resources best suited to those factors can promote more successful long-term outcomes and help reduce recidivism.
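As a simple sketch of the tailoring idea, assessed criminogenic needs could be mapped to the programs that address them. The factor names and program catalog below are hypothetical; a real system would rest on validated needs assessments and a jurisdiction’s actual program offerings.

```python
# Hypothetical mapping of assessed criminogenic needs to programs.
# Factor names and the program catalog are invented for illustration.
NEEDS_TO_PROGRAMS = {
    "substance_abuse": "substance-use treatment",
    "education_deficit": "academic / GED program",
    "unemployment": "vocational and career training",
    "antisocial_peers": "cognitive-behavioral group therapy",
}

def rehabilitation_plan(assessed_needs: list[str]) -> list[str]:
    """Return the programs that address an individual's assessed needs."""
    return [NEEDS_TO_PROGRAMS[n] for n in assessed_needs
            if n in NEEDS_TO_PROGRAMS]

print(rehabilitation_plan(["education_deficit", "unemployment"]))
# ['academic / GED program', 'vocational and career training']
```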
The use of AI to help predict recidivism and aid rehabilitation efforts is indeed promising, but it demands additional care and thought. Despite its potential, AI cannot wholly replace human judgment in determining an individual’s fate. The criminal legal system must take a balanced approach so that AI predictions do not, simply because of biased data, unjustly deny reintegration opportunities to those who could benefit most. AI predictions must be paired with human oversight and undergo regular audits to control for bias and ensure fairness. Continuing to address bias concerns while integrating human oversight is critical to ensuring that AI complements justice rather than compromises it.