
Disincentivizing ‘work smarter, not harder’: A.I.-enhanced crime and sentencing

Ameerah Thomas

Broadly, Artificial Intelligence (A.I.) is an emerging technology that has transcended the bounds of traditional academic and professional fields, and its practical implications for the criminal legal system are constantly evolving. Some of this technology, like generative A.I. chatbots, is currently in use by courts, legal professionals, and public interest organizations to automate routine interactions and simplify complex legal processes. By empowering self-represented litigants and reducing burdens on legal staff, A.I. serves as a means to bridge the access-to-justice gap. In February 2024, then-Deputy Attorney General Lisa Monaco delivered remarks on the “promise and peril of A.I.,” highlighting the technology as a valuable tool for law enforcement. Monaco also discussed how the Department of Justice (the Department or DOJ) has used these tools to optimize its drug investigations, analyze tips from the public, and synthesize large volumes of evidence. In those remarks, Monaco equally emphasized the potential risks of A.I. in amplifying harm and expediting the creation of harmful content, such as child sexual abuse material and deepfakes of trusted sources.


Within the criminal legal context, there are two recent, novel examples in which A.I. was used both in furtherance of a crime and as a means to avoid apprehension by law enforcement. While investigations are still ongoing, A.I. tools like ChatGPT and Meta A.I. were used in separate attacks in New Orleans, Louisiana and Las Vegas, Nevada. In New Orleans, U.S. Army veteran Shamsud-Din Jabbar is alleged to have carried out a violent attack on New Year’s Day, using Meta’s A.I. smart glasses to survey the area and go undetected. The Meta A.I. smart glasses allowed Jabbar to record videos and capture photos hands-free, and would have allowed him to livestream the attack had he so desired. The incident in Las Vegas is potentially one of the first in which an A.I. chatbot was used to help build an explosive device. U.S. Army Special Forces Master Sgt. Matthew Livelsberger is alleged to have injured several people on New Year’s Day after triggering a vehicle explosion, using quantities of explosive fireworks he calculated with information obtained from ChatGPT.


Several reports have warned about the evolving cyber-threat landscape. The FBI cites the misuse of A.I. technology as a major contributor to the sophistication of criminal and illicit activities because the technology is publicly available and customizable. In 2023, the Europol Innovation Lab warned about the implications of large language models and generative A.I. tools like ChatGPT because they can amplify the traditional harms of cybercrime and disinformation campaigns at a much larger scale. In 2024, the FBI warned the public of an increased threat of cybercriminals using the technology and manipulating content to perpetrate scams. The Department of Homeland Security (DHS) also produced a report in the past year on the impacts of A.I. misuse on criminal and illicit activities, in coordination with public-private partners from the intelligence community, the military, and Meta, among others.


In its 2024 annual report to the U.S. Sentencing Commission—about six months prior to the aforementioned attacks—the Department put forth recommendations to ensure that harsher penalties are available in cases where A.I. was used to commit a crime or to avoid apprehension, citing the need to account for technological changes in how individuals may commit crimes. This could have a significant impact on criminal defendants alleged to have engaged in either malicious use, or lawful but dangerous use, of an A.I. tool resulting in a violation of criminal law.


In its appeal to the Commission, the Department seeks to impose harsher penalties for offenses made more dangerous by the malicious use of A.I. and, if needed, to reform existing sentencing enhancements in order to effectuate accountability and deterrence. However, the Department falls short of proposing recommendations that could reach offenses resulting from lawful but dangerous uses of A.I. Perhaps it remains unclear where the line must eventually be drawn for unacceptable uses of A.I., but the DOJ’s recommended Chapter 3 enhancement could be broad enough to cover both scenarios. The Department aims not only to pursue harsher penalties for known offenses made more dangerous by the misuse of A.I., but also to address a knowledge gap in this new landscape. In part, this could mean sending an early message to individuals utilizing the technology while more research is done in the interim to identify the scope and scale of A.I.'s impact on, or risk to, the criminal legal system.


A particularly challenging issue at this intersection of criminal law and technology will be the shifting regulatory landscape surrounding A.I. In an Executive Order (EO) “to realize the promise of A.I. and avoid the risk,” President Biden’s administration sought to implement safeguards around the technology. Although the Order did not provide a meaningful enforcement mechanism, a later policy directive required federal agencies to show that their A.I. tools were not harmful to the public or to stop using them. According to the DHS report, although some technology developers have taken steps to reduce the risk of harm in their A.I. products, researchers have found that A.I. tools are easily manipulated into providing prohibited instructions and information, with safeguards circumvented through work-arounds such as visual “art” prompts. President Trump has since rolled back the Biden Administration’s efforts to establish parameters for the use of A.I., asserting that these policies “act as barriers to American A.I. innovation.”


With no settled framework for regulating A.I. systems as they continue to advance, and with criminal laws across jurisdictions struggling to keep pace, other challenges may arise for criminal defendants and practitioners. Where malicious use is hard to distinguish from dangerous use, we may see one defendant become two, or three, or four, because the range of capabilities, and culpability, of A.I. model-makers and of individuals putting dangerous content online is not yet fully understood. Today, we can disincentivize A.I.-enabled crimes through sentencing enhancements, but a forward-looking view of this ever-evolving technology suggests that criminalization by enhancement and abetment may not be sustainable.

