By Darren Smith, Weekend Contributor
The expanding adoption of Generative Artificial Intelligence (GenAI) is finding utility in nearly every area of human thought and expression. Its speed and increasing sophistication promise not only novel ideas but also the automation of routine processes, which can save people time and resources and free them to focus on the bigger picture and more important duties. There are, however, worrying trends that arise when such technology is relied upon in areas to which it is not yet suited. This article focuses on one of them: police and criminal justice reporting.
A common complaint in the law enforcement world, and for that matter in many other professions, is that an officer spends more time on paperwork than on actual duties. In recent years, one solution has promised a fundamental reduction in the time devoted to report writing through GenAI. The technology has developed to the point that industry now offers law enforcement agencies the ability to take video recordings from body cameras or dash cameras, transcribe what was said by persons in the video (via voice recognition), and assemble the pertinent audio and video information into a draft police report that can be reviewed, corrected if necessary, and signed off by a commissioned law enforcement officer as his official report of the incident.
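To make that workflow concrete, here is a minimal sketch in Python of the flow just described: camera footage goes in, a transcript and a machine-drafted narrative come out, and a human officer reviews and certifies the result. The function names, file names, and data shapes are illustrative assumptions only, not any vendor's actual interface.

```python
# Minimal sketch of the video -> transcript -> draft -> certification flow.
# All names here (transcribe_audio, generate_draft_narrative, the file paths)
# are hypothetical placeholders, not a real product's API.
from dataclasses import dataclass
from typing import Optional

def transcribe_audio(video_path: str) -> str:
    # Stand-in for a speech-to-text pass over the camera's audio track.
    return f"[transcript of recorded speech in {video_path}]"

def generate_draft_narrative(transcript: str) -> str:
    # Stand-in for the GenAI step that turns the transcript (plus video
    # metadata) into report-style prose.
    return f"[draft narrative assembled from: {transcript}]"

@dataclass
class Report:
    incident_id: str
    transcript: str
    narrative: str
    certified_by: Optional[str] = None  # set only after human review

def build_draft(incident_id: str, video_path: str) -> Report:
    transcript = transcribe_audio(video_path)
    return Report(incident_id, transcript, generate_draft_narrative(transcript))

def certify(report: Report, officer: str, corrected_narrative: str) -> Report:
    # The officer, not the model, attests to the final narrative.
    report.narrative = corrected_narrative
    report.certified_by = officer
    return report

draft = build_draft("2024-001234", "bodycam_2024-001234.mp4")
final = certify(draft, "Ofc. Example", draft.narrative + " [officer corrections]")
```

Even in this toy version, the design question raised later in this article is visible: the model produces the narrative, and the human step is reduced to editing and signing.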
The promise of GenAI is that officers will spend less time on overhead and more on patrol and investigative duties, becoming more efficient and less interrupted by paperwork.
To understand the problem, let's look at a brief synopsis of some milestones in police report generation over the past forty-five years.
In most agencies in the United States, line officers made sparse use of computer systems in the early 1980s. These were typically databases storing records of persons, vehicles, and wanted or stolen property, along with communications systems between agencies, and they were rarely used by officers when completing crime reports. Most reports and citations were handwritten or typed, and because of the time required, especially for handwritten forms, the amount of information conveyed was modest while the time cost remained high. As the 1980s drew to a close, agencies began adopting personal computers for officer use, and eventually these systems became more integrated and far more efficient. Not only did the speed of report writing increase, but retrieval and search capability became unmatched. Paper records and microfiche became archaic.
Yet as the ease and efficiency of record keeping greatly improved, paradoxically so did the volume of information created, or required. This is hardly unique to law enforcement; the same has occurred with patient charting in healthcare and in other fields. Efficiency invited the creation of more data, and more data then came to be expected.
Soon it was no longer necessary for a line officer to drive to an office to complete reports, as in-car systems became standard practice. Paper citation forms handed to violators gave way to citations entered electronically, filed automatically with the department and the courts, and printed off as a copy for the violator. Nearly all of an officer's paperwork could be completed electronically and in the field.
The next advance began with what are commonly referred to as DashCams and BodyCams: electronic audio and video recordings of officers' encounters with the world and with individuals. Both have proven very useful in accurately capturing information for use by the criminal justice system. The cameras are greatly useful in documenting events witnessed by the officer and are in most respects superior to the not-always-reliable memories of those involved. In some ways they have reduced the amount of paperwork, since the officer wearing the camera can simply provide a written synopsis in the report and reference the attached video as evidence. Where that is not fully permitted, watching the video while composing the written report serves to prompt the officer to write it completely and accurately as depicted in the footage. The GenAI service takes this to a higher level of efficiency by generating most of the report in draft form, whereupon the attesting officer makes any corrections and fills in any gaps or external details. Such AI-generated reports mimic the conventional “style” of a crime report, and from a workflow perspective the officer becomes in some ways more an editor than an author.
On a side note, when police camera video came to the public's attention, the compliance burden of freedom of information and public disclosure law grew far beyond that of ordinary written reports, which could be redacted and disseminated relatively easily when appropriate. Departments are now tasked with acting as video editors, redacting non-disclosable information such as faces, identities, spoken words, addresses, and other private details. The storage requirement for hours of video from sometimes hundreds of officers has become costly and added a further burden.
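For a sense of what that video-editing burden looks like in practice, below is a rough sketch of automated face blurring using OpenCV's stock Haar cascade detector. It is only an illustration with assumed file names; real redaction workflows also involve person tracking, audio scrubbing, license plates and documents, and manual review.

```python
# Rough sketch: blur detected faces in a clip before public release.
# File names are hypothetical; a stock Haar cascade's detection quality is
# far below what a production redaction tool would require.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture("bodycam_clip.mp4")
fourcc = cv2.VideoWriter_fourcc(*"mp4v")
writer = None

while True:
    ok, frame = cap.read()
    if not ok:
        break
    if writer is None:
        h, w = frame.shape[:2]
        fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
        writer = cv2.VideoWriter("bodycam_clip_redacted.mp4", fourcc, fps, (w, h))
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, fw, fh) in cascade.detectMultiScale(gray, 1.1, 5):
        # Replace each detected face region with a heavy blur.
        frame[y:y+fh, x:x+fw] = cv2.GaussianBlur(frame[y:y+fh, x:x+fw], (51, 51), 0)
    writer.write(frame)

cap.release()
if writer is not None:
    writer.release()
```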
Any new technology carries concerns that may offset some of its benefits. Chief among them here: at what point does the AI become the primary author and the officer a rubber-stamp approver? I present some open questions on the technology:
Will a low number of GenAI providers of police reports lead to a near monopoly of companies having access and control of information of most law enforcement agencies?
Will reliance on GenAI lead to an atrophy of ordinary report-writing skill among employees, and if accuracy improves, will they become prone to overlook the occasional but highly consequential error?
Law enforcement agencies have strict controls over the dissemination of records for current or in-progress investigations and intelligence. Is GenAI use a vector by which outside actors could infiltrate police and government agencies? Could someone who hacks into a GenAI provider learn of investigations, forewarn wanted persons of an upcoming arrest, or monitor the agency through the AI input it submits?
Who controls the data given to the GenAI provider and is it subject to proper oversight? Will there be a temptation to sell the information to third parties?
How can bias be controlled in the GenAI response? The output is only as good as the input or the algorithm. Could the AI develop a bias as a result of incorporating the data it generates?
Are revisions and updates to each report considered work product and/or are they subject to discovery?
If the GenAI report is mostly completed by something other than the officer, how reliable is the officer's testimony as to what he believed was happening, given that his own mind did not create most of the report?
Is the present implementation of GenAI reports efficient enough to offset the time the signing officer must spend on necessary corrections and edits?
If the procedure is to plug the video/audio into the GenAI application, receive the generated draft, then make corrections and certify under penalty of perjury that the report is a true and accurate declaration of facts…are officers willing to risk a false-swearing or perjury charge if computer-generated information was inaccurate and overlooked?
Will GenAI-created reports be considered expert analysis, and will the output be challenged in the courts?
Are police administrators sufficiently versed in artificial intelligence technology to understand how to manage or configure the software?
Ethically, have we fully considered what we are doing with GenAI with regard to justice? The civilians and others who are the subjects of GenAI reports have the most to lose, as their lives can be changed markedly for the better or for the worse. Have we become so lazy and indifferent to them that we cannot be bothered to write a report completely ourselves?
What would become of the future of criminal justice if artificial intelligence is incorporated without restraint or consideration of the consequences? I can foresee a few areas of promising use, such as generating a picture of an unidentified assailant from a witness's description, analyzing trace evidence, finding trends in data, and the like. But I have reservations about what will be made of the technology once it can be inexpensively replicated and used in place of a commissioned law enforcement officer.
An example would be incorporating AI into actually enforcing the law: a camera films a speeding vehicle, a stoplight violation, or eventually a strong-arm robbery; the AI then identifies the persons involved, generates the report, and makes a charging decision with less and less human involvement. Are we to give AI systems standing to enforce the law, and will we even question it once we have grown accustomed to its reliability and ubiquity? It may sound like “future shock,” but we should consider how far we are willing to accept the convenience of low-cost surrogates for our responsibility.
By Darren Smith
The views expressed in this posting are the author’s alone and not those of the blog, the host, or other weekend bloggers. As an open forum, weekend bloggers post independently without pre-approval or review. Content and any displays or art are solely their decision and responsibility.