What happens when police use AI to draft their incident reports?

(We’re not quite here yet, but it’s a little disconcerting how I keep finding parallels between RoboCop and reality. Also, that Kurtwood Smith was somehow less threatening here than in “That ’70s Show.”)

THE LEAD: Some police departments are experimenting with AI, having chatbots write the first drafts of their incident reports based on what the officers’ body cameras capture.

“They become police officers because they want to do police work, and spending half their day doing data entry is just a tedious part of the job that they hate,” said Axon’s founder and CEO Rick Smith, describing the new AI product — called Draft One — as having the “most positive reaction” of any product the company has introduced.

“Now, there’s certainly concerns,” Smith added. In particular, he said district attorneys prosecuting a criminal case want to be sure that police officers — not solely an AI chatbot — are responsible for authoring their reports because they may have to testify in court about what they witnessed.

“They never want to get an officer on the stand who says, well, ‘The AI wrote that, I didn’t,’” Smith said.

The pilot programs have found that reports that once took 30 to 45 minutes to draft can now be produced in a matter of seconds. Hedging their bets on how heavily they should lean on the technology, some departments are limiting the AI to misdemeanors and petty crimes.

Aside from the idea that the computer might be doing the officers’ “homework” for them, legal scholars and civil-rights activists are concerned about the impact this could have on society as a whole:

“I am concerned that automation and the ease of the technology would cause police officers to be sort of less careful with their writing,” said Andrew Ferguson, a law professor at American University who is working on what is expected to be the first law review article on the emerging technology.

Ferguson said a police report is important in determining whether an officer’s suspicion “justifies someone’s loss of liberty.” It’s sometimes the only testimony a judge sees, especially for misdemeanor crimes.

 

DOCTOR OF PAPER HOT TAKE: Accuracy and legality top my list of concerns here. At one point in the article, an officer notes that the AI included a detail he didn’t remember hearing. That could be the AI capturing something real, or it could be the AI fabricating something that the officer then adopted as true.

Experts and users have found that AI can engage in “hallucinations,” presenting something untrue as fact. It’s kind of funny when AI tells us the downfall of Western Civilization began when the coach refused to put Uncle Rico in at quarterback in the ’82 finals. It’s a lot less funny when it tells a court of law that you threatened a cop who pulled you over for speeding.

The officers interviewed for the story mention that they’ve become more verbal in their interactions with the public, which allows the body camera to capture that information and thus improve the AI report.

In cases like that, it feels more like transcription than creation, which seems safer, but who knows. What would be beneficial for reporters here is getting both the AI-drafted report and the officer’s body-cam footage to do a side-by-side comparison.

Legally speaking, I would be curious to know what level of access journalists could get to the AI draft of a report as well as the final version. Police reports and court documents are public records, but internal memos and drafts of public documents can sometimes be considered off limits. In addition, the draft technically isn’t created by a public official; it’s the output of a computer program. Who can access what, when, where and how is an interesting question here.

It will also be interesting to see how well these reports hold up in court compared with traditional reports, witness testimony and so forth. As with anything new, there will be a learning curve and development issues, and for a while the older approach will probably still be better.

When automobiles first appeared, they could barely break into double digits in miles per hour. Meanwhile, horses could literally and figuratively run circles around them. As time went on, cars clearly became the faster mode of transportation, but it took a while. It will be interesting to see how many lawyers start asking questions like, “So, Officer Smith, did you write the initial report yourself, or did you rely on artificial intelligence to do it for you?” and then trotting out all the stupid things AI has written to undermine its credibility.

The folks in the article who distrust the AI process have raised concerns about racial targeting and other forms of bias against people the legal system has traditionally mistreated. We have seen AI generate those kinds of biased outputs before, and it is a valid concern. I would go a step further: I’d be concerned in general for anyone accused of criminal activity while the police are working the kinks out of this system. The article notes that the crimes involved are generally “low level,” but that doesn’t make me feel much better if I’m the one on the other end of an AI disaster.
