
One of the cool things about doing “mechanic stuff” is that parts manufacturers and tool companies would give you stickers for your toolbox when you purchased their wares. Enjoy an AI version of an AI toolbox. (via Pixlr)
After I posted about the need to smack around students who used AI to do their work, a colleague asked a really important question:
(P)lease allow me to respectfully play devil’s advocate here. AI is here to stay. We are not going to manage to get rid of it in our lives and in our classrooms. Students will keep using it no matter how many drums we beat for them not to. So, why don’t we instead embrace it and start teaching them how to properly use AI – responsibly and ethically? We can turn this into a tool for all. A friend, not a foe.
She’s definitely right in that AI isn’t going away and people will use it no matter what we say, something the folks at Arizona State University’s student newspaper learned the hard way last week. The State Press retracted 24 articles a reporter had written after staffers discovered the pieces were the work of generative AI.
It’s worth noting here that ASU is actively partnering with OpenAI to help students on campus see the ways in which generative AI could be used responsibly and ethically. That’s not to pin the blame on the university for the State Press situation, but rather to demonstrate that even with efforts to properly train and guide students, you’ll usually run into a chucklehead or two.
To take a look at AI from more of an “overhead” view, we’re doing a three-part series on the blog over the next week that will look at it from three key angles:
- The tools
- The potential perils
- The human angle
Let’s start with the tools:
HOW GENERATIVE AI WORKS: According to technology experts, generative AI models take large, complex pieces of information and break them down into simple elements that the AI system can retain easily and replicate on demand. The technology is essentially “trained” by introducing it to millions and millions of pieces of content, which it uses to make sense of concepts and then generate new material.
AI scholars at MIT have noted that this approach is not new, in that computers have done these kinds of things on data sets and science hypotheses for decades. What is occurring now is an outgrowth of those early efforts, with computers consuming vast amounts of written and visual material, breaking it down into simple pieces and then creating new things based on the “rules” they learned during their examination of the content.
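The “break content into simple elements, then recombine them” idea can be illustrated with a deliberately tiny sketch: a bigram model that learns which word tends to follow which, then strings learned pairs together into new text. This is only the skeleton of the idea, not how modern generative models actually work (those use neural networks trained on billions of parameters), but the train-then-generate shape is the same.

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """'Training': break the text into simple word pairs (what follows what)."""
    words = text.split()
    model = defaultdict(list)
    for current, following in zip(words, words[1:]):
        model[current].append(following)
    return model

def generate(model, start, length=8, seed=0):
    """'Generation': recombine the learned pairs into new text."""
    rng = random.Random(seed)  # seeded for repeatability in this sketch
    out = [start]
    for _ in range(length - 1):
        options = model.get(out[-1])
        if not options:  # dead end: the last word never led anywhere in training
            break
        out.append(rng.choice(options))
    return " ".join(out)
```

Feed it more text and the output gets more plausible; that scaling-up, in very crude terms, is what the “millions and millions of pieces of content” buys the big models.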
This is also, broadly, how humans learn: we pick up how to write in the inverted pyramid format or how to paint in the style of one of the great artists, like Picasso or Renoir. Theoretically, what makes this different is that humans are also taught things like morals, ethics and societal norms, which serve as a kind of traffic signal for what we “should” or “shouldn’t” do, as opposed to just what we “can” or “can’t” do based on the requirements of a prompt.
AI TOOLS THAT CAN BENEFIT YOU AS A JOURNALIST: Of all the analogies I’ve used over the years, the concept of putting “tools in your toolbox” has been the most frequent one. As much as it seems reductive, I like to think of each talent I have, skill I develop or lesson I learn as a tool I’m putting in a toolbox for later use.
In terms of AI, there are tons of great tools out there that can benefit you as a journalist, as they can automate mundane tasks, prompt you to think of things you otherwise wouldn’t and generally make life easier on you. Consider these options:
TRANSCRIPTION: One of the most time-consuming things journalists deal with is taking audio interviews and turning them into useful text for stories. AI has made transcription services both readily available and reasonably accurate. Tools of this kind, such as VG’s Jojo and Otter.ai, use algorithms to decipher speech patterns, pick through background noise and convert sound to text.
IMAGE GENERATORS: These tools have been the source of great fun for people who want to see what kinds of strange combinations of elements they can pair and how the image generator will display their humorous whims. However, AI image generators can assist journalists who are covering serious topics.
Newsrooms have long used photo illustrations and artists’ renderings to accompany stories in which more traditional means of capturing visual content aren’t possible. Image generators, like Image Creator from Microsoft and versions of DALL-E from OpenAI, can use text prompts from users to generate a wide array of potential visuals. As is always the case in journalism, any kind of illustration or created work should be labeled as such.
RESEARCH: In journalism, good writing is predicated on good reporting, which means we need to dig around a lot. Finding basic facts can be easy through current search engines like Google and Bing, but several companies are constructing AI tools that will allow investigative journalists to do significant deep dives in a fraction of the time. Google introduced Pinpoint in 2024, which is meant to help journalists and other researchers dig through vast quantities of documents to find specific content within the collection. Google states that a Pinpoint collection can contain up to 200,000 documents, including written text, images and audio files.
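The core trick behind searching a huge document collection quickly, in a very simplified form (and not Pinpoint’s actual implementation, which Google hasn’t published in this kind of detail), is an inverted index: map every word to the documents that contain it once, up front, so later lookups don’t require rereading the whole collection.

```python
from collections import defaultdict

def build_index(documents):
    """Map each word to the set of document ids that contain it."""
    index = defaultdict(set)
    for doc_id, text in documents.items():
        for word in text.lower().split():
            index[word.strip(".,!?")].add(doc_id)
    return index

def search(index, query):
    """Return ids of documents containing every word in the query."""
    word_sets = [index.get(word.lower(), set()) for word in query.split()]
    if not word_sets:
        return set()
    return set.intersection(*word_sets)
```

Real tools layer a great deal on top of this (handwriting and audio recognition, synonym matching, ranking), but the index-once, search-many-times structure is why they can sift 200,000 documents in seconds.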
Other AI tools, like Artifact, which was recently purchased by Yahoo, can create quick summaries of articles and files, giving you a general sense of whether a piece is worth digging into more deeply or doesn’t fit your specific needs.
FACT CHECKING: The journalistic fact-checking motto has always been, “If your mother says she loves you, go check it out.” Thanks to advances in AI, that might be a lot easier than it used to be. Tools like Chequeado’s Chequeabot are capable of taking factual statements and comparing them to vast repositories of knowledge to determine their accuracy. These tools can help assess everything from data-based statements to public declarations by government officials, and do it more quickly and accurately than manual checking alone.
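The compare-a-claim-to-a-repository idea can be sketched in miniature. This is not Chequeabot’s method, just the shape of it: normalize a statement so trivial differences in wording don’t matter, then look it up against a store of statements that have already been verified as true or false.

```python
def normalize(statement):
    """Reduce a statement to a comparable form (lowercase, no punctuation)."""
    return " ".join(statement.lower().replace(",", "").replace(".", "").split())

def check_claim(claim, fact_base):
    """Compare a claim against a repository of statements marked True/False."""
    target = normalize(claim)
    for statement, is_true in fact_base.items():
        if normalize(statement) == target:
            return "supported" if is_true else "contradicted"
    return "unverified"
```

Production systems match meaning rather than exact wording, which is much harder, but the three possible verdicts (supported, contradicted, can’t verify) carry over directly.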
WRITING: A number of media organizations have attempted to use chatbots and similar AI tools to write content for publication, with varying results. Gannett attempted to automate some of its sports coverage, only to stop once it was clear readers weren’t thrilled with the output. Sports Illustrated went so far as to create AI-generated “staffers” to augment its site, something it quickly walked back once the situation was discovered. This use of content generators is where problems most often occur and where society at large tends to freak out.

That said, it’s important to know how these tools work, because they can be exceptionally helpful. Tools like Writesonic, Notion AI and Text Blaze can help you restate material in new and innovative ways, offer suggestions on how to approach a new topic and assist with search-engine optimization efforts. The key here is that these tools are meant to “assist” you, not do all the writing for you.
These are just some of the tools and options out there for you as a journalist. The Society for Professional Journalists maintains a giant list of similar tools for your consideration here.
DOCTOR OF PAPER HOT TAKE: I’ve gone back and forth on how best to approach AI because, like so many other tools we use in life, it has both stated purposes and the potential for problematic misuse.
A hammer is a great tool and you can build a lot of cool stuff with it, but you can also use it to bash in someone’s head. The same concept is true of a knife: You can teach a kid to use a knife carefully and responsibly to help make dinner, while simultaneously explaining that, no, you can’t stick it into your sibling’s head because they took the last Mountain Dew out of the fridge.
(I suppose we could also argue that AI might be more like cocaine: We can’t teach you to “responsibly” use it and in merely introducing it to you, the risks outweigh the rewards. I don’t like that analogy, but given what people have been doing with AI, it perhaps merits a deeper look.)
What AI really lacks at this point, and what most tools have, is an instruction manual and a set of safety features to prevent unintended disasters. An instruction manual tells you what each switch or button on a tool does, and also how to avoid doing something pathologically stupid. Safety features limit you in useful ways, like the guard over a table saw’s blade or a fuse that blows instead of letting the whole thing catch fire. AI feels more like one of those sci-fi movies in which a human discovers a piece of alien technology and is just kind of winging it.
The other thing that makes AI more dangerous than other tools is that we don’t have learned masters under whom we can apprentice, the way we would in learning to use other tools. When I started working at the garage as a teen, there was a guy there who knew how to use every tool in the place. He helped me with everything from the basics, like which cars used SAE tools and which ones needed metric, to the big safety issues, like how to keep a tire machine from taking off my head with a giant iron bar. With AI, we’re all relative newbies, and as much as I like the idea of learning from my mistakes, I’d prefer to know whether something is going to take my head off before I start playing with it.
NEXT TIME: The significant concerns associated with AI technology.