
Since AI isn’t going away any time soon, journalists and journalism educators are in a bit of a bind when it comes to how best to use it, or to help students use it appropriately. This week, we’re running a three-part series on the blog that takes an “overhead” view of generative AI from three key angles:
- The tools
- The potential perils
- The human angle
We covered the tools in Monday’s post, so if you missed it, you can catch it here.
As for today, let’s consider the perils of generative AI:
WE LOSE SELF-SUFFICIENCY: I will never disparage the concept of technological advances. Microwaves have allowed me to feed myself from college up to this morning without burning down my humble abode. Seat warmers have kept my rear end from flash-freezing to those leather-ish seats of the Subaru on most winter mornings in Wisconsin. The implementation of whatever stretchy stuff they’re making blue jeans out of these days has allowed me to keep fooling myself that my size hasn’t changed over the past five years.
Even as a journalist, I’m grateful that recorders have allowed me to significantly reduce the scrawl and shorthand I used to rely on while interviewing folks. Google has made it exponentially easier to ease my paranoia when I write something and then think, “Wait… are you SURE that’s right?” Best of all, I no longer have to hold a piece of correction paper between my teeth while banging out a story on an IBM Selectric typewriter, as I did back in high school.
The problem is that when we become overly reliant on technology, we are at the mercy of its functionality and lack the ability to cope when it fails. I’m not even talking about that “everything will cease to exist” failure. I’m talking about basic stuff that used to be common sense until computers just did the work for us.
(The analogy I immediately think of is my dad paying at a fast-food restaurant with cash. The kid punches in the total and it’s, say, $11.28. Then, Dad will say, “OK, here, let me give you three pennies to make it easier” after the kid assumes Dad’s just giving him a $20. The math isn’t hard: $20.03 minus $11.28 comes back as a clean $8.75, three quarters instead of 72 cents in loose change. Watching this kid try to do that math in his head because of those three frickin’ pennies is enough to make you weep for the future of humanity.)
Back when I was in doctoral school, the stats professor I had for my analysis of variance class made us do all the statistical calculations for an ANOVA by hand. This took forever and a day and ate up about 10 pages of notebook paper for each one. When we bitterly complained that we’d be using a computer to do this in a fraction of the time, he’d tell us, “Yes, but if you don’t know how to do each step, you’ll never know why it works the way it does. You’ll also never know if the computer is right or not.” As annoying as it was, I can still just look at an ANOVA result and figure out if I punched something in wrong, thanks to Dr. Osterlind.
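For anyone who never had to do one, here’s a minimal sketch (in Python, with made-up numbers) of what computing a one-way ANOVA “by hand” actually involves, along with the sanity check Dr. Osterlind was getting at: the F-statistic you grind out yourself should match the one the software reports.

```python
# A one-way ANOVA F-statistic computed step by step, then checked
# against SciPy. The three groups of scores are made up for illustration.
from scipy import stats

groups = [
    [4.0, 5.0, 6.0, 5.5],   # group 1
    [6.5, 7.0, 8.0, 7.5],   # group 2
    [5.0, 5.5, 4.5, 6.0],   # group 3
]

n_total = sum(len(g) for g in groups)
grand_mean = sum(x for g in groups for x in g) / n_total

# Between-groups sum of squares: how far each group mean sits from the grand mean
ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)

# Within-groups sum of squares: how much scores spread around their own group mean
ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)

df_between = len(groups) - 1
df_within = n_total - len(groups)

f_by_hand = (ss_between / df_between) / (ss_within / df_within)
f_scipy, p_value = stats.f_oneway(*groups)

print(f"F by hand: {f_by_hand:.4f} | F from SciPy: {f_scipy:.4f}")
# If those two numbers don't match, you punched something in wrong,
# which is exactly the check knowing the steps lets you make.
```

Ten pages of notebook paper, compressed into a dozen lines, but the point stands: you can only audit the machine if you know what the machine is supposed to be doing.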
There has always been an effort-free default option for everything, even back when you were learning to tie your shoes. If mom or dad was in a hurry, they’d say, “Here. Let me do that.” OK, fine, but if they never let you learn how to do it for yourself, you’d be totally screwed at this point. (Or into cowboy boots, shower shoes and a lot of Velcro.) Some level of self-sufficiency has to be built into the process.
WE DON’T KNOW WHAT’S IN THE AI “BOX”: One of the biggest complaints I get about my intro to writing class, other than that I keep scheduling it at 8 a.m., is that I make the students buy the print edition of the AP Stylebook and read the whole thing. Random assignments and quizzes are part of the check-in approach I take to see how well this is working.
Students find out that for about the same amount of money, they can buy a digital version that provides them with a search-engine function, so they want that one instead. I tell them, “Once you move into the upper-level classes, that’s an option. Until then, you’re reading the damned book in print.”
My rationale is pretty basic: If you don’t know what you’re looking for and you don’t know if it’s in there, you’re at a disadvantage when it comes to finding it. Thus, if you read the book, you get a handle on the things that AP gets all hot and bothered about and start making mental notes about the kinds of things you should look up. At that point, a search function is your friend, not a game of “Wheel of Fortune.”
Generative AI is pretty much the same thing: If you don’t know what’s in the “box,” you have no idea what to expect will come out of it.
Here’s an example of what I’m talking about. I entered a simple prompt that I figured a student in an entry-level civics class might toss in to avoid writing a short, basic essay on crime and its causes. A few things stood out in what came back:
- It assumed that criminal behavior was a street-level thing (drugs, gangs, robbery) as opposed to the things that have done exponentially more damage (cryptocurrency scams, Ponzi schemes, corporate fraud).
- It adopted the position often proposed by law enforcement, in which a strong, visible police presence is a good thing for all people. Not everyone agrees that a) this is a fair system or b) it’s a good approach.
- It assumed that a lack of education and opportunity drives criminal behavior. I seem to remember at least one guy who had a ridiculous amount of education and a ton of financial opportunities and still ended up in some pretty deep trouble.
This leads to another primary concern associated with generative AI…
BIAS IN, BIAS OUT: AI platforms are trained by exposing them to tons of content from a vast array of sources, from which the model builds a kind of statistical “prototype” of each element it has “seen.” The problem with that is the most mainstream content is likely to dominate, while less mainstream content gets shoved aside.
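To see why, here’s a deliberately crude toy sketch. This is nothing like how a real image model is trained (the labels and the 90/10 split are invented for illustration), but it shows the core problem: a model that learns from skewed examples reproduces the skew rather than correcting it.

```python
# Toy demonstration of "bias in, bias out": generate by sampling from
# the same frequencies the model "saw" in its (hypothetical) training data.
import random

random.seed(42)

# Hypothetical training set: 90% of the "boss" examples look one way
training_examples = ["young white male boss"] * 90 + ["any other kind of boss"] * 10

# "Generation" here is just frequency-weighted sampling from that data
generated = [random.choice(training_examples) for _ in range(1000)]

share = generated.count("young white male boss") / len(generated)
print(f"Generated bosses matching the majority pattern: {share:.0%}")
# Prints roughly 90% -- the skew passes straight through to the output.
```

Real systems are vastly more complicated than that, but the prompts below suggest the underlying dynamic isn’t all that different.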
In addition, just because something is shown or written about in a certain way, it doesn’t necessarily follow that it should stand in as a representation of a larger group. This is how stereotypes get built and reinforced. Consider the following image creations based on several prompts:
- A server at a restaurant
- A criminal in court
- A basketball player passing a ball
- A boss of a company at a podium
(In the future, you will all be ruled by this one white dude. To be fair, the iStock paid generator did slightly better, but not great.)
I tried these prompts with multiple AI generators, and these were pretty much the standard fare. Notice that everyone here is fit and relatively attractive-looking. The servers are all young and female. The criminals aren’t white, and neither are the basketball players. (To say nothing of the fact that only one of them is actually passing and one of them has two basketballs for some reason.) Bosses are predominantly male and, in at least one set of responses, all white and young.
Authors T.J. Thomson and Ryan J. Thomas, writing at The Conversation, found similar problems in an assessment of AI image generators, noting biases ranging from racism and sexism to ageism and classism. As more and more people generate content this way, this kind of thing is only going to build on itself until we’ve got a really stereotypical and myopic view of how society looks.
DIMINISHED CRITICAL THINKING: Most of what journalism requires of us is to be nosy and to dig into topics that interest us and our audience. When something goes wrong in our work, or a source messes us over, or we encounter a strange plot twist, we figure out how to improvise, adapt and overcome. In broader parlance, the whole driving force behind this job is critical thinking and problem solving.
The risk of relying on AI for too much is that our skills can atrophy and we can find ourselves in a journalistic rut. This already happens in some cases, as I’ve seen with stories written about my own institution. When we decided to do a reorganization, the university announced which plan was going to be favored, and the local paper did a piece on the topic. The entire thing was basically direct lifts from the press release and several other response statements issued about the topic. No deeper examination, no interviews with stakeholders and no content beyond what was provided.
I’ve also seen it where people decide that rather than look for sources to react to important topics, they’ll scan social media and do screenshots of some of the loudest voices out there. It’s like, “Don’t strain yourself reaching beyond your keyboard, buddy. Let’s not try to do some actual work here…”
Students already tell me things like, “I can’t get a quote in here from (NAME) because they didn’t email me back!” To which I offer a few basic suggestions, like emailing again, picking up the phone and calling the person, or even going to their office and talking to them. This isn’t Woodward and Bernstein sorting through library punch cards or something. This is “Can-You-Fog-A-Mirror”-level journalism stuff. If I had a dime for every “You mean I should call them… on the phone?” response I got, I wouldn’t need this job.
If the AI tools can help aid in your critical thinking by challenging you to think about things differently, or to consider options outside of your personal experiences, that’s great. If they tell you to stop thinking for yourself, that’s a bad sign.
DOCTOR OF PAPER HOT TAKE: As much as I’m worried about kids getting lazy, that’s nothing new, really. When students figured out that the Encyclopedia Britannica could do a better job of explaining what an iguana is, they copied it straight out of the book. When students realized the kid next to them studied harder for the exam than they did, they “dropped their pencil” a few times and took their time leaning over to pick it up. When students realized they could punch search terms into a computer and get an answer better than the one they came up with, they found the copy and paste keys. Generative AI is just the next stage of this process and we’ll all eventually catch on and catch up.
What really bothers me about AI is when it basically becomes a black box.
I don’t have to fully understand how everything I use on a daily basis works, but I do feel better when I have a general grasp of a situation. For example, I might not be able to fix the pump on our well, but when I see smoke coming out of it and we have no water in the house, I can surmise what’s going on and call a plumber. If the plumber comes out and can tell me what happened, I can pretty much follow along. I’m fine with that.
What I’m not fine with is “heavy mystery time,” in which we have no idea what a major piece of our lives is doing and people have an increasingly difficult time explaining it even to other people who work in that field. The reason is that it’s hard to trust things that can’t be explained, and even harder to believe they will benefit people other than their creators.
I go back to this clip from “The Smartest Guys in the Room,” which chronicles the rise and death spiral of Enron. Financial expert Jim Chanos didn’t buy the bull Enron was putting out, and when he asked the analysts to explain in a clear and coherent fashion how Enron was making its money, he got the “black box” speech:
In other words, trust us… It’s fine… Until it’s not.
NEXT TIME: Why can’t we have nice AI things? Because people are… well… human.
