How AI “expert sources” have duped journalists and four tips on how to avoid being the next victim

 

Meet Elizabeth Hubbell, a 25-year-old skin-care expert who is willing to be a great source for your next story on anything makeup or skin-care related. She’s actually completely fabricated. Her picture came from an AI generation site and her name is a combination of my car (Betsy) and a baseball player whose card I had lying around (Carl Hubbell). Careful. It’s dangerous out there…

When it comes to doing interviews, I always tell students they need to do them in person.  In response, they often look at me like I’m asking them to use a teletype machine or some semaphore flags. It’s easier, faster and more convenient for both parties if they can do a text, a chat or an email interview, the students say.

I argue that the face-to-face interview allows for a deeper connection for profile and feature pieces. This approach also can prevent sources in news stories from weaseling out of answers they could otherwise work through via several drafts of an email. Plus, if I spend some time in the source’s environment, I can probably find a personal effect that could give us something to talk about, like a family photo, a kid’s drawing or a sports item. At the very least, it’ll help with scene setting.

Apparently, there’s another good reason for my approach these days: Your easy-to-access, extremely helpful, expert source might be AI:

Since the launch of ChatGPT in 2022, anyone can generate comment, on any subject, in an instant.

It is a technology that appears to have fuelled a rise in expert commentators who have appeared widely in national newspapers but who are either not real, not what they seem to be or at the very least have CVs which do not justify their wide exposure in major newsbrands.

The rise in dubious commentators has been fuelled by companies that charge the PR industry in order to share quotes via email with journalists who have submitted requests for comment.

Journalist Rob Waugh found that in a number of cases, digital outlets were mass-generating content from these supposed experts, giving everyone from news journalists to PR practitioners the exact quote or information they needed on a wide array of topics. However, when challenged to engage more deeply regarding who they are or what they have done in life, the “sources” suddenly had difficulty:

She has been quoted in Fortune talking about “loud budgeting” and by Business.com talking about the best countries in which to obtain a business education (both sites are based in the US).

A profile on Academized describes her as a “biochemist and science educator”. The same byline picture also crops up on a publisher called Leaddev, for someone called Sara Sparrow. Rebecca Leigh has written for DrBicuspid.com about how to write a business plan for your dental practice where she is described as a writer for Management Essay and Lija Help (two online writing services).

When challenged via email to do something that would be difficult to do with AI image-generating software (send an image of herself with her hand in front of her face) or prove that she was an environment expert, Rebecca stopped communicating.

One AI source, “Barbara Santini,” was particularly prolific in the volume and array of topics she could cover for journalists. Waugh found this roster of publications that had included Santini quotes:

She has been quoted in The Guardian talking about the benefits of walking (paid content), in Newsweek talking about white lies, Marie Claire talking about the meaning of money, the Daily Mirror talking about the benefits of sleeping with your dog, in The Sun talking about sexual positions, Pop Sugar talking about astrology, and Mail Online talking about how often to change your pillow.

Santini was recently quoted in a BBC article examining the lifelike responses of AI to Rorschach tests used by some psychologists saying: “If an AI’s response resembles a human’s, it’s not because it sees the same thing but it’s because its training data mirrors our collective visual culture.”

Despite her ability to be all-knowing and wise, Santini apparently couldn’t receive phone calls, a relatively easy giveaway that the “person” on the other end is AI. Waugh also found other examples of journalists who were getting taken for a ride by an AI source, including one case where the non-human pitched a sob story about breast cancer survival:

“Seeing my scarred chest in the mirror was a constant reminder of what I had lost,” Kimberly Shaw, 30, told me in an emotional email.

She had contacted me through Help a Reporter Out, a service used by journalists to find sources. I cover skincare and had been using the site to find people for a story about concealing acne scars with tattoos.

<SNIP>

Shaw’s experience may not have been relevant to my acne story, but it tapped into the same feelings of empowerment and control I wanted to explore. Thinking she could inspire a powerful new piece, I emailed her back.

But after days of back-and-forth conversations, something in Shaw’s emails began to feel a little off. After idly wondering to my boyfriend whether she could be a fake, he suggested that I run the emails through a text checker for artificial intelligence.

The result was unequivocal: Shaw’s emails had been machine-generated. I’d been interviewing an AI the entire time.

As a result of Waugh’s story, a number of these information clearinghouses have tried to cull their ranks of AI “experts” while the deceived publications have retooled or removed the stories with fake people in them. Although the founder of one of these “expert mills” blamed much of the situation on “lazy journalists,” he kind of gave up the game a bit when it came to explaining why these platforms don’t prevent the frauds from gaining access in the first place:

Darryl Willcox, who founded ResponseSource in 1997 and sold it in 2018, says that the simplicity and speed of platforms like ResponseSource is key to their appeal and that attempts to add authentication risk slowing down the system.

Willcox said: “The other factor which complicates things a little bit is that these platforms are quite an open system. Once a journalist makes a request they can be forwarded around organisations, and sometimes between them, and often PR agencies are acting for multiple parties, and they will be forwarded onto their many clients.”

In other words, “If we slowed down to make sure things were accurate, we wouldn’t be as appealing as we want to be.” Eeesh.

So what can you do to avoid quoting a fake person? The overarching theme is basically, “Don’t be a lazy journalist,” but here are a few more specific tips:

TRUST, BUT VERIFY: The old Russian proverb really comes into play here and for good reason. I often say that paranoia is my best friend and has kept me out of a ton of problems. To that larger point, not only did I click on every link I could find in Waugh’s story, I also Googled the hell out of Waugh himself. Why? I imagined that it would be the most epic “Punk’d” moment on Earth if the media world was flocking to this story about AI screwing with journalists, only to find out that Rob Waugh was also an AI fake. I found LinkedIn, X, Bluesky, media staff pages and at least a dozen photos. I wouldn’t bet the house on the fact he’s real, but I’d probably bet the lawn tractor.

This can be harder in situations like the one involving the cancer scammer, as regular people tend not to have as big of a social media presence or digital footprint. That said, even regular people under the age of retirement should have left a few breadcrumbs out there for you to find.

KICK THE TIRES: If you can’t find the person clearly through a digital search, feel free to play a little game of 20 Questions to see if you can get some things ironed out. Experts who have kicked the tires on a few bots can offer you specific ways to ask questions that will tend to ferret out fakers. The author in the cancer-scam story revealed that asking for specific photos based on prior conversations can be helpful as well.

I learned about this kind of thing in trying to defeat scams when it came to buying sports memorabilia. When unknown sellers offered either exactly what I wanted when I couldn’t find it anywhere else or provided me with a ridiculously low price for something I knew should cost more, the pros who had been around the block a few times suggested I ask the seller to “coin the image.”

What this meant was that I wanted the person to take a picture of the item with a coin (usually asking for either heads or tails, or maybe even a specific coin) so I could tell they had the item and weren’t messing with me. Turned out, that advice helped me dodge a bullet or two. As weird as it might seem, asking someone to take a picture with their left hand raised or holding a quarter with “heads” showing might help you avoid a problem.

MEET IN PERSON: Again, this is the most obvious one to suggest. If you meet a person, in person, it’s a pretty safe bet that you can consider them real. The rest of the stuff (Are they the expert they claim to be? Did they really do what they say they did? Do they actually have cancer?) remains a risk without substantial additional reporting, but at least you’ll know they exist.

If that can’t happen for legitimate reasons (the person lives too far away etc.), look for other ways to get some human connection with the source. That could be a Zoom/Teams/Whatever video chat or an actual phone call at an actual phone number. In the cases where the frauds proliferated, it was pretty clear that the only connection between the source and the journalist was through a keyboard. That’s especially dangerous when you don’t have a prior relationship with a source.

WHEN IN DOUBT, DO WITHOUT: At the end of the day, there is no journalistic rule that says you have to use a source, a quote or a “fact” just because you have it. If you don’t feel comfortable with how a source is providing you with information or you aren’t 100% sure this person is a person, it’s better to leave that source out of your story than it is to run the risk of getting bamboozled.

If you say, “Well, the whole story will fall apart without this one source and I can’t get anyone else to provide me with this information,” maybe that’s more revealing than anything else we’ve said here.

 

A Lot at Steak: How U.S. Education Secretary Linda McMahon’s AI Blunder Led to Marketing Gold

THE LEAD: Secretary of Education Linda McMahon managed to confuse AI (artificial intelligence) with A.1. (steak sauce) while delivering her comments at the ASU+GSV Summit last week.

The gaffe became fodder for all sorts of internet humor, but the company responsible for making the condiment saw an awesome opportunity and took full advantage of the mistake:

A.1. Sauce capitalized on McMahon’s blunder by posting an Instagram post on their verified account saying, “You heard her. Every school should have access to A.1.”

“Agree, best to start them early,” the picture attached to the post reads.

Other Instagram users loved the response from the Kraft Heinz-owned brand. One user even commented, “I will be buying a bottle or two because of this post.”

 

KRAFT-ING MARKETING GOLD AGAIN: Kraft Heinz, which markets A.1., has a decent track record of grabbing a cultural moment and running with it. The company took advantage of the “Barbenheimer” explosion by introducing a pink “Barbie-cue” sauce and has also linked a ranch dressing to Taylor Swift. In each case, the company drew attention to its brand, garnered some nice free media publicity and avoided the kinds of gaffes often associated with trying to ride a trend.

Despite the general uncertainty in the market these days, the stock closed up Friday at $29.33, a gain from $27.60 on April 9. Although that time frame corresponds with the comments McMahon made about A.1., it’s a bit simplistic to say the gains were solely connected to that mistake.

In its rating of the best food stocks to buy according to billionaires, Insider Monkey rated Kraft Heinz at the top of the list for a number of reasons, including its global supply chain and its reliance on AI (not A.1.) for keeping factories humming. Still, people are saying they’re buying a bottle or two of the steak sauce as a result of the gaffe:

So far, A.1.’s loyal fans seem to be in support of its “new sauce.”

“My husband wants a bottle for his desk,” one commenter wrote under the brand’s post. “He teaches middle school, at least until they replace him with A.1.”

 

BLOG FLASHBACK: Kraft Heinz isn’t alone in taking advantage of a dumb situation with some marketing genius. As we noted back in 2018, Country Time Lemonade drew a lot of attention after it created its “Legal Ade” defense fund for kids who had been fined for not having a business permit to run their lemonade stands.

Like the A.1. effort, this worked because it was on the right side of the argument, made fun of the utterly ridiculous and didn’t run a significant risk of hurting its brand with this maneuver.

Other organizations tend not to be as lucky when they jump in on trending hashtags or don’t think about potential blowback before entering the larger discussion.

DISCUSSION TIME: What do you think Kraft Heinz should do next? Ride the wave? Leave it alone? Try something else? Also, what other marketing maneuvers have you seen that tried to connect with a trend? Did they succeed or fail in your eyes? Why?

Time to freshen up your book shelves: Updates on “Dynamics of News Reporting and Writing,” “Exploring Mass Communication,” and “Dynamics of Media Writing” textbooks

Fresh off the press, I got my stack of the third edition of “Dynamics of News Reporting and Writing.” Never in my wildest dreams did I imagine I’d be lucky enough to get this far.

(NOTE: I’m still on break for a bit, but I needed to break the seal on the blog because a) some of you are already back at the classroom grind and b) I promised Sage I’d let people know what’s up with the textbooks I’m doing. I’ll probably pick up again after a week or so, or whenever the pinball machine I’m working on really ticks me off… — VFF)

THANK YOU FOR MAKING THIS NECESSARY: I got home last night to find a heavy box on the porch from Sage. Inside were my author copies of the third edition of “Dynamics of News Reporting and Writing,” which went to press over the holiday break.

I wanted to take a moment to thank all of you out there for, as Yogi Berra once put it, making this edition necessary. Somewhere along the way, you all made a choice to give me and this book a chance, and for that, I’ll be forever grateful. I know it’s not easy changing books for a class, adopting a new textbook or assigning any textbook in today’s “Textbooks are the overpriced devil, man…” world.

My goal in every textbook is to practice what I preach: Focus on audience-centricity. I want you and your students to get a ton out of these books and I want to make sure I never lose sight of who is out there and what you want/need out of me.

(The second goal is to adhere to my Polish-Catholic roots of feeding you as much as humanly possible. Whether it’s pierogi or information, we’re going to stuff you to the gills. Thus, the book updates and the blog: If you need ANYTHING I didn’t cover, tell me and it’s going up on the blog.)

WHY YOU SHOULD CARE ABOUT THIS EDITION: Well, for starters, the cover is wicked cool… OK, maybe I’m the only one who cares about that. Let’s look into this a bit:

  • Artificial intelligence: This is the 800-pound gorilla in the room these days when it comes to anything having to do with content creation. Chapter 2 has been completely revamped to deal with how best to think about AI, what it’s good for in terms of media and why we aren’t ready to let RoboCop 2 take the keyboards out of our hands. In addition, more on AI and thinking critically about it is infused throughout the remaining chapters. We do more than a broad overview, instead focusing on how the tools can benefit you in the field and what you need to watch out for.
  • Audience Centricity: Not only has Chapter 1 gotten a refresh, but the rest of the book has gotten some additional elements that will help you figure out how best to use media tools to reach your audience, whatever that audience may be. Now, more than ever, we see shifts in what social media platforms can do, how news outlets provide content and who pays attention to our work. To make sure we’re all doing the best we can, we need to know who we’re trying to serve, what they want from us and how they prefer to receive it. Chapter 1 gives you the goods on the first part of that sentence, while the remaining chapters focus on the latter two parts.
  • Thoughts from a Pro: We have some of our tried-and-true pros back to offer their thoughts on what you need to know and why you need it, as well as some fresh faces with some new ideas. In addition, each pro gives us a few thoughts they have on AI as it relates to their work and the field as a whole. That should be helpful in demonstrating how significant (or maybe insignificant) AI is in various parts of the field, along with suggestions from professionals as to how best to use it.
  • Legal Wranglings: The law has been changing quite a bit (and apparently will continue to change in the upcoming few years), so keeping media operations on the right side of the law remains an ongoing challenge. With fresh examples and updates to legal outcomes, we give you a look at where things tend to stand in regard to reporting and writing as of this publication. (And I’m sure by the time I’m done writing this post, TikTok will be dead, brought back, challenged again and killed again like Jason Voorhees, so that’s why we have the blog…)
  • More goodies: As always, Sage is a treasure trove of add-ons and extra stuff for every book I do. The folks there have tons of lecture stuff, PowerPoints, test banks, exercises and more at the ready beyond what I’ve put into the book and the blog.

If you are interested in getting access to the new edition (digital, print or otherwise), along with all the extra stuff Sage has added, feel free to hit me up through the contact page or go directly to Staci Wittek at: staci.wittek@sagepub.com

She is truly the best person I’ve ever worked with in terms of sales and marketing and generally being awesome at book stuff.

But wait, there’s more…

TIME TO GET (MEDIA) LIT(ERATE): Back in August, “Exploring Mass Communication” hit the market, once again proving I either have too much time on my hands or I’m too stupid to say no to a project. In any case, this intro-level textbook turned into what I would like to say is the best book I’ve done to date.

I get the best mail from Sage…

This book is GREAT for any introduction to mass media/mass com class, but it’s even BETTER if you’re trying to teach media literacy to a nation of freshmen and sophomores. I didn’t realize that until someone told me, “Hey, why didn’t you tell me you wrote a media-literacy text?” Turns out, it’s become popular in all sorts of classes for a number of reasons:

  • It’s cheaper than the other leading brand:  In going through 128 reviews Sage sent me, I realized that the only thing all 128 reviewers agreed on was that price was a factor. I asked Sage if I could just write whatever I wanted if we re-titled the book: “Filak’s Five Dollar Book of Mass Com Stuff.” The answer was a hard “no,” but we did get the print edition to come in below other books like it. Even BETTER, the rental costs for digital copies are less than one-third of the cost of the print edition (especially if you go through Sage reps) and then there’s an even BETTER version of this….
  • The Vantage Advantage: “Nobody reads textbooks,” is what I keep hearing from instructors, who are actually desperate to get students to read the stuff in the book. Sage has built an entire digital system called Vantage that can plug into your Learning Management System (BlackBoard, Canvas, D2L or whatever people are calling it) so you can assign kids stuff digitally, track their efforts and generally oversee the class like the guy in “Sliver.” In addition, you can toggle how you want to spot-check the kids on their reading. There are quiz questions attached to various sections of the readings and other analytics that help you help them to learn. Even better? It’s cheaper than a print book. By a lot.
  • The “Crazy Vinnie Guarantee” is Still in Effect: If you missed it when the book launched, here’s a look back at the insane things I’ll do  to help you either make the book work for you or to get you set up to use someone else’s book. Seriously. I’ll make someone else’s book better if you want.

If you’re interested in giving this book a look, feel free to hit me up through the contact page or go directly to Staci Wittek at: staci.wittek@sagepub.com

And one last item…

MEDIA WRITING UPDATE: Just before my former editor Terri left Sage, she told me that if my books worked out as well as she knew they would, I’d be writing a book a year for her for the rest of my life. If she’s as prescient about everything as she was in making that statement, I’d like to follow her around at an off-track-betting parlor some day…

This leads us to the upcoming edition of “Dynamics of Media Writing.” The “OG” book in the “Dynamics” series is in process as we speak. The goal is to have it to a copy editor by February, proofs done by April and out the door by August of this year. As is the case with the Reporting book, there will be AI additions, new pros and a ton of extra stuff. I’ll keep you posted as we go.

Thanks again for all of this. Without you all, these books would be dead after one edition and serving as a coffee coaster in the grad-student lounge.

Vince (a.k.a. The Doctor of Paper)

Dodging Deep Fakes and Facts, Fake News and Helping Your Students Navigate the Media Landscape

I get to work with the best people in the world. The stuff we do with Sage is so cool. They also don’t mind using a really old head shot where I look like I have a relatively decent head of hair. Also, don’t click here. It’s a screenshot. The link is below…

Today’s post is one of those long-time-coming situations in which I was working with the folks at Sage to talk about the issues pertaining to misinformation, AI deepfakes and other such things that we all thought would benefit students. When the chance to do a podcast on the topic came up, I leaped at it.

The conversation between Tim Molina and me was an amazingly fun and informative time for both of us. Tim is one of the Sage faculty partners and an assistant professor of mass communication at Northwest Vista College in San Antonio. He is also the faculty advisor for the NVC Student Podcast, WILDCAST.

We did this prior to the election, so some of the stuff might be a tad dated, but we finally got the OK to finish production and put it out. (Special thanks to Vicky Velasquez and Amy Slowik at Sage for getting this arranged, recorded and published.)

If you’ve got 44 minutes and 13 seconds to kill, click this link and enjoy!

An 8-minute video primer on Artificial Intelligence, its impact on media writers and the ethical concerns it raises for media folks

One of the biggest things I’ve tried to get across on this blog is that it’s here to help media students and professors. (If it helps other people, hey, I’m glad that works out as well…) The other big thing I’ve tried to get across is that if you need something, all you have to do is ask and I’ll probably get it done.

Case in point, a professor down in Texas and I were chatting about the “Exploring Mass Communication” textbook we’d sent her and some other issues, when she emailed me this:

I do have a question for you: Do you have any recorded lectures or videos where you talk about AI in journalism? I would love to take a look and see if I can incorporate it into my class. My current class size is 220, and the classroom is not very adaptable to an interactive Zoom call, which is why I wanted to see if I could use a pre-recorded video. I also teach an online version of this class which is just as large, and the video would be very helpful.

I reached out to Sage and asked if we could do a more “production-savvy” video than just me recording this in the pinball man cave at my house, and they were totally enthusiastic. We got it done for her with two weeks to spare and it worked well.

The nicest thing is that Sage sent me a copy, so I uploaded it to YouTube and here it is if you want to use it:

 

(I hate the fact I keep looking down, but this is what happens when your script glitches on the screen and you have to use the printed backup…)

If it’s helpful, let me know. Also, if YOU want something for any of your classes that fits into whatever area of expertise I supposedly have, feel free to hit me up here. I don’t care if you’re using my books or not. I just like helping people.

Have a great rest of your day!

What happens when police use AI to draft their incident reports?

(We’re not quite here yet, but it’s a little disconcerting how I keep finding parallels between RoboCop and reality. Also, that Kurtwood Smith was somehow less threatening here than in “That ’70s Show.”)

THE LEAD: Some police organizations are experimenting with AI, having chatbots write the first drafts of their incident reports based on what the officers’ body cameras capture.

“They become police officers because they want to do police work, and spending half their day doing data entry is just a tedious part of the job that they hate,” said Axon’s founder and CEO Rick Smith, describing the new AI product — called Draft One — as having the “most positive reaction” of any product the company has introduced.

“Now, there’s certainly concerns,” Smith added. In particular, he said district attorneys prosecuting a criminal case want to be sure that police officers — not solely an AI chatbot — are responsible for authoring their reports because they may have to testify in court about what they witnessed.

“They never want to get an officer on the stand who says, well, ‘The AI wrote that, I didn’t,’” Smith said.

The pilot programs have found that reports that once took 30-45 minutes to draft can be done in a matter of seconds. To hedge their bets on how much they should be leaning on the technology, some departments are using the AI only on misdemeanors and petty crimes.

Aside from the idea that the computer might be doing the officers’ “homework” for them, legal scholars and civil-rights activists are concerned about the impact this could have on society as a whole:

“I am concerned that automation and the ease of the technology would cause police officers to be sort of less careful with their writing,” said Ferguson, a law professor at American University working on what’s expected to be the first law review article on the emerging technology.

Ferguson said a police report is important in determining whether an officer’s suspicion “justifies someone’s loss of liberty.” It’s sometimes the only testimony a judge sees, especially for misdemeanor crimes.

 

DOCTOR OF PAPER HOT TAKE: Accuracy and legality lead the list of my concerns here. At one point in the article, the officer notes that the AI included a detail he didn’t remember hearing. That could be the AI capturing something real or it could be fabricating something that the officer then kind of adopted as true.

Experts and users have found AI can engage in “hallucinations” where it presents something untrue as fact. It’s kind of funny when AI tells us that the downfall of Western Civilization began when the coach refused to put Uncle Rico in at quarterback in the ’82 finals. It’s less funny when it tells a court of law that you threatened a cop who pulled you over for speeding.

The officers interviewed for the story mention that they’ve become more verbal in their interactions with the public, which allows the body camera to capture that information and thus improve the AI report.

In this kind of case, it feels more like transcription than creation, which seems safer, but who knows. What would be beneficial for reporters in cases like this would be to get the AI-based reports and the officer’s body-cam footage to do a side-by-side comparison.

Legally speaking, I would be curious to know what levels of access journalists could have to the AI version of a report as well as the final version. Police reports and court documents are public records, but some internal memos and drafts of public items can occasionally be considered off limits. In addition, the draft technically isn’t being created by a public official; it’s the output of a computer program. Who can have access to what, when and how is an interesting question here.

It will also be interesting to see how well these things hold up in court compared with other reports, witness testimony and so forth. As with anything new, there will be a learning curve and development issues, and the older way of doing things will probably still be better at first.

When we first started seeing automobiles, they could barely break into double digits in terms of their mph speed. Meanwhile, horses could literally and figuratively run circles around them. As time went on, cars clearly became the faster mode of transportation, but it took a while. It’ll be interesting to see how many lawyers start asking questions like, “So, Officer Smith, did you write the initial report of this or did you rely on artificial intelligence to do it for you?” and then showing off all the stupid things AI has written to undermine AI’s credibility.

The folks in the article who distrust the AI process have noted concerns about racial targeting and bias against people traditionally mistreated by the legal system. We have seen AI generate some of those kinds of biased reports here, and it is a valid concern. I would go a step beyond this and say I’d be really concerned in general for anyone being accused of criminal activity while the police are working the kinks out of this system. The article notes that the crimes are generally “low level,” but that doesn’t make me feel much better if I’m on the other end of an AI disaster.

 

Help me help you figure out why students use AI to do their work and what it would take to get them to stop


If you missed Wednesday’s post, we spent a good amount of time talking about what motivates (or deters) students in their use of generative artificial intelligence when it comes to coursework. You can take a look at the ideas behind the motivational theories and how they apply to this issue, but I know the big question is the simple one:

“What can I, as an instructor, do to make them want to do their own work instead of relying on AI to churn out a word salad of content they send in at the last minute?”

This might be the most Pollyanna answer you get all day, but here we go:

“Let’s ask them.”

And this is where I need your help.

I built a Google form that has two simple questions on it:

  • What kinds of writing assignments do you (or would you) use programs like ChatGPT to do the work for you? What reasons do you have for using the program instead of doing the work independently? (e.g. “I ran out of time,” “I was bored by the work,” “The work was too hard.”) Please expand on your answer as much as you would like.
  • What would motivate you to NOT use generative AI programs like ChatGPT to do your written work for you? In other words, what makes you more willing or able to do the work independently? (e.g. “It would be cheating.” “I’m afraid of getting punished.” “I like what I’m asked to do.”) Please expand as much on your answer as you would like.

(UPDATE: Based on the request of a colleague, I added one more: In what ways have you used AI to facilitate your independent work? How do you wish teachers would allow AI as the tool it could be?)

It collects no private data, it doesn't require students to log into Google, and I'll have no idea where any of the responses came from. It's simply two short-answer questions meant to figure out WHY they do (or don't do) something, so we can look for potential solutions based on the research people have done on things like motivation and task completion.

If you want to help me help you, here’s the link:

https://forms.gle/WH9nzpHNT2XbMX5KA

If you have to (or want to) offer extra credit for this, I put a “code phrase” on the completion page you can ask them to give to you.

I’m going to let this run for a couple of weeks and see what we get. If we get enough responses to make some soup out of this, I’ll put together something to share with you all a couple of weeks after that.

In the meantime, hang in there and keep the faith. We got your back.

Vince (a.k.a. The Doctor of Paper)

What motivates students to use Generative AI and what would motivate them not to?

(The classic scene from “Back to School” that is both outdated and exactly the problem AI presents to us today.)

As most of us have already begun the semester, are headed toward the start of the semester or are in the process of panicking about the semester, we’re booting up the blog to tackle one of the bigger concerns we all seem to have these days.

THE CORE PROBLEM: As the semester began, I started seeing a lot of posts like this one from a friend and longtime college mass com professor:

I am up at nearly 2 a.m., going back and forth about whether to remove a writing component that I’ve used in almost every course I’ve taught over the last 10 years. It’s usually worth 25 to 30 percent of the course grade.
But it’s a massive waste of time to grade writing assignments that have been completed via generative AI.
The alternative? Reading quizzes, blue book responses, heavier emphasis on creative (group) projects, etc. Writing exercises are an opportunity for students to demonstrate the depth and originality of their thought.
The advent of Generative AI seems to be rendering useless a lot of the writing assignments professors have relied on for eons. As students began relying on AI to write their pieces, professors sought AI-detection tools to sniff out the fake stuff, leading to an escalating arms race between improved AI and improved AI detection.
Two years ago, The Atlantic published an essay titled “The College Essay Is Dead,” noting that AI would likely challenge our approach to higher ed in ways we were incapable of understanding and dealing with. Journalism professors who once scoffed at this as being more of a “gen ed” problem are now finding AI-written content popping up in their own classes. It has also made several embarrassing forays into the profession itself, with some media chains using it to replace human writers altogether. With AI expanding rapidly to the point where recorded lectures can be uploaded and integrated into the AI responses and AI can help you sound less like AI, it can feel like we’re totally screwed.
MOTIVATION TO USE OR NOT USE: I’ve been studying psychological motivation for almost 25 years now, and you can find a ton of reasons why people do or don’t do something. I still consider self-determination theory and its motivational spectrum my bible for such things, including this situation.
Here are the four general motivational pivot points most of us have for doing (or not doing) something:
  • Extrinsic: We are compelled by an outside force to do or not do something. Think of a stick or a carrot as being the sole reason for completing a task: Your parents gave you $5 to cut the grass. Your parents threatened to ground you if you didn’t clean your room. This is the lowest form of motivation and it leads to the worst outcomes overall. This is where “cheating” or corner-cutting usually occurs. So, you pay your little brother $2 to cut the grass and claim the work as your own to get the $5. You take all the stuff that’s messing up your room and cram it in your closet, instead of taking out the garbage and putting the dirty stuff in the laundry, etc.
    • NEWSFLASH: This is where we normally are for dealing with stuff like AI, in that we tell the kids in the class not to do it or else they’ll get a zero, fail the class, get expelled or experience whatever this is.
  • Introjected: We are compelled to do something based on motivation that is not entirely ours, but we do it because we feel we have to. Think of guilt or shame as the reason for doing something and you’ve got a handle on this one. You want to go hang out with your friends, but your parents convince you to visit your aunt instead because “she’s probably going to die soon and it would break her heart not to see you one last time.” You don’t feel strongly toward either presidential candidate, but your favorite teacher tells you about “how many people died for your right to vote,” so you cast a ballot.
    • NEWSFLASH: Guilt is a hell of a motivator and this really does work in a lot of cases, particularly for high-engagement people. However, people who are most likely to cut a corner or cheat are those most immune to this form of motivation. In other words, guilting people into avoiding AI for written assignments will work for students who are on the fence about cutting the corner, particularly if there is a strong affinity for you as a professor. However, the people most likely to cut the corner are going to do it, regardless of how much guilt you lay on them.
  • Internalized: We are compelled to do something because we see a benefit in it. Think about a nursing student taking the NCLEX test: They don’t like the test or all the work it requires, but they see value in becoming a nurse and therefore work really hard to pass it. This is one of the better forms of motivation, as the person is geared toward seeing a reason for doing what they’re doing, even if it’s not what they want to do. In short, they own the motivation and value the outcome.
    • NEWSFLASH: This is really the sweet spot for most educators, as it’s more successful than guilt and less Pollyanna than what we’ll discuss next. The underlying issue here is to tell people WHY they’re doing what they’re doing so they can internalize that motivation.
  • Intrinsic: We are compelled to do something because we really like it. This is why my dad sits at the kitchen table for hours doing word searches and why my wife can knit or needlepoint for days without wanting to stop. They really love it.
    • NEWSFLASH: If you can find a whole classroom full of kids that are intrinsically motivated, take a picture for the rest of us.

SO WHY AI? If what we’ve outlined above is true, and about 60 years of research from people way smarter than me says it is, the key to preventing students from AI-ing their homework and calling it good comes down to a few potential things:

  • The work is too hard, so they rely on outside assistance to get it done.
  • The work is too easy, so they figure they’re not missing something by letting AI do it.
    • (NOTE: The concept of flow by Csikszentmihalyi says people are most likely to enjoy an activity and persist in it when the difficulty is just slightly outside of their normal range of ability. In short, if we can feel just a little bit of stretch, we feel motivated to continue. If not, we are bored or frustrated.)
  • Other activities are preferable to the one we use AI to complete.
  • The work provides no inherent or perceived value. (a.k.a. “busy work.”)
  • There are no real repercussions for using it.
    • (NOTE: Carrots and sticks count here, but so does the “So what?” element. In other words, if the kid doesn’t learn about the intricacies of The Council of Trent, what difference will that make in real life? However, if the nursing student doesn’t learn proper titration of drugs into an IV line, they might kill someone.)
  • Other, unknown things we aren’t thinking about but they are. (I’m always amazed at the things I DON’T know when it comes to my students and their reasoning behind doing or not doing something. This includes everything from having seven roommates and one bathroom to getting a ton of tattooing, despite telling everyone how broke they are. This is a true consequence of being old, I imagine…)

So the obvious question is: What’s the best way to figure out what to do about AI, based on all of this?

In tomorrow’s post, I’ll give this a shot, but I’ll need your help.