What motivates students to use Generative AI and what would motivate them not to?

(The classic scene from “Back to School” that is both outdated and exactly the problem AI presents to us today.)

As most of us have already begun the semester, are headed toward the start of the semester or are in the process of panicking about the semester, we’re booting up the blog to tackle one of the bigger concerns we all seem to have these days.

THE CORE PROBLEM: As the semester began, I started seeing a lot of posts like this one from a friend and longtime college mass comm professor:

I am up at nearly 2 a.m., going back and forth about whether to remove a writing component that I’ve used in almost every course I’ve taught over the last 10 years. It’s usually worth 25 to 30 percent of the course grade.
But it’s a massive waste of time to grade writing assignments that have been completed via generative AI.
The alternative? Reading quizzes, blue book responses, heavier emphasis on creative (group) projects, etc. Writing exercises are an opportunity for students to demonstrate the depth and originality of their thought.
The advent of Generative AI seems to be rendering useless a lot of the writing assignments professors have relied on for eons. As students began relying on AI to write their pieces, professors sought AI-detection tools to sniff out the fake stuff, leading to an escalating arms race between improved AI and improved AI detection.
Two years ago, The Atlantic published an essay titled “The College Essay Is Dead,” noting that AI would likely challenge our approach to higher ed in ways we were incapable of understanding and dealing with. Journalism professors who once scoffed at this as being more of a “gen ed” problem are now finding AI-written content popping up in their own classes. It has also made several embarrassing forays into the profession itself, with some media chains using it to replace human writers altogether. With AI expanding rapidly to the point where recorded lectures can be uploaded and integrated into AI responses, and AI helping you sound less like AI, it can feel like we’re totally screwed.
MOTIVATION TO USE OR NOT USE: I’ve been studying psychological motivation for almost 25 years now, and you can find a ton of reasons why people do or don’t do something. I still consider self-determination theory and its motivational spectrum my bible for such things, including this situation.
Here are the four general motivational pivot points most of us have for doing (or not doing) something:
  • Extrinsic: We are compelled by an outside force to do or not do something. Think of a carrot or a stick as the sole reason for completing a task: Your parents gave you $5 to cut the grass. Your parents threatened to ground you if you didn’t clean your room. This is the lowest form of motivation, and it leads to the worst outcomes overall. This is where “cheating” or corner-cutting usually occurs. So, you pay your little brother $2 to cut the grass and claim the work as your own to get the $5. You take all the stuff that’s messing up your room and cram it in your closet, instead of taking out the garbage and putting the dirty stuff in the laundry, etc.
    • NEWSFLASH: This is where we normally are for dealing with stuff like AI, in that we tell the kids in the class not to do it or else they’ll get a zero, fail the class, get expelled or experience whatever this is.
  • Introjected: We are compelled to do something based on motivation that is not entirely ours, but we do it because we feel we have to. Think of guilt or shame as the reason for doing something and you’ve got a handle on this one. You want to go hang out with your friends, but your parents convince you to visit your aunt instead because “she’s probably going to die soon and it would break her heart not to see you one last time.” You don’t feel strongly toward either presidential candidate, but your favorite teacher tells you about “how many people died for your right to vote,” so you cast a ballot.
    • NEWSFLASH: Guilt is a hell of a motivator and this really does work in a lot of cases, particularly for high-engagement people. However, people who are most likely to cut a corner or cheat are those most immune to this form of motivation. In other words, guilting people into avoiding AI for written assignments will work for students who are on the fence about cutting the corner, particularly if there is a strong affinity for you as a professor. However, the people most likely to cut the corner are going to do it, regardless of how much guilt you lay on them.
  • Internalized: We are compelled to do something because we see a benefit in it. Think about a nursing student taking the NCLEX test: They don’t like the test or all the work it requires, but they see value in becoming a nurse and therefore work really hard to pass it. This is one of the better forms of motivation, as the person is geared toward seeing a reason for doing what they’re doing, even if it’s not what they want to do. In short, they own the motivation and value the outcome.
    • NEWSFLASH: This is really the sweet spot for most educators, as it’s more successful than guilt and less Pollyanna than what we’ll discuss next. The key here is to tell people WHY they’re doing what they’re doing so they can internalize that motivation.
  • Intrinsic: We are compelled to do something because we really like it. This is why my dad sits at the kitchen table for hours doing word searches and why my wife can knit or needlepoint for days without wanting to stop. They really love it.
    • NEWSFLASH: If you can find a whole classroom full of kids who are intrinsically motivated, take a picture for the rest of us.

SO WHY AI? If what we’ve outlined above is true, and about 60 years of research from people way smarter than me says it is, the key to preventing students from AI-ing their homework and calling it good comes down to a few potential things:

  • The work is too hard, so they rely on outside assistance to get it done.
  • The work is too easy, so they figure they’re not missing something by letting AI do it.
    • (NOTE: The concept of flow by Csikszentmihalyi says people are most likely to enjoy an activity and persist in it when the difficulty is just slightly outside of their normal range of ability. In short, if we can feel just a little bit of stretch, we feel motivated to continue. If not, we are bored or frustrated.)
  • Other activities are preferable to the one we use AI to complete.
  • The work provides no inherent or perceived value (a.k.a. “busy work”).
  • Lack of repercussions.
    • (NOTE: Carrots and sticks count here, but so does the “So what?” element. In other words, if the kid doesn’t learn about the intricacies of The Council of Trent, what difference will that make in real life? However, if the nursing student doesn’t learn proper titration of drugs into an IV line, they might kill someone.)
  • Other, unknown things we aren’t thinking about but they are. (I’m always amazed at the things I DON’T know when it comes to my students and their reasoning behind doing or not doing something. This includes everything from having seven roommates and one bathroom to getting a ton of tattooing, despite telling everyone how broke they are. This is a true consequence of being old, I imagine…)

So the question, obviously, is this: What is the best way to figure out what to do about AI based on all of this?

In tomorrow’s post, I’ll give this a shot, but I’ll need your help.

One thought on “What motivates students to use Generative AI and what would motivate them not to?”

  1. I have no idea what THE answer is, but in my courses (public speaking, mass comm, media writing, announcing) I encourage students to include their opinions in short writing assignments, or what I call “quick-writes.” This may be the worst idea of all time, but from what I have seen, if students can feel safe in providing their own perceptions, they become invested in doing the work and eventually gain more than regurgitating sentences from the text.
