If we could translate this into Latin, I think we’d have a replacement for the journalist’s motto.
Since AI isn’t going away any time soon, journalists and journalism educators are in a bit of a bind when it comes to how best to use it or to help students use it appropriately. This week, we’re doing a three-part series on the blog that takes that “overhead” view of generative AI from three key angles:
- The tools
- The potential perils
- The human angle
You can click on the links above for either of the first two parts. The final part is below and a lot more complicated: The human factor.
As we’ve noted before, generative AI is a tool. As such, it can be used for good things or bad ones, based on the person using it and why they’re using it. Let’s consider a few aspects of human nature that can help make AI a useful friend or a mortal enemy:
WHAT’S MY MOTIVATION?: One of the luckiest breaks I ever caught in life was getting into a psych course on human motivation as part of my doctoral program. It not only hooked me up with Kennon Sheldon, a professor with an incredible history of publishing important psychological studies, but it also helped me to better view how people choose to act or not act based on motivating factors.
One of the aspects of self-determination theory (SDT) we looked at was the spectrum of motivating factors that influence people. As it is a continuum, it has a wide array of motivational forces, but scholars have identified four key “stopping points” along the spectrum that capture specific forms of motivation:
- Extrinsic: You are motivated by an outside force to do something that you otherwise wouldn’t want to do. An example of this might be when your parents told you to clean your room or you would be grounded for the weekend.
- Introjected: You are motivated to do something through coercive actions of others, such as guilt or shame. An example of this might be skipping a concert you want to attend so you can visit an elderly relative in a nursing home because your parents told you, “Well, she probably doesn’t have much longer to live and I KNOW how hard it would be for you to live with yourself if you didn’t at least see her a few more times.”
- Internalized: You are motivated by finding value in the outcome of an activity, even if the activity itself isn’t all that enjoyable. An example of this would be a nursing student studying really hard for the NCLEX test. They don’t like the idea of the test, but are motivated by their desire to become a nurse, which requires them to pass this test.
- Intrinsic: You are motivated by the natural joy of the activity. This is the purest and best form of motivation. An example of this would be Amy and her approach to knitting. She never seems to care if she finishes a project, but rather she enjoys the act of knitting as well as the joy she gets from trying new patterns, new yarn and new needles.
In looking at this, it’s a pretty safe bet that most of what students do for classes comes down to extrinsic motivation, as they are required to write essays, do assignments and take tests because someone else is forcing them to do it. (Insert your favorite joke about Gen Ed classes here.)
When forced to do something, people will take the easiest way out possible, which is where generative AI comes in. If the only goal is to “get through” whatever someone is forcing you to do, you’re going to let AI take the wheel and just get it done. This is particularly true for those 800-person pit classes, where students figure they have a pretty good shot of getting away with it.
Helping students find motivation to do things that will benefit them in both the short term (pass this class) and the long term (get a better career) can be enough to move them toward a more self-motivated state.
This leads into the second human issue…
THIS MATTERS BECAUSE: We have found that people are more likely to value things if they understand why those things are important. In short, “Why do I have to do this?” A key part of motivational research is what is called autonomy support: If you can give people choices, or at least explain why they have to do something when a choice isn’t possible, they tend to adopt the motivation as their own and do better at it.
When you give them a “Because I’m a PARENT, that’s why!” answer, well, they tend to really hate it and extrinsic motivation rears its ugly head.
I had this discussion/argument with people from our history department. The university was cutting several requirements that were making it difficult for kids to graduate on time and/or with multiple majors/minors/certificates.
One of the “forced classes” was a history one, and I overheard the history folks talking about the situation. One person thought it was a good thing, because maybe the kids would then find history on their own and choose it for a minor. A senior faculty member argued this would be a disaster because, “The kids will NEVER choose us.”
I nosed my way into the conversation and asked to what degree they explained the value of their specific classes to students who were taking them. The senior faculty member was offended: “How could you ask such a thing? Do you think history DOESN’T have value?”
Um… No… but if that’s how you approach your classes, I understand why the kids might feel that way.
The more I kept trying to explain that if you want the students to value something, you have to tell them why they should value it, the more I seemingly upset this man. Apparently, in his way of thinking, history is so damned important and he was such an expert-level vessel for this knowledge that there was no need to provide a rationale for the coursework he put forth. At the end, I left him with this:
“I know that kids can take my class because they want to or because they are forced to. Either way, I make sure to tell everyone what we’re doing, why we’re doing it and what value it has to them in moving forward in their lives. They may still dislike the thing I’m making them do, or they may disagree with my assessment of the situation, but they at least understand the ‘why’ and that tends to make them more willing to do the work than to try to weasel their way out of it.”
Understanding what motivates people to do (or not do) things can help us figure out how to help them use AI without abusing AI. Help them understand why they need to do an assignment, or understand a form of writing or complete a task and they’ll likely find a way to buy into your argument.
If we CAN’T show them a “why” answer, maybe that says more about what we’re doing wrong as educators than what is wrong with “kids today.” Is that 10-page paper really helpful to the kid, or are you just so used to grading them that it would be an inconvenience to you to change? Is the textbook you are using a good one, or do you not want to revamp your whole class and redo all of your lectures, so you stick with what you have? Is the kid really losing out on some major life skill or element of citizenry if they let AI take the wheel on an assignment, or is there a better way to make AI part of the process?
Logic has a lot to say about helping people find value in what they need to do and becoming motivated to do the right thing. Unfortunately, logic isn’t always driving the bus, which leads to the last element…
PEOPLE ARE BASIC: We can talk about this from a variety of angles, but the long and short of it is that humans are base-level creatures in a lot of ways.
Our minds are geared to be “cognitive misers,” which is why we find ourselves in mental ruts and often relying on stereotypes to make sense of the world around us.
We are social animals, which leads us to social dominance behaviors that tend to have some folks trying to assert their high value over other, “less valuable” individuals within the collective. If you don’t believe me, hang out for three hours in a middle school classroom and get back to me.
We’re also driven by some baser instincts in regard to our physiological needs. Or, to quote Jeff Foxworthy, a lot of people would like to get a beer and see something naked. (Pretty much every technological development related to media in the past 50 years has in some way gotten either a significant start or a major boost due to that base-level drive for naked stuff.)
Generative AI takes these base-level human drives and supercharges them in a way that other forms of technology haven’t. Whether it’s trying to prove dominance, trying to be lazy or trying to be a pervert, AI puts it all and more at the fingertips of the worst among us.
AI chatbots have been linked to false claims of harassment by political figures in New York. Why would anyone think to write a prompt for the chatbot that included the term “sexual harassment?” There’s no good answer here, other than some idiot saying, “Hey, y’know what would be HILARIOUS? Ruining someone’s life!” (Here’s a link to a trailer for “The Anti-Social Network” that pretty much encapsulates the whole process by which general dumbassery becomes a toxic weapon when added to a digital platform.)
This is why you have stories like this one about a MIDDLE SCHOOL in California, where students were using AI to turn images of their friends into nude pictures. Or like this one about the students at a CATHOLIC HIGH SCHOOL using AI to create naked photos of their peers. Y’know, just like Jesus would have done to Mary Magdalene, if he had the technology…
Even pedophiles are getting in on the act, which is the start of a sentence that never ends well…
As with most things in the world, the worst among us will use AI to do some godawful stuff. I’m not sure we can avoid that, but we do have a need to instill in the other 90% of the world the reasons NOT to be like those folks.
DOCTOR OF PAPER HOT TAKE: Every time I think of what AI could be used to do and what it’s ACTUALLY being used for, I go back to this scene from “Idiocracy.”
The issue remaining for journalists and journalism educators is how to make AI the tool in the toolbox it can be while avoiding the perils of a society that is slowly riding a Dumpster fire to hell.
The question we need to answer is this: What kind of coherent argument can we, as journalism instructors, make to the students in our classes that it is in their overall best interest to do the work we assign them instead of letting a machine do it for them? And how can we present that argument in a way that they will understand it, agree with it and motivate themselves to abide by it, in the face of the human frailties we’ve discussed above?
I don’t have an answer for this, but if you figure it out, please post the answer below.