Help me help you help your students: Exploring Mass Com is up for a second edition

“It’s a real book!” and it’s aging, so let’s get the next edition rolling with your help.

 

The good folks at Sage took time out of their busy Tuesday to reach out with a conference call and tell me that my latest textbook was something of an anomaly. “Exploring Mass Communication” was closing in on Year Three in the market and most first editions tend not to do particularly well, they explained. That makes it a tough sell to the powers that be when authors and editors want to pitch for a second edition.

(I lived that experience once with another publisher. The book was not popular enough to merit an improved second edition, while still selling well enough for them to not sell me back the rights to shop it elsewhere. It took 11 years for me to get another bite at the apple, and that was after 10 years of begging…)

In the case of “Exploring Mass Com,” Sage was all gung-ho about getting a second edition to market to make sure it stayed both relevant and popular. The folks set me up Tuesday with a production team and a timeline, meaning we’ll have the next edition of the book out the door by January 2028.

I can’t thank you all enough for the help you’ve given me over the years, both in suggesting content and in adopting my books. Without you all, I’m basically producing exceptionally expensive coffee coasters and door stops. I’m always grateful when someone puts their faith in me and takes a chance on something I’ve done, and I always want to let folks know that. I also want to make sure I’m meeting expectations.

WHAT WE ARE DOING ALREADY: 

The next edition of the book is in revision mode and we’ve already got a few updates planned for it that should help keep up with current events:

The AI Chapter: One of the first things I pitched was adding a new chapter on artificial intelligence and its impact on media. When I started working on this book about 112 years ago, we were still at a point where we were confusing AI with VR and other bits of alphabet soup. Now, obviously, things have changed.

We’ll follow the same pattern in this chapter as the ones in the first edition: A little historical backstory, a look at the important pioneers, a deep dive into its impact on us as media consumers and a look at the careers that exist now, thanks to growth in the field. We’ll also have some exercises and other goodies to make the chapter feel like it’s been there the whole time.

Law Chapter: A lot has happened in terms of what the law says and what the courts have done in regard to media folks and their rights. We’ll be digging into new cases, adding examples and providing folks with a clearer view of the world of both paper law and trial law.

Data and Example Updates: Each chapter will get a refresh as far as the facts and figures related to the topic at hand. This will help shape discussions in class with a little more “spruced up” data as well as the ability to draw from relevant time frames for the students. No matter what we do in textbooks, examples and data tend to get old fast. With that in mind, we’ll hang on until the last minute to plug in those pieces and give you the freshest look at what the world looks like.

Increased and Improved Visuals: When we started the first edition, we had a certain amount of money set aside for photo and graphic permissions. As I have no idea what anything costs, other than Mustang parts, broken pinball machines and 1956 Topps Baseball Cards, Sage kind of “translated” that amount for me into the number of images we could buy with it and how that would break down across the chapters.

However, a funny thing happened to Mustang parts, pinball machines, baseball cards and photo permission costs between when I agreed to do the book and when we actually had to buy the permissions: Costs went through the roof. Unfortunately, no one bothered to tell me or my editor that until we were already in production.

At one point, a permissions editor reached out and told me, “You know you’ve used about a third of your budget already and we’re only on Chapter 2…”

Nope. Didn’t know that. So we had to make do.

This time, however, we know what kind of hand we’re playing with from the jump and unless the Strait of Hormuz impacts the cost of photos, we should be able to better estimate things and get you some more and improved visuals.

 

WHAT I NEED FROM YOU ALL:

One of the best parts about running the blog is that I actually get to hear from people who have seen my stuff and have some suggestions. In one case, a professor has been sending me emailed notes about what he’s doing with each of my chapters and what he hopes I might integrate into the next edition. Rest assured, I’m definitely looking into each and every suggestion to see what I can do to make the book more of “your book” than “my book.”

That said, I could use even more help from a wider array of folks, so here’s the pitch: I need a couple favors.

FAVOR ONE: TELL ME WHAT TO FIX, CUT OR LEAVE. I’ve heard from folks over the years who tell me, “Y’know, your book would be great, if only you had X.” For those people, I try my best to do something with the blog to patch that perceived hole, as by the time they notice something is missing, the book is already in production.

I’ve also heard the, “Why did you get rid of X? I loved that thing!” The reason is usually either a) the concept aged out of being useful or b) someone else told me to kill it and I couldn’t think of a reason to argue.

So, if you’re using “Exploring Mass Com,” or have looked at it but gone elsewhere for any reason whatsoever, please tell me what you like, what you hate and what I need to do to make this better. You can post in the comments below or reach out through the Contact Page.

Any feedback is helpful feedback, so please don’t be shy.

FAVOR TWO: TAKE THE CHAPTERS FOR A SPIN: Every time I pitch a book or pitch a revision, the chapters I write go through a rigorous vetting process that involves experts in the field like you all. Sage has a running list of people who have volunteered to critique chapters when I have them ready for a look-see, and they provide me with a lot of great feedback.

If you want to make an impact on how the book looks, this is the best place to start in a lot of ways. Sage provides you with the chapters and a brief survey about what you think. (I think they give you like a ham sandwich and recognition in the preface of the book, but it also counts for service in a lot of places, people have told me. In my way of thinking, it’s a heck of a lot better than serving on the Committee for Determining Committee Assignments for Committee Work or something…)

If you’re interested, hit me up as well and I’ll get you on my pal Charles’ List of Awesomeness, and he’ll reach out when the time comes.

 

Thanks again for all your help with all of my books and for trusting that my weird way of communicating will somehow make sense to your students.

Vince (a.k.a. The Doctor of Paper)

Reuters editor Jaimi Dowdell talks about the data team’s recent look at AI failures in medical procedures, how to do deep-dive stories and the importance of growth through discomfort as a journalist

The front page of the Reuters story on AI and medical usage.

The debate over artificial intelligence has dotted the pages of the blog significantly over the past year, with posts discussing the benefits some of these tools provide media professionals, as well as posts mocking a number of their errors.

A recent deep-dive article from Reuters, however, took a look at the application of AI to the medical field, with some concerning results:

In 2021, a unit of healthcare giant Johnson & Johnson announced “a leap forward”: It had added artificial intelligence to a medical device used to treat chronic sinusitis, an inflammation of the sinuses. Acclarent said the software for its TruDi Navigation System would now use a machine-learning algorithm to assist ear, nose and throat specialists in surgeries.
The device had already been on the market for about three years. Until then, the U.S. Food and Drug Administration had received unconfirmed reports of seven instances in which the device malfunctioned and another report of a patient injury. Since AI was added to the device, the FDA has received unconfirmed reports of at least 100 malfunctions and adverse events.
At least 10 people were injured between late 2021 and November 2025, according to the reports. Most allegedly involved errors in which the TruDi Navigation System misinformed surgeons about the location of their instruments while they were using them inside patients’ heads during operations.
Cerebrospinal fluid reportedly leaked from one patient’s nose. In another reported case, a surgeon mistakenly punctured the base of a patient’s skull. In two other cases, patients each allegedly suffered strokes after a major artery was accidentally injured.

Among the names listed in the byline of this story is Jaimi Dowdell, one of the pros from the “Dynamics” textbooks and a frequent friend of the blog. 

Dowdell is a deputy editor of the data team at Reuters, a team she’s been part of for about eight years. In 2021, she received a Pulitzer Prize for Explanatory Reporting for the team’s look at qualified immunity. Prior to her time at Reuters, she worked for the St. Louis Post-Dispatch and Investigative Reporters and Editors.

As this story was making the rounds, Dowdell was nice enough to share some insights about the story itself and her approach to investigative journalism. Below is a transcript of an email interview, edited for length and to make my questions sound less like they came from a goober:

 

We have talked over the years about how every story, big or small, has kind of an ideation point. What was it that had you all looking into the issue of AI and medical issues that led to this big story? (and how long did it take you to get it done?)

“I love this question because I think it’s fascinating how stories start. Rarely do we say, ‘I’m going to do a story about X’ and it works out.

“This story began with a conversation with one of my colleagues. He had spent the past six months looking into issues surrounding artificial intelligence and he was interested in how patients were using AI chatbots to self-diagnose. His idea of AI in the medical world got me thinking that AI must be impacting medicine in more ways.

“I had worked with the U.S. Food and Drug Administration’s database that tracks malfunctions and adverse events involving medical devices and knew that could be a resource. I then discovered the FDA had a list of AI-enabled medical devices. I started exploring the data and it wasn’t long before we felt like we had a story. We moved quickly from there. The entire process from reporting to editing and production took six months or less.”

What were some of the bigger “road blocks” you hit along the way and what made them problematic? How did you work around them or how did the inability to get past them impact the story?

“We didn’t have many major roadblocks for this story. One of the challenges, though, was the technical aspect of the topic.

“My colleagues and I spent a lot of time talking to people about the technology that goes into AI-enabled medical devices. I think we really had to understand that before we could write about it.”

About ten years ago (damn, I got old fast…), when we were talking about your “Secrets in the Sky” series with the Boston Globe, you were telling me how there is a point where you have to figure out if the story is really there/worth it. You also mentioned that as a reporter you almost have to have a kind of “obsession” about getting the work done. Was that the case here for you and the team or was it something different this time?

“One of the best parts of my job is the time I’m given to explore topics. It’s a luxury that not many journalists have and I don’t take it for granted. During that exploration period, I often follow many leads and attempt to run down curiosities and questions. Then there’s that point we’ve talked about where you had to decide whether there is a story and it is worth it. For this story, that moment came quickly.

“Over the years I’ve learned that if you don’t have to work too hard to find examples that illustrate the trend or concept you’ve uncovered, there’s a good chance there is a story. There have been times when I’ve worked on ideas where it seems almost impossible to find more than one example. Does that mean it isn’t a story? Not necessarily, but it does indicate there may be a better use of your time.

“That wasn’t the case here. The growth we found in AI-enabled medical devices was the first indication there was a story. From there, it wasn’t too difficult to find examples of devices where potential issues had been flagged. Then, because all of this involves public health, it seemed an important and worthy topic in which to invest.

“And yes, there was an obsession. There’s almost always an obsession! I think that’s another indicator on whether the story is good or not. If I’m not ruminating about a story while exercising or waking up in the middle of the night with an idea, I might not have a great story. Is it healthy? Probably not. Each time I do a story like this, I think, ‘next time I won’t get so sucked in.’ But it always happens. I can’t speak for my colleagues on this story, but based on how often we spoke at all hours of the day, the obsession was there for them too.”

A lot of student journalists find a passion for doing bigger projects, but often find it to be frustrating because of a lack of resources or a lack of time. This might be especially true for those trying to meet work expectations of grinding out X number of pieces a month for student media or making class deadlines set by unreasonable professors. What kind of advice do you have for student or beginning journalists who want to try doing some of the deeper stories like the ones you do?

“The demands of unreasonable professors and the churn of student media are likely not that different from the professional world. There is a lot of news and there are a lot of demands on journalists these days. I didn’t start out getting to spend so much time on stories and neither did most of the journalists who do this type of work. But it can be done. I was able to dig deeper on stories even as a student, and there are ways to get to do this type of work. Here are some suggestions:

  1. Don’t wait for permission. Too often, I see student or beginning journalists look to professors or editors to give them an assignment to dig deeper. If you wait, it’ll never happen.
  2. Be curious. Many of my stories started out because I just wanted to know something. Turn your own curiosities into a question or a hypothesis that reporting can help you answer and test. Do not think, “I’m going to do a story on this.” Instead, just work on being curious and looking for answers. The stories will reveal themselves to you in time.
  3. Start small. Instead of trying to do a big story right off the bat, include investigative elements into your regular work. For example, look for data that can help enrich a daily story. Start looking for source documents as you report. These small victories can lead to big stories down the road.
  4. Don’t be afraid of hard work. This type of journalism can be tedious and frustrating. If you’re feeling like it’s too hard, you might be doing it right.”

If you could tell a student journalism kid anything about anything that you think would be helpful to them these days, what would it be? What would you like them to know?

“I’d say that none of us ever feels like we really have things figured out.

“I remember when I was in college, I looked forward to the days when I would know exactly what I wanted to do with my life. Guess what? It turns out we always feel a bit uncertain and we’re all basically kids.

“So get out there and embrace feeling uncomfortable. Whether you end up in journalism or something else, the growth, I’ve found, is often in the discomfort.”

Cleveland Plain Dealer honcho Chris Quinn writes off criticism of his AI passion project as “uninformed outrage,” while still being wrong about almost everything, including college journalism programs

An early photo of Chris Quinn reacting to criticism of his views on AI, journalism schools and journalism professors. 

 

THE LEAD: Chris Quinn, the VP of content for the Cleveland Plain Dealer, must have a really tired arm from patting himself on the back, or whatever else he does for self-congratulations, as he’s back with another column about the awesomeness of his staff’s use of AI:

The first wave of responses was from regular readers, and most were positive. Several thanked me for showing how we use AI to expand our offerings while maintaining quality.

I suspect I receive little negative feedback about AI now because I’ve written about it so often. I know the anxieties it causes. That’s why I explain how we use it, assure you we are not replacing jobs and promise that humans stand behind everything we publish.

As for anything that might challenge his assumptions, well, Quinn doesn’t have time for that crap:

(A) cranky journalist in another state took offense and on Monday ranted on social media about my practices being the ruination of journalism. Much bombast by others followed.

Or, so I’m told. I didn’t read any of it. I have no time for uninformed outrage on social media channels.

(EDITOR’S NOTE: I was not the “cranky journalist” Quinn spoke of, clearly, because my “bombast” and “uninformed outrage” were published Thursday. Just want to clear that up.)

In the end, Quinn gave all of his supporters a good pat on the head before trying to shame anyone who wasn’t fully on his side:

For those who wrote to say they understand and admire what we’re doing, many thanks. To those who wrote to criticize it, I suggest you look to history to understand that the only path forward is adaptation.

Or, keep stomping your feet until you don’t have a leg to stand on.

CATCHING UP: Quinn wrote an extensive column last week, praising the use of AI as a tool that allowed his staffers to do more reporting and zero writing in some under-covered enclaves of the paper’s circulation area.

In doing so, he decided to take several potshots at colleges, college professors and college students, saying they were doing everyone on Earth a massive disservice by decrying the value of AI, or outright ignoring it.

This led to at least some of the backlash against him, including the piece I wrote here that made what I consider to be three clear, well-reasoned and well-supported arguments:

  • Quinn is wrong about journalism programs not teaching AI or telling students that AI is the devil.
  • AI is a tool that still has a lot of kinks to work out, and it has proven to need some extensive oversight in its current form.
  • The content the PD is producing from its “Report it all, let AI write it up” approach leaves something to be desired in terms of quality.

In his most recent missive, Quinn dealt with almost none of these criticisms, but then again, I really didn’t expect him to. In reading through his 14 previous letters on AI, I’ve found a pattern in his thinking. Two broad underlying premises really underscore why I’d love to play poker against this guy:

The Law of the Instrument: The concept has been around for generations, but it’s often attributed to Abraham Maslow, and it basically states that if the only tool you have is a hammer, you treat everything like it’s a nail.

Quinn has so bought into the premise that everything can be done with a strong set of reporters and an AI grist mill, that it’s clear all stories are getting done this way on those beats. The underlying problem is that not all stories can be done well this way.

Quinn mentions things like people wanting to know the score of the Browns game or the outcome of a vote from some board in Lorain. These stories are great for AI to just crank out.

OK, fine, but what about that story of the teacher who donated bone marrow? Or obituaries? Or other stories in which details matter and storytelling can make a difference? These things get ground up and spit out in a bland way that really undermines the quality of the work the reporters have done.

Quinn isn’t alone in this, as I remember having an argument with a broadcast professor during the “convergence phase” of journalism. I noted that some stories were better done in print or online while other stories were better done in broadcast. He argued anything I could do for a newspaper, he could do just as easily for broadcast.

I mentioned things like budget stories that needed mathematical depth and lacked a lot of visuals for video. He told me how he would take video of people typing on keyboards or how he would throw a copy of the budget on the table and film that.

The underlying point in both cases is the same: Yes, you CAN do a story this way but it doesn’t follow it’s the BEST way to do that story.

Often Wrong, Never In Doubt: I heard this phrase in a documentary on financial investors, where a short-seller explained that certain people are very good to bet against because they lock in on an idea and refuse to be dissuaded, regardless of the reality surrounding them.

When they are wrong, but overly confident, they’ll pour vast amounts of time and money into risky things that end up going wrong, thus benefiting the people who clearly saw the inherent flaws in those things. Quinn fits this to a T.

In reading through all of his letters to the public, never once did he demonstrate one iota of caution. It was, “This is the greatest thing since sliced bread, so you better get on board.” I seem to remember that same pitch being used to market Theranos, cryptocurrency and MLMs.

Even when I’m really certain about something, I’m always open to the possibility that I might not be right. If Chris Quinn brought me to the PD, showed me all the great stuff he’s doing, demonstrated how they’d backstopped AI to prevent any catastrophic failures and presented data on how well this was serving his readers, I’d be happy to give this whole experiment another look. I believe paranoia is my best friend, so I’m always looking out for risks and willing to say when I’m wrong.

Quinn’s most recent letter just drips with hubris, belittling anyone out there who hasn’t fallen in line while ignoring the issues a bunch of us have raised here (particularly regarding his claims about how crappy J-schools are). It’s telling that after a letter in which he basically said professors suck and J-schools suck, he added this tidbit to the end of his latest missive:

Note: I mentioned a student last week who withdrew from job consideration because of our use of AI. Some readers concluded the student attends Syracuse University. That’s not the case. Actually, Syracuse’s Newhouse School of journalism, a valued partner for us, teaches about AI in journalism. Leila Atassi, one of our editors, will be on the Syracuse campus in a few days to help coach students in how we use AI.

That’s exceptionally tone deaf for two key reasons:

  • You spent an entire column telling everyone that college journalism programs ignore or hate on AI and then without an ounce of irony, mention how great Syracuse is and that they’re doing some awesome AI stuff.
  • You crapped all over journalism degrees, calling them worthless, yet you’re dispatching Leila Atassi to Syracuse to teach these kids. I wonder what her background is… Oh… Yeah…

And, I’d like to say I believe in Leila and her abilities, if for no other reason than that she went to a hell of a good journalism school for her master’s (and she was actually one of my students for a while).

In any case, while Chris Quinn thinks I’ll be here stomping my feet until I don’t have a leg to stand on, I’ll actually be watching to see what happens as the PD’s Icarus keeps flying higher and higher on his AI wings.

Cleveland Plain Dealer honcho Chris Quinn took time out of his busy schedule to crap all over journalism schools about their views on AI, despite not actually knowing their views on AI

I hope the computer-based journalism helpers Chris Quinn is putting his faith in work better than the Cleveland Plain Dealer website. I tried to buy a subscription to view his diatribe about journalism schools and AI, only to have a spinning wheel of death show up for about a day or so…

THE LEAD: Chris Quinn, the VP of content for the Cleveland Plain Dealer, took a victory lap over the weekend, bragging about how he removed the writing requirements associated with journalism via “AI specialists,” while also telling journalism schools they suck:

Because we want reporters gathering information, these jobs are 100 percent reporting. We have an AI rewrite specialist who turns their material into drafts. We fact-check everything. Editors review it. Reporters get the final say. Humans — not AI — control every step.

By removing writing from reporters’ workloads, we’ve effectively freed up an extra workday for them each week. They’re spending it on the street — doing in-person interviews, meeting sources for coffee. That’s where real stories emerge, and they’re returning with more ideas than we can handle.

<SNIP>

Journalism programs are decades behind. Many graduating students have unrealistic expectations. They imagine themselves as long-form magazine storytellers, chasing a romanticized version of journalism that largely never existed.

That’s what they’re taught.

 

DISSECTION TIME, PART I: Let’s look at Quinn’s arguments and take some time to disprove them, starting with his view of students and journalism programs:

The Strawman Student: Quinn’s piece begins with an exemplar of how students suck these days, especially because we teach them poorly at every journalism school in the country:

A college student withdrew from consideration for a reporting role in our newsroom this week because of how we use artificial intelligence.

It reminded me again how college journalism programs are failing to prepare students for the workforce.

I don’t have a reason to doubt Quinn that this kid exists, but I also have no reason to trust him. I’d like to see the withdrawal letter/email/voicemail the kid sent and I’d probably also like to talk to the kid.

See, Chris, sometimes people tell you stuff that isn’t true, like “I really wish I could make it to your party” or “The break up isn’t about you, it’s about me” or “It happens to a lot of guys and it’s not a big deal.”

Maybe this kid didn’t want to work for someone who saw their role in the newsroom as feeding grist into a mill for a robot overlord. Maybe they actually enjoyed writing, so giving up the part of the job they like wasn’t worth it to them. Maybe, and I say this as a huge fan of the sports teams, they didn’t want to move to Cleveland.

Could be a lot of things, but blaming it solely on your AI policy helps you nicely set up your argument that journalism schools suck.

 

The Incorrect Overgeneralizations: The bigger problem here is the leap from this one kid not liking something to all journalism programs failing all of the kids out there all of the time. Even if we pretend that this one alleged kid was so allegedly horrified at the Plain Dealer’s amazing-as-hell AI setup that they had to pull out immediately, it doesn’t follow that all kids in all schools are taught to hate AI. This is a classic hasty generalization.

Even if that feels like a bit of hyperbole, let’s at least agree that not every kid who comes out of a program is the exact same in terms of quality, maturity and expectations.

Also, I think we can agree that not every journalism program is created equal, so while the kids at University A might be using smudge pots to ward off the evil spirits used to power AI, kids at University B might be getting some good data journalism help, transcription services and other goodies, courtesy of AI.

Then again, maybe we can’t agree, given this generalization:

Like many students we’ve spoken with in the past year, this one had been told repeatedly by professors that AI is bad. We heard the same thing at the National Association of Black Journalists convention in Cleveland in August. Student after student said it.

Chris, did you bother to dig a bit deeper on this? There are a few nuances that merit consideration. First, who were these professors? Were they in journalism, or were they in departments where they’ve gotten used to grading 500-word essays that AI can now crank out in 18 seconds, thus putting the fear of God into these people?

What kind of AI was under discussion? Traditional AI? Generative AI? Did the professors state that certain AI programs are less helpful than others, or that relying solely on whatever content AI puked up was dangerous from a fact-based standpoint?

Did the professors explain the “black box” and “hallucination” concerns about AI? Did the professors show them example after example after example of how AI completely screwed the pooch, thus trying to help them see that you can’t just turn it loose and hope for the best? 

I’m also curious, given your disdain for journalism programs: Where did the amazing Hannah Drown and Molly Walsh garner the educational pedigree that mixed the poli sci, business and nonprofit knowledge you desperately want kids to have?

Oh… Yeah…

Given their background, I’m wondering how Hannah and Molly feel about this proud declaration you made:

Fortunately for those of us who know exactly what skills we need in applicants, AI has altered the landscape so dramatically that we don’t need journalism school grads.

We don’t need any damned JOURNALISM GRADUATES… Except, of course, the two we hired to do this work that we’re so proud of…

 

The Erroneous View of J-Schools: I’d like to know how many journalism programs Quinn visited in the past five years. A five-year span would cover the time frame where artificial intelligence would have become relevant enough for schools to start embracing a relatively stable set of AI tools.

I’d put the over/under at about three schools, and I’d advise people to take the under.

There are likely colleges that are shunning AI, but clearly many more are embracing specific aspects of these tools.

CUNY has an entire AI Journalism lab for professionals to come back and learn the ropes. Northeastern University is diving into the research and practical ends of AI with its AI Literacy Lab. The Medill School at Northwestern University has its Knight Lab to work on AI and media. Arizona State University has put a ton of resources into its work on AI and news innovation. University of Northern Colorado built a production course that teaches students how to meld AI and journalism effectively.

Stanford, UCLA, Atlantic International University, Florida and Columbia are just a few of the other schools that have journalism-based AI courses on the books, and those are just the ones I found through a cursory search. That’s not even counting all the programs (ours here included) that have infused AI into their current courses, so we can demonstrate the value of the tools while we teach caution as well.

(NOTE: If your school or your class does some AI stuff, feel free to pipe up in the comments section. I bet we could really make a run at the record for most comments on the blog.)

I not only teach about artificial intelligence in my classrooms, but I also include chapters on it in my books and provide basic exercises to educators that showcase its strengths and weaknesses.

What we have here is a collection of facts, supported by links to additional information. I’d like to think that’s a bit stronger case than Chris Quinn’s “Old Man Yells at Cloud” approach to generalizing about what’s wrong with journalism schools today.

 

The “Road Less Traveled” Advice: Quinn’s ignorant view on J-school is problematically compounded by his educational suggestions for kids who want to enter his glorious newsroom:

If you’re a student considering journalism, I’d skip that degree. Study political science. Learn technology. Understand how government, businesses and nonprofits work. Take communications law and ethics as electives. Skip much of the rest.

 

Got it. Just like you did back in the day! Right, Chris? Oh… Wait…

I don’t know if he’s going to be on College of Media and Communication Dean David Boardman’s Christmas card list this year, but I’d love to see Boardman’s reaction to this column… 

 

Aside from the “do as I say, not as I did” thing, if I wanted to tank a kid’s future, I’d pretty much tell that kid to do exactly what Quinn is saying here.

Technology changes so rapidly that whatever the kid learned in freshman year would likely be obsolete by graduation. You can learn tools, but it’s important to know the broader ways in which they should be applied to further your skills and connect with your audience. For example, in my day, we didn’t major in Quark XPress. We majored in design, used the tool in conjunction with our broader understanding of the field and then adapted to technology changes.

In addition, there’s a reason the phrase “Why try? Go Poli Sci” is still heard in the halls of many academic institutions. It’s also much more likely to be in the “paper law” as opposed to the “trial law” end of the spectrum. I’m not saying a certificate, minor or even double major in this field is bad, particularly if you want to take your media skills into the political realm. However, you’re not making it to a newsroom solely on a steady diet of Politics and Genocide or Western European Politics courses.

I’d also like to know where Quinn thinks students are getting their interviewing skills, their social media experience or their general reporting knowledge in this newly formed major he’s promoting here.

Being forced to meet people takes effort, particularly based on how today’s generation of students has grown up in a digital-first, post-pandemic, borderline-anthropophobic world. Research suggests that nearly 45 percent of Gen Z men have never asked someone out on a date in person, so if Quinn is assuming this fresh crop of potential folks can do this without some reporting courses (still a thing) or other forced socialization, I’ve got some unfortunate news for him.

Also, com law might not matter much any more, if what’s happening in the real world is any indication…

 

DISSECTION TIME, PART II:  With that out of the way, let’s pick apart Quinn’s views on artificial intelligence and the glorious way in which it has drastically improved his newsroom:

AI! It’s FANTASTIC! (Usually):  Quinn has gone all-in on AI, which is always dangerous when it comes to a new technology. Actually, it’s usually dangerous in any situation, given that most new ideas suffer a lot of growing pains before they eventually become valuable, and often far less valuable than anyone expected.

Still, he’s a fan:

Artificial intelligence is not bad for newsrooms. It’s the future of them. It already allows us to be faster, more thorough and more comprehensible. It frees time for what matters most: gathering facts and developing stories to serve you.

Anyone entering this field should be immersing themselves in AI.

I’ll buy faster, but I’m not entirely sold on the other descriptors here, given what we’ve seen AI mess up already. Dare I say Quinn is “chasing a romanticized version” of this technological marvel?

AI has fouled up a ton of content in some pretty awkward ways, including calling a guy “useless” in his own obituary, misnaming the city in which an NFL team resides, cliche-festing local sports stories and screwing up an entire development plan in a local news story. That’s not counting the number of times people got tricked by AI sources or generally misled by AI-generated content.

 

Words, Words, Words…: Quinn seems to take an almost perverse level of pride in how much content his staff members can grab and how none of them has to do any actual writing any more:

By removing writing from reporters’ workloads, we’ve effectively freed up an extra workday for them each week. They’re spending it on the street — doing in-person interviews, meeting sources for coffee. That’s where real stories emerge, and they’re returning with more ideas than we can handle.

I get that it’s important to do deeper reporting, spend more time with sources and connect with the communities journalists cover. However, the question becomes, “How much of all that good will and strong effort is wasted if you just toss everything in an AI blender and then watch the content move along like you’re ‘Laverne and Shirley’ at the Shotz Brewery?”

Maybe Quinn doesn’t give a damn, but I’ve found that when I invest a lot in the reporting, I tend to care about the story I want to tell. That usually leads to stronger, more engaging pieces built on well-crafted writing.

Being a writer isn’t a negative, particularly if you want to write for the benefit of an audience that is interested in what you have to say. I think I’m qualified to say that, given everything I sit down to write has me thinking, “Who would want to read this and what would they want to know?”

I’m not sure if AI has gotten to that point yet, but I know good writers have.

 

Quantity over Quality: I forget what movie it was in, but there was a scene in which prisoners were told, “We’ve got good news and bad news. The bad news is that all we have for your dinner tonight is horse manure.” When someone asked, “So what’s the good news?” the official replied, “There’s plenty of it.”

Which brings us back to the Plain Dealer’s Bin of AI Content…

A quick look at the list of stories Hannah Drown put together recently provides some sense of the quantity. Each day she appears to be on the job, a handful or more stories with her byline show up. She’s got coverage of events at the Lorain County Junior Vocational School, a UAW strike in the area, a pop-up shop at the Lorain Community College, a school lockout in Elyria and more. The volume is there.

The quality, however, leaves something to be desired.

These are mostly stories that could have easily come from a press release rewrite, featuring a “Hey, come check out this new thing” approach. These lack depth and nuance, not to mention any level of critical thought. The stories have overly long sentences, generally lack flow and are as dry as a popcorn fart.

For all the bragging Quinn does about reporters getting a chance to sit with sources, meet for coffee and chat these people up, most of the content comes straight from documents, not people. A look through more than a dozen of these pieces revealed virtually no direct quotes or specific references to interviews with these salt-of-the-earth individuals.

For example, a story about a school teacher who donated bone marrow to a complete stranger half a world away would seem to be exactly the kind of piece that would engage readers through amazing storytelling. Instead, we get this lead:

LORAIN, Ohio — Valentine’s Day usually arrives with candy hearts and roses, but this year, one of the clearest acts of love connected to the holiday came without flowers at all.

We get no direct information from the teacher about the experience, nothing from the folks at the National Marrow Donor Program talking about the value of the program and nothing from people who have had their lives saved through some of these selfless acts.

The story has zero quotes in it and reads like a “how-to manual” for getting on the bone marrow registry and donating marrow to someone. Boring doesn’t begin to cover it.

I’m not entirely sure I can blame Drown for this, as it is her job to just shovel content into the front end of the pipeline. It’s also not stated to what degree AI did any work on this (or any other) piece in her clip file, which I’d consider a bit of an ethical concern.

What I can say is that if my name were on these things, I’d want the writing to be a lot better than it is. As we’ve noted before, AI essentially creates an average of EVERYTHING it takes in, regardless of quality, and this definitely feels like “C” writing.

What goes unsaid in Quinn’s magnum opus is that people now have an abundance of media outlets at their disposal that provide vast sums of content. Journalists have to grab people by the eyeballs and hang onto them in a way that distinguishes their work from the noise.

This is where quality writing and keen storytelling come into play and where the generic “held a meeting” leads that AI can churn out will fail.


(FINAL NOTE: I’m sure Quinn would be horrified at the amount of time I spent writing this piece, given his “crank ‘er out” philosophy. I’m fine with it, though, because I believe dedication to one’s craft matters a lot, even if the point is just to tell someone they’re full of crap.)

Tell me how to help people with money I might not actually get: A look at the Anthropic AI lawsuit and its $1.5 billion settlement

As if this semester hasn’t been weird enough, I got this email from a colleague on Monday:

In case you hadn’t seen this, Anthropic is being sued for copyright infringement.  Two of your books were swept up by them, and you are entitled to file a claim for damages: https://www.anthropiccopyrightsettlement.com/ 

 

Abiding by the “if your mother says she loves you, check it out” rule, I did a search on the site and found that he was right.

I’m honored that someone considers my work worthy of theft…

It’s Doctor of Paper 2, AI Pirates 0, apparently:

In one of the largest copyright settlements involving generative artificial intelligence, Anthropic AI, a leading company in the generative AI space, has agreed to pay $1.5 billion to settle a copyright infringement lawsuit brought by a group of authors.

<SNIP>

The settlement, which U.S. Senior District Judge William Alsup in San Francisco will consider approving next week, is in a case that involved the first substantive decision on how fair use applies to generative AI systems. It also suggests an inflection point in the ongoing legal fights between the creative industries and the AI companies accused of illegally using artistic works to train the large language models that underpin their widely-used AI systems.

 

BACKGROUND: Anthropic trained its AI using a ton of content, including a boatload of books and other copyrighted material. In the case of things that were open to the public or properly purchased, this was apparently fine, based on the “fair use” doctrine associated with copyright.

The argument the lawyers for Anthropic made was that the training of AI on these books was a transformative effort, meaning that the books themselves were changed into something else entirely through this process. Transformative acts have long been protected as fair use, and it’s why Google could digitize books as part of a search-engine service and Andy Warhol could present Campbell’s soup cans to the world.

(It’s also why Roy Orbison is likely spinning in his grave over 2 Live Crew’s version of “Oh, Pretty Woman” or why we get thumbnail images before clicking on a link to visit “Perfect 10” magazine, so maybe it hasn’t always been the greatest of things… )

That worked for a lot of the content they fed the AI beast, but unfortunately some of the stuff they fed it came from sites that pirated copies of texts:

(The judge) also found that Anthropic had illegally acquired millions of books through online libraries like Library Genesis and Pirate Library Mirror that many tech companies have used to supplement the huge amounts of digital text needed to train A.I. technologies. When Anthropic downloaded these libraries, the judge ruled, its executives knew they contained pirated books.

Anthropic could have purchased the books from many sellers, the judge said, but instead preferred to “steal” them to avoid what the company’s chief executive, Dario Amodei, called “legal/practice/business slog” in court documents. Companies and individuals who willfully infringe on copyright can face significantly higher damages — up to $150,000 per work — than those who are not aware they are breaking the law.

 

If this dude thought getting the books the legal way was a “slog,” he should try writing a book once…

In any case, I reached out to Sage and they are on this, noting I should be getting a letter or email from them to explain what to do and how to fill out a claim form. News stories noted that authors could get up to $3,000 per text, but I’m pretty darned certain there’s no way I’m getting that.

Sage is really the aggrieved party in this, given that the folks there put in the “slog” to get this book built, shipped, marketed and in the stores in time for the Christmas rush. There’s a mention of royalty percentages, so I might get like 5-10% or whatever of whatever the actual amount is. Then again, I might get nothing.

That said, let’s do the thing we all do when we buy that Mega-Millions ticket: Plan to spend money we might never get…

FUN WITH MONEY: As I noted on the “About” page, comedian John Oliver is my spirit guide in everything I do here. One of the things I love most about “Last Week Tonight” is when Oliver does something incredibly weird to sponsor something he finds particularly important.

It’s why he bought Russell Crowe’s leather jockstrap from the movie “Cinderella Man” and stationed it in one of the last remaining Blockbuster Video stores in the country. It’s why he wrote a book about Vice President Mike Pence’s pet rabbit (Marlon Bundo) and turned it into a fundraiser for the Trevor Project and AIDS United. He even managed to buy the website “John Oliver’s Junk” and use it for an auction that raised more than $1.5 million to support public broadcasting.

I’m sure I lack that kind of star power and I might end up getting $50 and a ham sandwich out of this, at best. Still, not for nothing, but Oliver’s weird fundraising efforts got a Koala Chlamydia Ward named after him, so let’s reach for the stars on this one…

Here’s the deal: Whatever I get, I’ll see if Sage would be willing to match it. Then, whatever we scrape together, we’re gonna do something with it that you think is fun, weird, good or all three and more.

Either post below or use the contact form on the website to tell me what you want me to do with my pirate’s booty, whatever of that I actually get.

A few thoughts came to mind already:

Honestly, it could be anything, or nothing if we get shut out. The point is, let’s plan to do something to commemorate this one time where the words “Vince Filak” and “lawsuit” are a cause for celebration, as we make a point to help someone or something important in a random and oblique way.

Thanks for reading as always.

Vince (a.k.a. The Doctor of Paper)

“The Tool Doesn’t Know It’s Hurting You:” Learning user responsibilities in working with Artificial Intelligence

(I’ve done a number of dumb things with tools. This one, thankfully, has never occurred to me.)

On a relatively frequent basis, I find myself with a new cut, ding, gash, burn or other similar wound as a result of my hobbies. I nearly clipped the top of my thumb off with a Dremel, put a nice slice through the back of my calf with a carpet cutter and slammed my hand onto a piece of sheet metal so hard, Amy could see the tendon that manipulates my thumb.

I’m not alone in my quest for inadvertent body modification, as two of my uncles managed to saw off their thumbs while reaching across table saws. One got his reattached, while the other ended up being only able to count to 9.5 for the rest of his life.

These and other similar moments remind me of something my father told me that his father told him about needing to respect the tools of our trades: The tool doesn’t know it’s hurting you.

In other words, a sander is going to sand when you power it up, regardless of whether it’s sanding off a layer of wood or a layer of your fingers. The drill is going to drill a hole through something, whether it’s helping you remove a spot weld on a piece of sheet metal or giving your hand the look of stigmata. And saws are going to cut, and they won’t really know the difference between a tree limb and one of your limbs.

That’s why you always have to understand the purpose of the tool, treat the tool with respect and protect yourself from the harm that the tool can do to you, because it really doesn’t have any skin in the game, unless it’s cutting through yours.

In teaching media writing, I’ve often made the analogy that every skill we cover is another tool that the students get to put in their toolbox. The more tools they have and the better they practice with them, the stronger their work output will become. In covering AI this week, I reinforced that concept with the analogy outlined above: AI is a tool, neither good nor bad, and you need to understand what it does or doesn’t do before you start playing around with it.

To that end, here are a few suggestions I gave to the students regarding the proper use of AI that I hope might help your folks as well:

USE THE TOOL AS INTENDED: I’ve had a number of bad breaks along the way when it came to trying to use a tool in a way other than it was intended. I’ve broken countless drill bits when I used them on material that was too strong for their composition or tried to widen a hole by rocking the drill around. Neither of these moves was very bright, as I knew better.

That said, I’ve also used tools without thinking twice about how they were actually supposed to be used. For example, it took me a while to figure out why the glass kept breaking in some cabinets I’d refinished before I understood the point of using push-points instead of epoxy.

When someone develops a tool, that tool usually has a specific intended use. When you try to outstrip that purpose or make the tool operate in a way it was never intended to operate, bad things can happen. This is why it’s important to understand what each AI tool is intended to do.

For example, OpenAI’s models were criticized for short-term responses and an inherent need to please people. In responding to each question or statement without a larger understanding of context, along with a stated goal of providing encouragement (while obviously trying to extend user conversations), the models led to a number of problematic outcomes.

When you are building content for consumption as a media professional, AI tools can be great things, but you have to understand what each one does and doesn’t do, lest you find yourself doing more harm than good.

USE THE TOOL, DON’T RELY ON THE TOOL: My great-grandfather was a carpenter and he actually built the house he lived in for the majority of his life. The ability to do this boggles my mind, as I can’t cut on a straight line worth a damn.

The even more incredible thing is that he did it in the early 1920s without the benefit of power saws, battery-powered drills or air-driven nail guns.

If he had those items, I’m quite certain he could have done the job even faster, but he was still skilled without them, making his work less about reliance on a tool.

As with most technological advances, AI can make things easier on us when we want to get things done. People who have mastered tasks like writing, photography, graphic development and more can now do things faster and better thanks to AI, but that’s mainly due to applying their underlying skills to these new tools.

The folks who have mastered these tasks without AI are concerned about what will happen to people who CAN’T function without the AI doing the work for them. These are reasonable concerns, in that it’s never a good thing to become completely dependent on a tool of any kind, lest that tool become unavailable or in some other way problematic.

The best thing you can do in learning media skills is to use AI as one of your many tools, but not let it do the work for you. You need to pair your human nature with those tools to create things that go beyond whatever AI can spit out.

Learn the way in which you can make the tool work for you, and then apply it appropriately.

DO DIFFERENT WORK, NOT LESS OF IT: One of the most tedious tasks for me as a reporter was transcribing recorded notes. It seemed to take forever to get through a small section of an interview and I found myself having to go back repeatedly to get the quotes exactly right. When I learned of true transcription AI, like Otter.ai and others, I found myself falling in love.

The technology was great, it did a reasonably decent job and it took away a task that wasn’t really at the core of what my job entailed. That didn’t mean, however, that I saved myself from doing any work related to this task.

On more than a few occasions, the transcription wasn’t perfect. Fortunately, I was able to play the recording again as I watched the text, so I could make changes to the quotes. In other cases, the quotes didn’t pan out as well as I thought, because they weren’t as pure as they likely would have been if I’d been scrawling text and guessing at a few words. Thus, I had to find better quotes to fit the bill, knowing as I did what was and wasn’t entirely accurate. Although the net benefit was heavily in my favor, it wasn’t a 100-0 blowout.

AI tools do some forms of work for you, which is great, but that doesn’t absolve you of all responsibility. In many cases, it just shifts the work you have to do to something else. Think about moving from being a reporter to an editor in a student newsroom: You are no longer out there gathering facts or bugging people for interviews. Instead, you are asking questions of the reporter, poking holes in the story and generally making sure the reporter is sure.

Take the same approach to AI when you are employing a tool: Check the transcript carefully to be sure it wrote what someone actually said. Check each fact the same way you would if Johnny or Janie Freshman wrote it in their first story for the paper. Scour the material for holes based on your own understanding of the concept, rather than accepting the AI version as gospel.

There are obviously more things you can do to keep yourself on the right side of AI, but like the application of most tools, practice will improve performance and care will limit unintended consequences.

And probably save your credibility from needing a bandage or two.

Do students need to memorize things anymore? (A Throwback Post)

Rote memorization was a large part of my education and my life as I grew up. The nuns had a way of smacking the hell out of you if you couldn’t remember all 50 states or their capitals. We also got put through the paces on our “times tables” with speed and accuracy showing equal value at that point.

Beyond that, we had to memorize a number of crucial things like our locker combinations and the phone numbers for home, grandma’s house and our friends.

(If you don’t believe me, ask anyone over the age of 40 what their home phone number was and they probably still have it committed to memory. Even more, when I was a kid, I would always call my buddy, Mark, who lived across the street to see if he could come out and play. Fast forward to me being in my late 30s and needing to have someone check on my dad. I still remembered that number, so I called his parents’ house at that same land-line number and got the help I needed.)

Today, we lack the need for such things in so many ways. I honestly have no idea what my kid’s phone number is, as my phone tells it to me. I also don’t have email addresses or websites memorized, as they are auto-filled or replaced by apps.

So, is memorization dead, and if so, is that OK? That question took on new meaning when I saw a couple of opinion pieces in the Wall Street Journal. The first, by professor Alex Green, talked about the ways in which AI has robbed his students of important thinking skills:

these core skills are no mystery. They involve an ability to sift through information and understand who created it, then organize and pull it together with logic, reason and persuasion. When teachers dream of our students’ successes, we want to see these skills help them thrive.

For that to happen, students must gain the ability to synthesize information. They must be able to listen, read, speak and write—so they can express strategic and tactical thinking. When they say AI is eroding their ability to speak and write, this is what they’re losing, often before they’ve ever fully gained it.

As much as I totally feel what this guy is saying, I can understand how students (or AI proponents who aren’t students) could dismiss this as “OK, Boomer”-level complaining that really boils down to a professor feeling less important than usual.

However, the second piece, by WSJ ed board member Allysia Finley, has me a bit more concerned about what AI is doing to younger brains through “cognitive offloading:”

The brain continues to develop and mature into one’s mid-20s, but like a muscle it needs to be exercised, stimulated and challenged to grow stronger. Technology and especially AI can stunt this development by doing the mental work that builds the brain’s version of a computer cloud—a phenomenon called cognitive offloading.

<SNIP>

Why commit information to memory when ChatGPT can provide answers at your fingertips? For one thing, the brain can’t draw connections between ideas that aren’t there. Nothing comes from nothing. Creativity also doesn’t happen unless the brain is engaged. Scientists have found that “Aha!” moments occur spontaneously with a sudden burst of high-frequency electrical activity when the brain connects seemingly unrelated concepts.

With that in mind, I go back to this earlier question and I wonder what you have to say about it as professors, journalists and generally smart reader-type folks:


Is memorization a necessary skill for college journalism students?

I know this might seem like a click-bait headline or like I have the answer to it, but this is an honest question for my fellow J-folk out there.

The reason I ask is that I heard a number of students in my writing class grousing about a gen ed course they’re all taking that requires them to do (what I consider to be) an insane amount of memorization for tests. The exams are between 80 and 120 questions each and are to be completed within two hours. They also allow no aids, such as notes or books.

Since most of my classes are skills-based, I tend to avoid multiple choice questions or exams that go this route. However, since I let the students pick their poison when it comes to in-class exams, we do have a mix of “write this” and “pick this” kinds of questions, including multiple choice. That said, I let them have the AP style book and whatever notes and homework I’ve turned back to them. My rationale is that the point of this course is to help you improve your writing/editing/reporting/whatever, so learning from previous successes and failures is par for the course in our field.

However, I have plenty of colleagues who teach large pit classes with more dates and places kinds of stuff who do use the “choose A, B, C or D” kind of questions, some of whom allow notes while others don’t. Is one better than the other? I don’t know. That’s the point of my question here.

Here are a few caveats for the discussion:

  • I know some fields need memorization because looking everything up at the time in which the information is needed doesn’t work well. If you’re majoring in a language, fluid speaking, writing and reading are crucial, thus, memorization is at the core of what we do here. Also, when it comes to the medical field, I don’t want to hear my doctor or nurse saying, “I don’t know… Just Google it!”
  • I used to be of the “what if you CAN’T look it up” denomination of our field. The idea of quick recall mattered when you didn’t have an AP style book at hand or you couldn’t get to the clip files to look something up. Now, we all carry computers with us that can tell us everything we need. (And if you’re going to make the “What if you don’t have service?” argument, I’d counter with, “You’re probably going to be eaten by the ‘Hills Have Eyes’ people, so not knowing when the Council of Trent happened is probably not a priority.”)
  • I also used to be a “You need the basics of our bible” kind of person as well. That meant a lot of AP memorization or at least knowledge of where to go in the book. I still force the kids to read the actual book in early classes so they know where stuff is or what is in there, but now everything is searchable for a reasonable subscription fee on AP. We also have dictionaries online. (It also makes less sense to memorize AP these days, since it seems like AP is changing rules at maximum volume every year.)

What I’m looking at is the idea of forcing memorization in journalism classes and requiring gen ed classes of our majors that rely on this kind of approach to education. Is this the best path forward for our students? If so, why? If not, what should we do then?

I look forward to your thoughts in the comments or via email.

Eight Years a Blogger: Come for the knowledge, stay for the snark

It’s hard to believe this thing is still going after eight years, kind of in the same way it’s hard to believe that the almond-colored refrigerator with the faux-leather texture and Bakelite handle that your parents bought in 1983 refuses to die. I always figured Sage would have decided I was more trouble than I was worth by this point, or that I would have run out of bits of wisdom, weirdly effective exercises and opportunities to mock god-awful mistakes in the media.

Oddly enough, that hasn’t happened. And speaking of exercises, if you still want to get in on Dr. Vinnie’s Bin of Exercises and AI Joy, feel free to hit the link here.

This semester is guaranteed to be a little off as far as the blog is concerned, in that I found out last week I will need to teach a fifth class this term. It’s the second of the five that I’ve never taught before in my nearly 30 years of college teaching and the third of the five that’s not in my area of expertise.

Why, you might ask… Well…

 

The relative insanity that this blog provides me might be my only salvation, so let’s get started with a few thoughts to brighten your day (and allow me to blow off developing a giant roster of PowerPoints and podcasts I will likely use only once in my lifetime):

 

STUPID IS AS STUPID DOES, TOO: In digging through a ton of examples I wanted to use for the upcoming classes I am prepping, I was stunned at the level of general incompetence when it came to making sure things were edited before they went out. I’m not talking about internet memes or mom-and-pop operations posting on an AOL-Dial-Up-Friendly website. I’m talking about actual organizations with money and staff support.

The number of missing words, misspellings and generally bad writing made it tough to find quality examples for the kids. I mean, I can’t exactly say, “Here’s a great press release, if you ignore the three misspelled words in the lead and the sentence structure that makes Tarzan look like Shakespeare.” Of all the blunders out there, I had to highlight this one:

If you are in the state, promoting the state and having a fair for the state, the least you can do is spell the name of the state properly in the headline…

Also, for the sake of irony, I found this job posting for an entry-level PR position with these two key bullet points back to back. And I SWEAR I didn’t Photoshop this:

I looked at it three times and thought, “Is this like one of those tests where they try to trick you? Like that one speed test where you are supposed to read the whole set of directions first, so that you figure out you only need to do the first thing on the list?”

Or do they just really need proofreaders that badly?

Speaking of someone who needs a proofreader:

If you really need something that big to house that item, I feel sorry for your significant other…

 

DID THAT REALLY JUST HAPPEN? I’ve frequently noted that paranoia is my best friend, so much so that I often find myself doing double-takes on things I swear I saw that turn out to not be as bad as I thought. It usually comes up when I see a sign for “angus” burgers or “first-hand jobs” or something where my mind drifts to the terrible error, even if there isn’t one.

That said, this Milwaukee Journal Sentinel headline on my phone really should have freaked out a couple people somewhere at the newspaper:

For starters, that’s not Cavalier Johnson unless I have officially gone blind from computer monitor radiation. Here’s his official city photo:

I have no idea who the dude at the podium is, but Arnold Schwarzenegger and Danny DeVito made a more convincing set of Twins than the two people in the photos above.

Second, and this is really what caught me, that has got to be the worst headline break any human or computer could have made with this story. When I saw that “Johnson speaks with black talk,” I think my brain broke, before remembering Robert Townsend’s spoof of how white people do stupid stuff in Hollywood.

I understand that everything can’t be perfect in every publication, but I also know there are certain topics that need a little more attention and care, due to their sensitivity and the long history of insensitivity associated with them. This is one of those where someone fell asleep at the wheel.

Conversely, sometimes we can really go a bit far in clarifying things for our readers:

Thanks for the clarification, CNN. Otherwise, I might have been confused…

And finally…

I, (FILL IN NAME HERE), AM HAPPY TO HELP (FILL IN NAME HERE): As is the case every semester, I got a series of “could you please squeeze me into your full Writing for the Media class?” emails over the past couple weeks. The excuses are usually the same (I missed my registration day, I accidentally dropped it, I died while donating my heart to my cousin, but thanks to revolutionary bionics, I’m back now…) as are the ramifications they cite to nudge me in their favor (I need this to graduate, I can’t move on without the class, I’m planning to join a biker gang but they won’t take me without a bachelor’s…)

This one came oh so close to moving me…

Look, AI can be helpful in some cases, but you really gotta meet it halfway…

And off we go on another semester-long adventure. Let’s stay safe out there…

Vince (a.k.a. The Doctor of Paper)

As Promised, Here is Dr. Vinnie’s Bin of Journalism Exercises, Complete with AI Toys

It might not be as cool as opening Marcellus Wallace’s case, but I hope this pile of exercises will still make you as happy as Vincent and Jules.

 

As many of you are starting back to school, I figured I’d break radio silence with some goods I promised to deliver by the end of the summer. Back in May, I asked what kinds of things you needed from me and you were all nice enough to hit me with some ideas. Some were really concrete (“These kids need stuff to learn how to write a #%^#ing lead.”) while others were more nuanced (“They need to play with AI, but in a way that helps them see what it can and can’t do. I have no idea how that would work, though…”)

I’ve put together a bin of stuff that tries to cover the gamut of needs, while offering you choices as to how to apply the exercises in your own schools and for your own needs. Think of each of the files as kind of a Swiss Army knife of opportunities that you can use as you see fit.

It should go without saying that the content is either made up or fictionalized versions of stuff that actually happened, so don’t freak out if you see something and think, “Oh my lord! What hath hell wrought!”

Here’s a brief overview:

Breaking News/Ongoing Situation Stuff: A couple of files contain information that is divided into several stages of release. The idea is that, just like in a breaking news story or an ongoing event, some of the information you receive matters more than the rest, and that hierarchy shifts as new details emerge.

It’s meant to mirror the “King of the Mountain” exercise approach we blogged about some time ago, with newer information forcing students to reconsider what’s most important in updating their content and giving them the critical thinking skills needed to do so.

There is also a straight-up crime story with some quotable material for you to play with.

Raw Materials Folder: This is a collection of nothing but interviews that students did over the years on topics that seemed relatively universal (fires, campus illnesses, thoughts on Gen Ed classes, TikTok, etc.). I stripped out all the names/identifying features for this and left you with some red text where you can insert names that reflect your student body, places your students would know and so forth.

There are also a couple of city council stories that are god-awful, which I rebuilt from local newspapers with name changes, area changes and so forth. Those can be helpful if you want students to work on revising bad copy or with some of the AI stuff we’ll talk about later.

Standard Story Stuff: I’ve put together a list of standard stories that we all tend to write in various types of journalism (breaking news, meeting/speech/news conference, localization etc.) with some suggestions regarding length and source count.

If it’s more of a reporting class, obviously, you can send them out to do stuff, but I included these for the media-writing folk who might not have students ready to go the full “Lois Lane” out there. These can meld nicely with some of the stuff in the raw material folder.

I also tossed in a “canned game story” for people who need to take a shot at sports writing before they cover a game. It includes stats, scores and post-game interview quotes that can be used to build a solid game recap on a pretty fun game.

AI Fun: I built several AI exercises that allow the students to see how AI can be really helpful and where it can fall short. It includes a trial run for them at something I did for the blog over the summer, involving interview questions. A couple of them also can be used on the raw material files to have AI build a story either before or after they do. This can show them how AI might or might not get the gist of what is important in a story.

HOW DO YOU GET THIS STUFF:

In normal times, I’d just post it here, but between AI and overly industrious students, I’m a little leery of just leaving a pile of stuff on the open web. So, to give you the goodies while protecting the pile a bit, here’s what you do:

Hit me up via the contact form on the blog, which is linked here. Just tell me your name, your school, your email address and anything else you think is important. I’ll then send you the unlock for the folders and you can go hog wild.

It’s all freebie and you can do whatever you want with it for your class. If you run into a better way to use this stuff, or have an idea you want to share, I’d appreciate it. Also, I don’t care if you’re using any of my books or not, so this isn’t an exclusive party for adopters. I just like helping people.

Have a great start to the semester. We’ll be back full time next week after Labor Day.

Vince (a.k.a. The Doctor of Paper)

 

 

Journalists: If your mother says she loves you, go check it out (and then be damned sure it’s true)

As the Russian proverb says, “Trust, but verify.”

The long-held adage of journalists saying, “If your mother says she loves you, go check it out,” needs a little more updating these days, as it seems like NOTHING is as real as it seems. Thanks in large part to corner-cutting, scam-baiting and general laziness, we’re finding a lot of cases in which it never hurts to make absolutely sure you are sure. Here are a couple examples:

 

AT THIS POINT, JUST ASK THE MAGIC 8-BALL:  A few months back, we highlighted Rob Waugh’s story about journalists being duped by AI “experts” who churned out content that ranged from generic to highly suspect. Waugh’s latest piece found that journalists who are using supposedly “legitimate” systems to connect with sources are also now at the mercy of AI spamming, all while paying for the privilege of getting screwed:

A PR agency is selling an AI tool that automatically answers pitches from journalists on services such as ResponseSource, HARO and Qwoted.

The AI tool, called Synapse (not to be confused with PR pitching platform Synapse Media), “reads” questions sent for expert comment by journalists via the services, then analyses sources such as books, podcasts and reports per query and uses AI to draft email responses.

Journalist-request services such as HARO charge a fee for connecting PR agencies with journalists. PRs can send out story pitches to journalists and also get access to requests for expert comment.

<SNIP>

The makers of Synapse, Lithuania-based PR agency Wellstone PR, boast that it has a 7-8% acceptance rate, and that used by a human “editor”, it can answer around 20 pitches per hour with one human PR person able to do the work of five.

The company brags in its pitch to potential clients that it will provide them with fictional content that sounds so much like the real thing, journalists won’t know the difference. I don’t know if it bothers me more that a) it would appear journalists are getting inept/lazy enough not to notice that they’re being fed total bullpucky or b) that the PR professionals are writing such average, generic stuff so often that journalists can’t distinguish it from whatever garbage an AI can produce.

The cost for this service is a one-time fee of $2,500. The only saving grace is that they apparently haven’t sold this to anyone. Yet.

The PR experts quoted in Waugh’s piece are clearly not thrilled by this breach of trust.

Andy Smith, founder of Sourcee, which aims to offer credible, video-checked experts, says that using AI tools in this way erodes trust.

Smith said: “When journalists post a journo request, there’s an implicit level of trust in the person replying. They’re hoping to hear from real people with genuine insights, experiences, and expertise that can bring their story to life.

“They certainly don’t want to receive an automated, AI-generated reply… if that’s what they were after, they could’ve just used ChatGPT themselves.”

One thing that has me rethinking my “stop using these stupid ‘OK-Cupid-For-Quotes’ sites” stance was a point that Smith made about how a good expert pool, like the one he says he’s built, is meant to broaden the reach of journalists. That can prevent them from only reaching out to familiar contacts and ending up with the “usual suspects” in every story. Still, if I had to pick between usual humans and a random lottery of AI word salad, I’ll stick with my boring peeps, thanks.

 

WSJ PLUS AI EQUALS WTF: If Cliff Behnke isn’t spinning in his grave over this situation, it’s only because he’s actively crawling out of it to come smack the shit out of somebody right now:

A story about development plans for a vacant downtown block that appeared on the front page of the Sunday, July 13, issue of the Wisconsin State Journal was removed Wednesday from the Madison daily newspaper’s website before being replaced by a “re-reported” story Thursday afternoon.

An editor’s note on the re-reported story states that the original “contained incorrect information and quotes that were created by an unauthorized use of AI, which does not adhere to the Wisconsin State Journal’s editorial or ethical standards.”

The story topic itself, both in what I can find of the original and the reboot, is a simple, boring tick-tock story about a development project, in which the material for the proposals is all easily accessible. This wasn’t like the reporter needed whatever the hell Tom Cruise is using in the latest “Mission Impossible” movie to create a miracle out of thin air.

It’s not quite clear what’s more terrifying: that a reporter decided to cut a corner on something this vanilla and didn’t bother to verify at least a few basic facts, or that the editorial process didn’t catch something so wrong:

One section about “The Grove,” a proposal from Neutral, a Madison real estate development firm, outlined plans for a “food hall prioritizing minority-owned vendors” and a “community advisory board” that would shape the development’s public life. Neither of those features, nor the name “The Grove,” appear in Neutral’s proposal linked from a city press release listing the firms that responded to its RFP.

“It’s all wrong,” says Daniel Glaessl, Neutral’s chief product officer. No reporter had contacted the firm about the project before Isthmus reached out on Thursday afternoon, he says.

The Synapse people in the story above are like, “Hey, AI will write all your stuff, but don’t worry! The editor will be there to ‘create enough friction’ to prevent anything terrible from getting into the public sphere.” I’m having even bigger doubts about that concept now, especially since I know the folks involved at the WSJ and I have always respected and admired them. It always hurts a little more when it happens where you live.

And finally…

FOUL BALLS: It’s not a stretch to say that the sports memorabilia world is a multi-billion-dollar industry, in which athletes get paid exorbitant amounts of money for scrawling their names across all manner of items. What makes an athlete’s autograph worth the big bucks is a confluence of the awesomeness of the athlete and the rarity of their willingness to sign items. The more people want an athlete’s signature and the fewer of them exist, the more likely there will be fraud involved at some level.

In the 1990s, the FBI dug into the world of fraudulent autographs with “Operation Bullpen,” a multi-year investigation that took down a series of forgers who made a living faking the signatures of Joe DiMaggio, Mickey Mantle, Mark McGwire and others. In response, the major sports leagues and the preeminent autograph certification houses began using holograms, certificates, registration numbers and QR codes to assure buyers that the autographs they owned were, in fact, real.

It didn’t work as well as they had hoped:

Brett Lemieux, a 45-year-old resident of Westfield, Indiana, was the founder of sports memorabilia site Mister ManCave, which claimed to have sold millions of counterfeit items with net profits exceeding $350 million, and had “the largest framed jersey inventory on the web.”

Lemieux made the claims in a now-deleted Facebook post on the “Autographs 101” group Wednesday, saying the money was “too good” to pass up and that he wanted to stop the fraud, which had been going on for the better part of two decades.

Part of Lemieux’s post, bragging about his fraudulent creation of autographs and the holograms of multiple companies meant to prevent such fraud.

Over the past week, I’ve seen a ton of people submitting their Shohei Ohtani, Mike Trout, Derek Jeter and other autographs to online authentication groups, only to find that the balls, bats and photos are fakes. Many of these items have one, if not more, authentic-looking holograms, to boot.

I know this seems far afield for journalists, but it really emphasizes an important point about how and why fraud persists in our space as much as it does everywhere else: If there’s money to be made in an easy way, people without scruples are going to take advantage of the situation.

That means we all have to be extra cautious about what we are willing to accept at face value and what we are willing to walk away from if the situation doesn’t feel right. It’s especially true when we really want something to work out, like getting that crucial source or making that tight deadline.

I’ve often said that paranoia is my best friend. Feel free to make it yours as well.


Ask a generic question, get a generic answer: The perils and promise of letting ChatGPT build your interview questions

Believe it or not, we actually used to sell beer like this. Try not to let AI make your interview questions as generic as this marketing approach.

As I noted in previous posts, I wasn’t significantly worried about ChatGPT or other AI programs taking over journalism at this point. The writing wasn’t clean, the material we cover is too specific and the nuances of the language tended to create more problems than solutions.

That changed a bit when a former student told me people in her orbit were using ChatGPT to create interview questions for everything from press releases to profiles. My concerns ranged from students becoming lazy to ChatGPT asking some pretty offensive questions that ill-prepared or inexperienced writers would use to their own detriment.

To see what would happen, I relied on a situation a friend found herself in last week while teaching at Missouri Boys and Girls State. One of the guest speakers was Barbara P. Bush, daughter of former President George W. Bush and granddaughter of former President George H. W. Bush. She has also been a significant figure as an author and global health care activist.

My friend noted that one of her students had an exceptionally good interview moment with Barbara P. Bush, which we’ll get to in a moment. Before we do, I decided to let ChatGPT take the wheel and build me a set of interview questions for the former First Daughter:

Here are five thoughtful and engaging questions an interviewer could ask Barbara Pierce Bush, daughter of former President George W. Bush:

  1. Your work with Global Health Corps has been widely recognized—what inspired you to focus on global health, and how has that mission evolved over the years?
    (This connects her personal passion to her professional work.)

  2. Growing up in one of America’s most prominent political families, how did you navigate forming your own identity outside of politics?
    (This explores the personal challenge of living in the public eye.)

  3. You and your twin sister Jenna have often shared stories of your childhood—how has your bond influenced your adult life and careers?
    (This taps into her well-known relationship with her sister and could elicit warm, relatable stories.)

  4. What lessons did you learn from your mother, Laura Bush, that have shaped how you approach leadership and service?
    (This allows reflection on legacy and values from a female role model.)

  5. In today’s divisive political climate, how do you think members of prominent political families can help encourage civility and understanding?
    (This invites her to speak thoughtfully about public discourse without diving into partisanship.)

It’s a solid list, but here are a couple key problems I noticed:

  1. Bush has been asked and has answered most of these questions at length before. This means she probably has a pretty standard answer for them, so you’re not getting anything new.
  2. Not one of them directly attaches itself to the idea of her dad being the “most powerful man in the world.” (I always believed my dad to be the “most powerful man in the world” back when I was a kid, as I’m sure a lot of kids my age did back in the day. That said, my dad didn’t have Seal Team 6 or the 82nd Airborne to deploy when my room got too messy for him to tolerate.)

The question my friend’s student asked perfectly dealt with those two problems:

“How did your dad change after Sept. 11?”

Bush teared up immediately, not expecting a question like that, nor one phrased exactly that way. She gave a great answer about how he changed but he would always be her dad. She also thanked the student for asking that question and really tapping into something different.

Even in that scenario, it’s hard to pick on ChatGPT’s effort. These are good questions, and the bot does a good job of rationalizing what each question should provide the interviewer. I could clearly imagine being flustered if I were to interview someone famous like a former First Daughter, who has carved out an impressive career of her own. ChatGPT’s questions would put me on some better ground than, “So what’s it like to be a twin?” or “Was it weird having your high school dates pick you up at the governor’s mansion?”

That said, ChatGPT struggles a bit more when it comes to people who aren’t internationally famous. I picked a Facebook friend at random and tried the same thing, ending up with one of my favorite high school teachers, Greg Frederick, as the guinea pig for this situation (Sorry in advance, Mr. Frederick…):

Here are five thoughtful and respectful questions an interviewer could ask Greg Frederick, former teacher at Pius XI High School in Milwaukee, Wisconsin:

  1. What initially inspired you to pursue a career in education, and what led you to Pius XI High School specifically?

  2. How did your teaching philosophy evolve during your time at Pius XI, and what impact did that have on your students?

  3. Can you share a memorable moment or experience from your teaching career that still resonates with you today?

  4. How did you approach balancing academic rigor with the values and mission of a Catholic high school like Pius XI?

  5. What advice would you give to new educators entering the profession, especially those teaching in faith-based schools?

A couple of these are solid “opener” questions that could get the ball rolling for a decent interview. Why you wanted to be a teacher and why you ended up teaching at Pius for as long as you did could establish rapport. The third question is always a good one, allowing the person to open up about something important without forcing the interview subject to feel pigeonholed.

A couple of these feel problematic: The second question is a bit iffy in that it assumes he changed over time (not always a guarantee) and that the change impacted his students (again, not a guarantee). The fourth question could be taken poorly, as someone might read it as implying that the values and mission of the faith are at odds with academic rigor.

The bigger thing, again, is that they’re generic. These could be asked of any biped who spent an appreciable length of time teaching any subject at any religious-based high school. In short, there’s less out there on Mr. Frederick because he’s not a president, a pope or a Kardashian.

I don’t know him as well as a lot of other people, I’m sure, in that I was a student at the school for four years during a specific time period, and I haven’t kept up with his life like a stalker in a made-for-TV movie. That said, I do know certain things that might have led to different questions:

  • He taught English at the school, including upper-level courses like honors English and a course in science fiction and fantasy literature. Sci-fi seems to predict the future in a lot of ways, so I’d love to ask a question relating his work in that area to what we are seeing now (social media, phones glued to our souls, etc.). I’d also like to know what thoughts he would have on how best to inspire his best writing kids to do good work these days, given AI’s rampant use in generating essays.
  • His wife was also a teacher at the school for a time, although not in the same department. A question about what it’s like working in the same place as a spouse, including benefits and drawbacks, would be interesting to me and probably a lot of people who face similar issues. I know how weird it was when Amy was working for the MU police department and I was running the crime beat for the local newspaper. I wonder how people who actually work closer than that find themselves coping with it.
  • His kids went to that school, including his son, Chris, who was in my grade. This opens up a ton of real estate for questions, particularly for me as I was also a teacher’s kid. After kindergarten, my mom and I were not at the same school ever. I’d be interested in finding out what discussions led to the Frederick kids going to school where mom and dad worked. I wonder if they’d do it the same way again, based on how it all worked out for the kids and the parents.

(SIDE NOTE: Chris and I participated in a number of musicals together, which speaks more to the desperation the directors felt in getting males into the productions than it did to our musical acumen. In preparing for one such musical, the choral director actually told both of us to just “mouth the words” during the songs as we were not only godawful at singing, but we were screwing up everyone else who could carry a tune. Good memories…)

  • I’d be interested in his take on things like homeschooling, which has really beefed up its presence over the past few decades, in terms of teaching your own kids or being able to have other people teach them. I’m sure I could blather on forever here, but let’s move on…
  • It never hurts to ask the “So what are you doing now that you’ve retired?” question, in that it opens the door to something people who learned from Mr. Frederick or taught with him probably want to know.

Again, this is just based on some basic stuff that I know that ChatGPT doesn’t. If I were to end up doing a profile on Mr. Frederick, I’d take my normal “build the donut around the hole” approach and interview his family, some people I know who taught with him, some of his students who have gone on to work in specific fields and so forth. The goal would be to figure out what kinds of questions to ask that are more on point about him specifically as opposed to Generic Catholic High School Teacher X.

This is why it’s important to use ChatGPT as a tool in your toolbox and not view it as a panacea for completing your work. If you don’t go beyond the basics that ChatGPT can offer, you will never create anything better than a basic piece.