A look at the impact of Artificial Intelligence on journalism and education now, and where it might lead in the future

(EDITOR’S NOTE: Today we’ll kick off the academic-year schedule with our “Mass Com Monday” post, geared toward a broader discussion for those folks teaching intro classes or those looking for bigger topics to examine. I am apparently at the last university that has yet to start classes, but since you all are going to work, we go to work on the blog.

If you like this content, style or approach, let me know. If not, let me know that, too, as this is a transition in progress for the blog. – VFF.)


A BRIEF RECAP: Artificial intelligence is nothing new, but its more recent applications in education and journalism have brought the topic to the forefront over the past year or so, since OpenAI released ChatGPT. The chatbot could craft reasonably decent written copy, laying waste to the notion that writing is a humans-only skill.

An Atlantic article in December argued that ChatGPT and its successors would eliminate one tried-and-true way in which professors tested knowledge and skills, declaring succinctly, “The College Essay Is Dead.” Others took the new program for a spin in various educational environments, where it did quite well. One writer had it take Harvard’s freshman curriculum, where ChatGPT earned a 3.34 GPA. It also passed the bar exam, did well in business school, and even rattled the cages of med schools with its work.

Journalism has some concerns with the AI issue, in that the ability to abuse the English language has long been the sole territory of ink-stained wretches. The Associated Press established some relatively clear guidelines about what it will or won’t allow when it comes to AI, so that should be one more thing students dread popping up on an AP Style test in the future.

In addition, at least a few publications in the Gannett chain have been keeping up with their workload with the help of AI:

These briefs have repetition problems, structural issues and no real source material to speak of to support any statements of opinion. In other words, we’re looking at about a “B/B-” effort in most intro to sports writing classes. (An Axios report earlier today noted Gannett’s Columbus Dispatch would be “pausing” this sports program, given reader backlash. No word on whether the statement about pausing the program was written by an AI program.)

Given the general freakout about all this, it looks like we’re about six months from this happening…

Or maybe not…

THREE KEY THINGS PEOPLE FORGET ABOUT AI:

  • IT OPERATES OFF OF WHATEVER IS AVAILABLE: The concept of “garbage in, garbage out” is usually credited to IBM programmer George Fuechsel, who coined the term in the 1960s. Simply put, the computer (or any logic-based system) will do what it’s trained to do with whatever input it receives. If the input is good, the output will be good. If the input is crap, the output will be crap. To this point, ChatGPT and other similar programs have been the beneficiaries of a wide array of high-quality content from a vast group of sources. That might not always be the case and even if it is, ChatGPT might not know the difference.
    One major concern raised here is that ChatGPT doesn’t really distinguish between the work of high-quality sources that have created tomes of knowledge and chuckleheads who run blogs. Another is that, as ChatGPT continues to generate more and more content, it becomes a self-feeding loop, like a snake eating its own tail.
    At the point of its launch, any and all material online was the company’s oyster, because nobody really realized what these folks were doing at the time or how they were doing it. Now that folks are digging in a bit deeper, those open lanes on the information superhighway are likely to become restricted, thanks to copyright issues and the folks who own those copyrights. This leads us to…


  • COPYRIGHT OWNERS TEND TO GET TESTY WHEN PEOPLE STEAL THEIR STUFF: The folks running ChatGPT are already getting their first taste of what the legal battle could look like over copyright infringement issues tied to the training and output associated with this program.
    In the simplest of terms, copyright basically says the person who created a work owns the ability to do with that work whatever they see fit. If someone else takes that work and does something with it that you don’t want them to, you can seek some sort of restitution. (Yes, I’m oversimplifying this, but it’s the first week of classes or so and law won’t hit you until mid-semester at the earliest…)
    Several authors have already sued the tech company over the use of their work to help build this thing, as has comedy pro/author Sarah Silverman. The bigger concerns are coming down the road, as a class-action suit in California states that OpenAI’s data scrapers violated “terms of service agreements and state and federal privacy and property laws.” In addition, the New York Times has put a blocker on ChatGPT’s web scraper and is “mulling” a lawsuit against the company. (As a good friend used to say, “It ain’t a lawsuit until it’s filed,” but when an organization as big and powerful as the Times publicly ponders something like this, it’s at least a shot across the bow for OpenAI.)
    If this kind of thing continues, it could substantially limit the effectiveness of AI programs like ChatGPT and potentially force OpenAI to start the process over from scratch.


  • CHATGPT IS ONLY AS GOOD AS OUR FAITH IN IT: If you want to see an amazing look at how simply “believing” in something can both rocket something to stardom and crash the hell out of it in a few short months, watch John Oliver’s look at cryptocurrency and then come back here.
    As much as the people building and playing with ChatGPT might not want to believe it, this system fits that same mold: We use it because it does something for us that we think is good, but the minute we figure out that it might not be all that and a bag of chips, our faith in this thing can crater rapidly.
    According to the Washington Post, the “neat new toy” vibe of this thing is already starting to wane. Additionally, the Columbus Dispatch’s decision to pull back from the AI writing gig, noted earlier, demonstrates that we’re not on the road to Skynet quite yet.

DISCUSS AWAY: Consider a few angles for potential discussion in class:

  • BASICS:
    • To what degree have you played around with ChatGPT? What’s your early sense of what it can do and what it can’t?
    • How and why would you or wouldn’t you use ChatGPT?
  • HISTORY:
    • Look back at some of the other “early innovator” elements associated with our media (Napster, Friendster, AskJeeves etc.) and see how each of them either started a revolution or fizzled out. What kind of pattern do you see for ChatGPT based on these previous efforts?
  • LAW:
    • Do copyright issues concern you generally speaking and do you have concerns about them as they relate to the ChatGPT situation?
    • Is there a way to balance the rights of copyright owners with the interests related to developing software like ChatGPT?
    • If these suits eliminate significant sources of quality material from which ChatGPT can draw, how confident would you be in using this kind of program?
  • ETHICS:
    • Given what you’ve seen about how ChatGPT can write essays and even get you through a freshman year at Harvard, how do you feel this could impact your education or the education of others in your peer group?
    • Is it fair to use a program like ChatGPT to do some of your work? If so, what kind and how much?
