
Meet Elizabeth Hubbell, a 25-year-old skin-care expert who is willing to be a great source for your next story on anything makeup or skin-care related. She’s actually completely fabricated. Her picture came from an AI generation site and her name is a combination of my car (Betsy) and a baseball player whose card I had lying around (Carl Hubbell). Careful. It’s dangerous out there…
When it comes to doing interviews, I always tell students they need to do them in person. In response, they often look at me like I’m asking them to use a teletype machine or some semaphore flags. It’s easier, faster and more convenient for both parties if they can do a text, a chat or an email interview, the students say.
I argue that the face-to-face interview allows for a deeper connection for profile and feature pieces. This approach can also prevent sources in news stories from weaseling out of answers they could otherwise massage through several drafts of an email. Plus, if I spend some time in the source’s environment, I can probably find a personal effect that could give us something to talk about, like a family photo, a kid’s drawing or a sports item. At the very least, it’ll help with scene setting.
Apparently, there’s another good reason for my approach these days: Your easy-to-access, extremely helpful expert source might be AI:
Since the launch of ChatGPT in 2022, anyone can generate comment, on any subject, in an instant.
It is a technology that appears to have fuelled a rise in expert commentators who have appeared widely in national newspapers but who are either not real, not what they seem to be or at the very least have CVs which do not justify their wide exposure in major newsbrands.
The rise in dubious commentators has been fuelled by companies that charge the PR industry in order to share quotes via email with journalists who have submitted requests for comment.
Journalist Rob Waugh found that in a number of cases, digital outlets were mass-generating content from these supposed experts, giving everyone from news journalists to PR practitioners the exact quote or information they needed on a wide array of topics. However, when challenged to engage more deeply about who they were or what they had done in life, the “sources” suddenly had difficulty:
She has been quoted in Fortune talking about “loud budgeting” and by Business.com talking about the best countries in which to obtain a business education (both sites are based in the US).
A profile on Academized describes her as a “biochemist and science educator”. The same byline picture also crops up on a publisher called Leaddev, for someone called Sara Sparrow. Rebecca Leigh has written for DrBicuspid.com about how to write a business plan for your dental practice where she is described as a writer for Management Essay and Lija Help (two online writing services).
When challenged via email to do something that would be difficult to do with AI image-generating software (send an image of herself with her hand in front of her face) or prove that she was an environment expert, Rebecca stopped communicating.
One AI source, “Barbara Santini,” was particularly prolific, both in volume and in the array of topics she could cover for journalists. Waugh found this roster of publications that had included Santini quotes:
She has been quoted in The Guardian talking about the benefits of walking (paid content), in Newsweek talking about white lies, Marie Claire talking about the meaning of money, the Daily Mirror talking about the benefits of sleeping with your dog, in The Sun talking about sexual positions, Pop Sugar talking about astrology, and Mail Online talking about how often to change your pillow.
Despite her ability to be all-knowing and wise, Santini apparently couldn’t receive phone calls, a relatively easy giveaway that the “person” on the other end is AI. Waugh also found other examples of journalists who were getting taken for a ride by an AI source, including one case where the non-human pitched a sob story about breast cancer survival:
“Seeing my scarred chest in the mirror was a constant reminder of what I had lost,” Kimberly Shaw, 30, told me in an emotional email.
She had contacted me through Help a Reporter Out, a service used by journalists to find sources. I cover skincare and had been using the site to find people for a story about concealing acne scars with tattoos.
<SNIP>
Shaw’s experience may not have been relevant to my acne story, but it tapped into the same feelings of empowerment and control I wanted to explore. Thinking she could inspire a powerful new piece, I emailed her back.
But after days of back-and-forth conversations, something in Shaw’s emails began to feel a little off. After idly wondering to my boyfriend whether she could be a fake, he suggested that I run the emails through a text checker for artificial intelligence.
The result was unequivocal: Shaw’s emails had been machine-generated. I’d been interviewing an AI the entire time.
As a result of Waugh’s story, a number of these information clearinghouses have tried to cull their ranks of AI “experts” while the deceived publications have retooled or removed the stories with fake people in them. Although the founder of one of these “expert mills” blamed much of the situation on “lazy journalists,” he kind of gave up the game a bit when it came to explaining why these platforms don’t prevent the frauds from gaining access in the first place:
Darryl Willcox, who founded ResponseSource in 1997 and sold it in 2018, says that the simplicity and speed of platforms like ResponseSource is key to their appeal and that attempts to add authentication risk slowing down the system.
Willcox said: “The other factor which complicates things a little bit is that these platforms are quite an open system. Once a journalist makes a request they can be forwarded around organisations, and sometimes between them, and often PR agencies are acting for multiple parties, and they will be forwarded onto their many clients.”
In other words, “If we slowed down to make sure things were accurate, we wouldn’t be as appealing as we want to be.” Eeesh.
So what can you do to avoid quoting a fake person? The overarching theme is basically, “Don’t be a lazy journalist,” but here are a few more specific tips:
TRUST, BUT VERIFY: The old Russian proverb really comes into play here, and for good reason. I often say that paranoia is my best friend and has kept me out of a ton of problems. To that larger point, not only did I click on every link I could find in Waugh’s story, I also Googled the hell out of Waugh himself. Why? I imagined that it would be the most epic “Punk’d” moment on Earth if the media world was flocking to this story about AI screwing with journalists, only to find out that Rob Waugh was also an AI fake. I found LinkedIn, X and Bluesky profiles, media staff pages and at least a dozen photos. I wouldn’t bet the house on the fact he’s real, but I’d probably bet the lawn tractor.
This can be harder in situations like the one involving the cancer scammer, as regular people tend not to have as big a social media presence or digital footprint. That said, even regular people under retirement age should have left a few breadcrumbs out there for you to find.
KICK THE TIRES: If you can’t find the person clearly through a digital search, feel free to play a little game of 20 Questions to see if you can get some things ironed out. Experts who have kicked the tires on a few bots can offer you specific ways to ask questions that will tend to ferret out fakers. The author in the cancer-scam story revealed that asking for specific photos based on prior conversations can be helpful as well.
I learned about this kind of thing in trying to defeat scams when it came to buying sports memorabilia. When unknown sellers offered exactly what I wanted when I couldn’t find it anywhere else, or quoted a ridiculously low price for something I knew should cost more, the pros who had been around the block a few times suggested I ask the seller to “coin the image.”
What this meant was that I wanted the person to take a picture of the item with a coin (usually asking for either heads or tails, or maybe even a specific coin) so I could tell they had the item and weren’t messing with me. Turned out, that advice helped me dodge a bullet or two. As weird as it might seem, asking someone to take a picture with their left hand raised or holding a quarter with “heads” showing might help you avoid a problem.
MEET IN PERSON: Again, this is the most obvious one to suggest. If you meet a person, in person, it’s a pretty safe bet that you can consider them real. The rest of the stuff (Are they the expert they claim to be? Did they really do what they say they did? Do they actually have cancer?) remains a risk without substantial additional reporting, but at least you’ll know they exist.
If that can’t happen for legitimate reasons (the person lives too far away, etc.), look for other ways to get some human connection with the source. That could be a Zoom/Teams/Whatever video chat or an actual phone call at an actual phone number. In the cases where the frauds proliferated, it was pretty clear that the only connection between the source and the journalist was through a keyboard. That’s especially dangerous when you don’t have a prior relationship with a source.
WHEN IN DOUBT, DO WITHOUT: At the end of the day, there is no journalistic rule that says you have to use a source, a quote or a “fact” just because you have it. If you don’t feel comfortable with how a source is providing you with information or you aren’t 100% sure this person is a person, it’s better to leave that source out of your story than it is to run the risk of getting bamboozled.
If you say, “Well, the whole story will fall apart without this one source and I can’t get anyone else to provide me with this information,” maybe that’s more revealing than anything else we’ve said here.