Public Media for Central Pennsylvania
The Ultimate Noise: AI and News Pollution

[Image: News Over Noise episode 210 title graphic]

AI has been in the news a lot lately. But what happens when AI starts making the news? Generative AI, the type of artificial intelligence that pulls from existing data to create new content, presents a significant challenge to journalism. It can enable misinformation to spread like wildfire. How can the average consumer tell what's real and what's not? On this episode of News Over Noise, hosts Leah Dajches and Matt Jordan find out by talking with Jack Brewster from NewsGuard, an organization that provides transparent tools to counter misinformation.

About the Guest:

Jack Brewster is Enterprise Editor for NewsGuard. Prior to working at NewsGuard, Brewster was a Fulbright scholar in Munich, Germany, conducting a research project about the role of journalists in the digital age. Previously, Brewster was a reporter at Forbes Magazine, covering politics, misinformation, and extremism. Brewster also has written about politics for Time Magazine, Newsweek, Vice News, and the New York Daily News.

Episode Transcript:

Leah Dajches: On November 6, 2023, an article appeared on a website called Global Village Space. It claimed that, quote, "A renowned Israeli psychiatrist who was celebrated for his work in curing severe mental illnesses was discovered dead in his Tel Aviv home." The article also stated that the psychiatrist left behind a, quote, "devastating suicide note that implicated Netanyahu." This claim quickly spread on several platforms in numerous languages, and it was amplified by thousands of social media users. It was investigated by NewsGuard, an organization that provides tools to counter misinformation for readers, brands, and democracies. NewsGuard found that the article was generated using AI. Despite the fact that the psychiatrist appears to have been fictitious, the claim was featured on an Iranian TV show and it was recirculated on media sites in Arabic, English, and Indonesian, and spread by users on TikTok, Reddit, and Instagram. This is not an isolated incident. To date, NewsGuard has identified 750 unreliable AI-generated news and information websites spanning 15 languages.
This is what's known as generative AI. This is a type of artificial intelligence that pulls from existing data to create new content. And it presents a significant challenge to journalism. How can the average consumer tell what's real and what's not?

Matt Jordan: We're going to find out by talking with Jack Brewster, Enterprise Editor for NewsGuard, an organization that provides transparent tools to counter misinformation. Prior to working at NewsGuard, Jack was a Fulbright scholar in Munich, Germany conducting a research project about the role of journalism in the digital age. He was a reporter for Forbes magazine, covering politics, misinformation, and extremism. Jack also has written about politics for Time magazine, Newsweek, Vice News, and the New York Daily News. Jack, welcome to News Over Noise.

Jack Brewster: Thank you for having me. I'm glad to be here to talk about a very important topic.

Leah Dajches: Yeah. We were excited to have you on. Actually, I was looking at the NewsGuard website earlier, and there seems to be so many various tools and data points, research, articles. There's a lot available. Could we maybe just start by, what is NewsGuard, and how does it work?

Jack Brewster: So NewsGuard is a misinformation watchdog that uses human journalists to rate the reliability of websites, podcasts, and TV shows. And in addition to that, we also track false narratives and how they're spreading across various platforms and sites. And we put out periodic special reports as well as some newsletters like a consumer-focused newsletter called Reality Check, which covers the world of misinformation.

Matt Jordan: You used to be on a different beat. What does Reality Check do as a newsletter? What are you-- it's consumer-forward. Is it like a consumer protection idea behind it, or what's the--

Jack Brewster: No, no. It's meant for anyone and everyone who cares about our media ecosystem and the current threats that are facing it. So, this is not a fact checking newsletter. It really covers the way that misinformation spreads and how it's impacting the pipelines that we use to get our information. So, it's for anyone and everyone who cares about our democracy, who cares about media, who cares about the way that technology is changing the way that we get our news.

Leah Dajches: Matt, I actually think you sent me the February version of Reality Check, the newsletter, because there was a prominent focus on misinformation about Taylor Swift. And so, as a media researcher and a Swiftie, I was excited to see this. We know a lot of young people are looking to influencers and celebrities for their news information, you know, among other things. And so, with this in mind, I was wondering, how does NewsGuard conceptualize or define news? Or is it kind of a broad, wide cast net in thinking about our media landscape and what is news?

Jack Brewster: I would say-- I would go back to saying, we are unique in the sense that we cover the way that misinformation is spreading. Now, that sounds like a very broad topic, but I don't really know of any other news outlet that actually only covers that. There are a lot of fact checking news outlets out there, and we do a ton of fact checking. I don't want to take away from that. But we not only debunk false narratives, but we track the way that they're spreading online. We have a database of over 8,000 websites that we have all written these long research memos on. We call them nutrition labels because we give them scores. And because of that, we have this massive database that we can draw upon when we're writing about the latest Russian disinformation narrative or a false narrative about Taylor Swift.
We can see very easily, you know, who are these people that are spreading this false claim? And I think that's a very important thing for consumers to know. I mean, a basic function of the internet should be that people have information about the sources where they're getting their news. And on social media, they absolutely do not. And so, we're dedicated to trying to provide more information. That's our founding mission.

Matt Jordan: How have you found other news organizations responding to your work? So, you're putting out things like trust indicators and recommending their use in search engines. Have other news organizations been embracing your work or keeping it at arm's length? How would you say that works?

Jack Brewster: Yeah. I mean, for anyone who cares about having more information about where they're getting their news or about our media ecosystem in general, I think the reception has been positive. I laugh sometimes when people say that we're censors because if anything, we're adding to speech, not taking away. We don't tell people what to do with our ratings. We put it out there and we say, look, we're the only news outlet out there right now that is providing this information. So, I think for anyone who recognizes that simple but for some reason hard-to-understand fact, the reception's very positive. Even some news organizations that got low scores in the beginning-- when we started, what is it? Six years ago-- have worked with us to get higher scores. We are not a journalism review outlet. We want news organizations to get higher scores because our rating system is not meant to be overly strict and punitive. We're trying to provide just the basic functions of what a news outlet should offer in terms of information and say to consumers, look. Does this news outlet provide that? Do they tell you who the owner is? Do they say who their content creators are? Do they frequently publish false claims like the 2020 election was stolen and the COVID vaccine is a bioweapon? That sounds like a really low bar, and that's because it is. But you'd be surprised at how many outlets don't tell people about their ownership or don't list content creators or spread false claims about the election.

Matt Jordan: How about search engines? Have search engines-- I know that you have worked some with Bing, the search engine Bing. But how has Google responded to your enticement of having trust indicators in terms of the things that pop up on somebody's search?

Jack Brewster: So, I'm going to have to deflect that question a little bit just because I'm not on the business side. I know right now we don't have-- we're not on Google. I would love for us to be. I will say that. I want our ratings to be everywhere. I want it to be on TikTok, Instagram, Facebook, every social media platform or search engine. I mean, I think it's truly a no-brainer. The fact that an everyday consumer has to fight and claw to find out information about a news website, like, it's very-- it's possible right now for someone to go on Facebook and they could be fed a Russian-sponsored disinformation site and they would have no idea, on an American social media platform. And that is just crazy to me. It doesn't make any sense. So, I would love for our ratings to be everywhere. It makes sense to be on search engines and every social media platform. But right now, we're not on Google, no.

Leah Dajches: So, from my understanding, it seems like NewsGuard is really a useful kind of tool or outlet to help users and media consumers check false narratives, misinformation, disinformation after a news story has been released. But is there a component or something within NewsGuard that can help journalists as they're creating the story, kind of at the front end of production?

Jack Brewster: So, I'll actually push back on that. I think we're more, if anything, what we would call a pre-bunking outlet. And everyone knows-- your listeners will know what debunking is, which is what you just said, countering a false narrative after it's been published. But we engage in pre-bunking in the sense that we have this database of scores and information about news outlets. So, we're able to almost give people information so they are prepared to debunk things on their own when they come up.
So, if I know-- for instance, if I know-- if I'm on Twitter and-- or X, sorry-- and I know that DC Weekly is a Russian disinformation source, my antennas are going to be up when they say that the US has biolabs in Ukraine. I'm just making this up. So, we, I believe, engage in pre-bunking. We help consumers and journalists before they encounter misinformation by giving information about the sources. If there's detailed information about it and you know that a source has published false claims previously, hopefully that news consumer, that journalist, or whatever is going to be more conscious about things that news outlet or social media account will produce in the future.

Matt Jordan: So, one of the things you-- and this might-- the pre-bunking idea, is this Misinformation Fingerprint catalog that you have. So, what are some of the false narratives out there that are, let's say, your top three right now that are starting to surface more and more, especially in, say, relation to Ukraine or the US elections?

Jack Brewster: Ooh, off the top of my head-- I mean, recently there's been a surge in AI-related false narratives surrounding the war and elections. The big ones recently have been around that. Fake images, doctored images about the war purporting to show travesty in Gaza even, which is kind of strange because it's already an absolute travesty without any doctored images. I mean, the images coming out of there-- the real images are just totally gruesome. But still, we're still seeing a lot of those.
There's a lot of COVID vaccine-related myths still popping up, that it's changing your DNA, that it is somehow, you know, tainting your testosterone, those kinds of false claims. That's, again, still a huge dominant theme. And we're starting to see some 2020 election voter fraud claims come back as well with the elections coming up. But those are the major narratives. With Ukraine, I'm always shocked by all of the things that they throw at Zelenskyy. I mean, it's like the kitchen sink at this point. I mean, they have said that he's gay, that he is a Nazi, that he has villas all across the world, that he buys yachts, that-- that is, everything under the sun that you can think of, they have thrown at him. And by they, I mean the Russians, but also other media organizations, the state-run media organizations. So, I'm continually shocked at the Zelenskyy false narratives, and we have just continued to see those just two years on from the war.

Matt Jordan: So, you're describing these narratives and maybe their origins. So how does AI impact this now? How is artificial intelligence impacting the spread or amplification of these types of false narratives?

Jack Brewster: Yeah. So, I'd like to say that AI has democratized the troll farm. And what I mean by that is that just as the internet democratized journalism and access to information, AI has given the person in his or her basement the capacity to have hundreds, if not thousands of journalists, videographers, audio technicians at the push of a button. And because of that, AI has thus democratized the troll farm. Anyone and everyone can start a troll farm with the push of a button. So, think about 2016 when the IRA in Russia, the Internet Research Agency, pumped out content on Facebook to try and sway the 2016 election. They had human journalists doing that. I forget off the top of my head how many they hired, but it was a good number. That staff could produce the same type of content, same quality, if not better, at 10 times, 100 times the scale if they used AI correctly. They could set up a script that combed the internet for certain keywords, ran it through a language model, and pumped out hundreds, if not thousands of articles a day without any human oversight whatsoever. And that is a scary potential, and that's what AI has done across the board. So that's one thing. The second point I like to talk about is that the flip side of all of this, of AI-generated images and video and audio, is that consumers, likely for the better, are more skeptical about the things that they see online. But that means that now we're seeing false narratives emerge about real images where people can say-- people are saying, that is AI-generated when it's actually not. And that is another very dangerous part of the effects that AI has had on our media ecosystem. I'll give an example. At the beginning of the Israel-Hamas War, there was an image that Benjamin Netanyahu and Ben Shapiro, The Daily Wire host, circulated of a dead baby corpse. And somebody online swapped out the baby corpse for a dog and said, look. 
Israel is doctoring images using AI, and they're trying to pass off this dog as a baby. And this got millions of interactions across social media. So, there were tons of people out there that likely believed that Benjamin Netanyahu and this American right-wing media host had tried to pass off a photo of a dead baby corpse as-- you know, by using AI. And so that shows you the effect that AI has had on our media ecosystem. It's so polluted already that we have people out there who are claiming that real photos are fake.

Leah Dajches: If you're just joining us, this is News Over Noise. I'm Leah Dajches.

Matt Jordan: And I'm Matt Jordan.

Leah Dajches: We're talking with Jack Brewster, Enterprise Editor for NewsGuard about the challenges generative AI presents to journalism. You know, I'm glad we're talking about AI because I was looking kind of at NewsGuard and thinking about how these various tools work. Jack, correct me if I'm wrong. Don't y'all use AI algorithms to help assist with the analysts? So, kind of using AI for good to help detect AI?

Jack Brewster: Yeah. I mean, look, AI-- it's a broad topic, but it can absolutely be used for good, and I welcome it across the board. I think ChatGPT is an unbelievable technology. I think other AI technologies in health and other areas will do tremendous good for the world. NewsGuard is not anti-AI. We're anti-AI if there's no transparency. We're anti-AI if it's being weaponized by bad actors to spread misinformation at large. But we're not anti-AI. And I do think that AI can be used to help counter misinformation online in various ways. There's other things to talk about when it comes to journalism ethics about producing content using AI. That's a different conversation, and we're not engaged in that currently. But we're absolutely not anti-artificial intelligence.

Matt Jordan: So, some of the things you're describing in terms of how misinformation spreads and how it starts to create this doubt and skepticism or cynicism about the reliability of anything, which is kind of part of the Russian cyber-attack, really is this firehose of falsehood where nobody knows what's true anymore, and that's the point. But you know, NewsGuard's done a lot of reports on this, one indicating that ChatGPT 3.5 generated misinformation 80% of the time and that 4.0 did about 100% of the time. So, we know the tools are there, and they can be used for the bad. But what is it about the incentives of the media system that make it so powerful? And I'm thinking in particular about unreliable artificial intelligence-generated news as a kind of category. How does that just in a way reflect the incentives of the news ecosystem?

Jack Brewster: The news ecosystem, to answer your question succinctly, would be, follow the money. And where is the money coming from? It's often programmatic advertising on social media and through search engines. So, you mentioned AI content farms. So why are AI content farms popping up? There are two reasons why someone would start such a thing. One, to try and sway public opinion, promote a cause. That's what Russia does when they pump English language news articles into American news feeds. The other is to make money. And if I can set up an AI content farm to run a script to generate hundreds of-- or thousands of articles a day and I get enough clicks on them, that's a passive income investment. That's the same as, you know, throwing money in the stock market. You're getting cash without doing anything. So, I think that that's a main driver behind a lot of these sites. Now, your listeners are going to be like, OK. Then, well, if it's not trying to convince me of misinformation, why does NewsGuard care, and why should I care? And the answer is we all should care about our newsfeeds and the places that we're getting our news from. If it's being polluted by AI crap, that's a problem, and we're all going to suffer that way. And you could easily imagine a world-- there have been reports that this is already happening where your Google search results, when you type in 2024 presidential election or a question about politics or the Israel-Hamas War or Russia-Ukraine War, you're being fed an article from some chatbot. I mean, do you want that? And if that's happening, then who is-- the revenue's going to that AI content farm and not to the journalist who actually had to do the reporting to write the article. That's a huge problem. So, the incentive often behind these AI content farms and AI-generated content farms on social media in general is typically to generate ad revenue. And they'll sprinkle in misinformation along the way because it sells.

Matt Jordan: So, what do you think the-- I know that-- you can defer this if you want because I know you're trying to get a relationship with Google. But Google, according to your studies, accounts for 90% of that ad revenue-- I think that's $2.6 billion in advertising revenue. And for our listeners, programmatic ads are just AI-placed ads that kind of pop up when somebody does a search-- often, the brands aren't even aware that their ad is popping up on an AI site, and that's part of the problem. But what do you think the relationship of Google should be to this? Because essentially, they're working as a shill or as a-- I don't-- an enabler of fraud. You know, what do you think the best way to get a handle on this is?

Jack Brewster: Yeah, not because of any business [inaudible] I don't care about that. But just in terms of journalism ethics, I'm not going to say what I think Google should do. I mean, what I will say is that report after report after report from us has found that Google is indirectly financing the proliferation of misinformation superspreading sites, full stop. So, you know, do with that what you will. The companies themselves that are advertising through Google often do not know that their ads are appearing on Russian disinformation sites or COVID misinformation sites. And Google is-- all these reports show that Google is not really doing that much to try and stop that. And you know, they often will come back and say, well, we have keyword blockers. And companies will say, well, we have keyword blockers, and we use various services to stop us from advertising on those sites. And the problem is that these algorithms that they're talking about often miss blatant misinformation because keywords are not really how misinformation spreads. And people are smart, you know? And misinformation superspreaders are smart. I mean, they're in the business of information warfare, so it's pretty easy to trip up an algorithm.

Leah Dajches: You know, perhaps-- I realize I should have asked this question earlier, but Jack, we've heard you use the phrase false narratives. And for our listeners out there, taking this opportunity to help them with their media literacy skills, can you walk us through how false narratives fit into these broader understandings of disinformation and misinformation?

Jack Brewster: Sure. So, I'll start with a false narrative. A false narrative is any myth or myth variation. So, a false narrative could be that the COVID vaccine is a bioweapon, or it could be that the 2020 election was stolen. It's often a big sort of umbrella term, and then there are variations that are beneath that. So, a false narrative can encompass several things under one umbrella. Misinformation is the same thing. It's a false claim. Disinformation is a little bit different. It's when often a state-run media source or a bad actor working on behalf of a government attempts to spread misinformation to promote a certain cause. So, think what I was describing earlier about Russia's meddling in the 2016 election. That's disinformation. It's different than misinformation, though it falls under the same umbrella. So those are the big three that you just mentioned. There's also-- misleading narratives is an important distinction, right? There's a difference between misleading and false, and that's a huge differentiator that we constantly argue and debate at NewsGuard, whether something is false or if it's just misleading. And that can be a really, really important distinction, right? So just today, we were talking about how some left-leaning outlets framed Trump's bloodbath comments when he at his speech talked about how the-- I forget what the direct quote was. Something like the economy is going to fall into a bloodbath. He was trying to say that, but some news outlets, especially left-leaning ones, took that as meaning that he was calling for a bloodbath or that he was predicting a bloodbath if Biden was elected-- was reelected. And we were discussing, is that false-- is it false to say that he called for a bloodbath? And we went back and forth and decided it was more misleading. So that is the argument that we go back and forth on constantly at NewsGuard, and that's an important distinction for your listeners to know.

Leah Dajches: For news consumers who are already overwhelmed by journalistic practices and the structures of the news, what can they do about this? Is there kind of a first step or something on NewsGuard that we can at least point people to as a helpful tool to help them really manage this AI world and the spread of misinformation, disinformation, and kind of all of that?

Jack Brewster: Well, first I would encourage them to download our browser extension. That means our ratings will pop up on search engines as they're searching. And our ratings will show up on the side of Google search results and on social media as you're browsing the internet. That's a great starting point. Other things that NewsGuard-- they should follow Reality Check, subscribe to Reality Check. It's free right now. It is on Substack and available to anyone and everyone. We publish three times a week. There's commentary on Wednesdays that delves into the topics that we're talking about right here. And on Mondays and Fridays, we have stories about the spread of misinformation online. And then follow us on social media, and you will hear media literacy tips and other behind-the-scenes looks on how we cover the spread of misinformation.

Matt Jordan: Well, Jack, I want to thank you for coming in and making-- kind of unpacking AI and making it fun and explicit for us. And we wish you the best of luck with the new app.

Jack Brewster: Thank you so much. It was a pleasure coming on here. Thank you.

Leah Dajches: I feel like I learned so much talking with Jack. There really was a lot to unpack in terms of thinking about AI, but then even taking a step back and really refreshing misinformation, disinformation. But what I think is really cool for our listeners is how useful NewsGuard can be as a tool in our news literacy, media literacy toolboxes, right? Matt, what were some of your takeaways?

Matt Jordan: It's interesting how AI intensifies and amplifies tendencies in the media ecosystem that are already there, exploiting weaknesses just at a scale and at a speed that we haven't seen before. But in a way, what we're seeing is a lot of the same ways that misinformation proliferates and profits on the internet.

Leah Dajches: That's it for this episode of News Over Noise. Our guest was Jack Brewster, Enterprise Editor for NewsGuard. To learn more and to hear an extended version of this interview with additional content, download the podcast wherever you subscribe to podcasts or at newsovernoise.org. I'm Leah Dajches.

Matt Jordan: And I'm Matt Jordan.

Leah Dajches: Until next time, stay well and well-informed.

Matt Jordan: News Over Noise is produced by the Penn State Donald P. Bellisario College of Communications and WPSU. This program has been funded by the Office of the Executive Vice President and Provost at Penn State and is part of the Penn State News Literacy Initiative.

[END OF TRANSCRIPT]

Episode Credits:

Producer: Lindsey Whissel Fenton

Audio Engineers: Mickey Klein, Scott Gros, Clint Yoder

News Over Noise is a co-production of WPSU and Penn State’s Bellisario College of Communications. This program has been funded by the office of the Executive Vice President and Provost at Penn State and is part of the Penn State News Literacy Initiative.

Lindsey Whissel Fenton, MEd, CT, is an Emmy Award-winning filmmaker, international speaker, and grief educator.
Matt Jordan is head of the Department of Film Production and Media Studies in the Donald P. Bellisario College of Communications at Penn State University, and Director of the News Literacy Initiative.
Leah Dajches, PhD, is a postdoctoral scholar at Pennsylvania State University working on the News Literacy Initiative.