Public Media for Central Pennsylvania

The Road More Traveled: How Misinformation Spreads

News Over Noise episode 403 title graphic

Misinformation now moves at the speed of algorithms, and with generative AI, it is getting harder to tell what is real and what is manufactured. In this episode of News Over Noise, hosts Matt Jordan and Cory Barker talk with Sofia Rubinson, analyst at NewsGuard and senior editor of Reality Check, about how false claims spread, why AI is accelerating their reach, and what that means for public trust. From viral images and foreign disinformation campaigns to health hoaxes and AI-generated content, Rubinson breaks down how false stories move from fringe platforms into the mainstream and how NewsGuard tracks, debunks, and analyzes those narratives in real time.

About the Guest:

Sofia Rubinson is an analyst at NewsGuard and the senior editor of Reality Check, NewsGuard’s daily newsletter about how false claims spread — and who’s behind them. She investigates emerging false narratives spreading across social platforms and tracks the growing use of AI systems to manufacture and scale misinformation.

Episode Transcript:

CORY BARKER: It started with a photo of a tray piled high with lobster tails, steak, and mashed potatoes. The caption claimed it came from a shelter in New York where, quote, illegal immigrants were dining like kings while taxpayers footed the bill. Within hours, it was everywhere: millions of views on X and TikTok, headlines on partisan blogs, and outrage across cable news. Here's the rub: the photo wasn't from a migrant shelter. It was from a restaurant in Las Vegas. But by the time that detail surfaced, the story had already done its job, stoking anger, feeding algorithms, and deepening the idea that, quote, someone else is getting what you deserve. This example illustrates the path of misinformation. A story starts as a post, mutates through memes, crosses platforms, and hardens into belief. It's emotional, shareable, and almost impossible to contain. And as if that weren't bad enough, the rise of generative AI is making the cycle faster and more convincing than ever. Synthetic images and fake news sites are flooding timelines at a scale we've never seen before. And the result is an information environment where lies often travel further and faster than facts.

MATT JORDAN: To help us understand how this works and what can be done about it, we're here with Sofia Rubinson, an analyst at NewsGuard and senior editor of Reality Check, NewsGuard’s daily newsletter that tracks how false claims spread and who's behind them. Sofia and her team investigate the narratives shaping our information ecosystem, from Kremlin-backed propaganda to health hoaxes to AI-generated disinformation campaigns. We'll talk about what she's seeing right now, how misinformation moves, and what it's doing to public trust. Sofia, welcome to News Over Noise.

SOFIA RUBINSON: Great to be here.

MATT JORDAN: So, tell us a little about NewsGuard and why this organization started to publish the Reality Check newsletter.

SOFIA RUBINSON: Of course. So NewsGuard has been around since 2018. We call ourselves the global leader in information reliability. We do many different things at our company. One of our biggest products is that we produce reliability ratings for all of the top news and information websites on the web right now. We have criteria that are apolitical, based on journalistic practices, and we apply those standards equally to all of the news and information websites that we rate. But what I work on, and what you mentioned, is called Reality Check. It's our public-facing arm, our newsletter on Substack that started about a year ago, and we publish many different types of content there, mainly focused on false claims that are spreading online and how our readers can protect themselves from falling victim to them. Some of the more standard pieces we do are straight debunks of false claims that are spreading online, claims that you're likely to come across in your own social media feed. Obviously, we're in an information system where there are corners of the internet that are very prone to false information, but even the standard social media user who's not really politically leaning, who's just trying to use social media to connect with friends, is still going to come across very viral and sometimes very harmful false information. So those are the types of claims that we like to debunk in our newsletter. We also detail how those claims originated and categorize the spread: what types of accounts tend to be spreading the myth, how they're doing that, and what the medium is. We do different types of audits as well, especially of AI chatbots. For example, we just put out a report about OpenAI's new Sora 2 model, where we tested that system's ability to produce false claims on topics in the news.
So, we do many different types of things, but all of it is centered around really understanding the information ecosystem that we're in right now.

MATT JORDAN: During the Biden administration, the threat of foreign disinformation was acknowledged, and they put resources into countering foreign information manipulation and interference at the State Department, recognizing that it was a really easy way to manipulate the media and spread propaganda. That center was successful. They had launched a number of initiatives, and they had about two dozen countries working with them to do this. Then things shifted, right? The Trump administration, pushed by Big Tech, basically ignored the law and defunded that, and framed it as a threat to free speech. How has this shift in priorities, and the chill on examining disinformation, impacted your work at NewsGuard?

SOFIA RUBINSON: I'd like to think that it's had no impact on our mission and what we do. We continue to monitor foreign disinformation with the same tenacity. If anything, we've ramped up our efforts as threats from countries like Russia, Iran, and China have ramped up as well. We have experts covering those domains who produce detailed reports about the types of misinformation and disinformation campaigns that these entities are launching. We recently identified a new campaign by a pro-Russian network of websites, called the Pravda network, to try to infect AI chatbots with disinformation. These were mainly claims that were not covered anywhere else, so they were trying to fill the information void that is a vulnerability of AI chatbots. And we found that, in the preceding months, these chatbots had an increasing likelihood of citing the Pravda network, which has been linked to Russian influence operations. So obviously there's a lot of political discourse going on, and it has affected a lot of organizations that have worked with the government in the past. But being a private company has kind of insulated us from some of those attacks, and we're still able to do our reporting independently.

MATT JORDAN: But you have been attacked by the head of the FCC, though, right?

SOFIA RUBINSON: That's true. Yes. There's been a lot of critique of NewsGuard, definitely from conservatives, but also from liberals. We get it from all sides. A lot of those critiques have focused on our ratings of websites. Like I mentioned before, we put out very detailed reports rating news and information websites online against apolitical, journalistic criteria. We have some standards about how sites disclose their perspective and their opinion, but there's no penalty for having a perspective. Many conservative sites actually receive a perfect score from NewsGuard. Many liberal sites also receive a perfect score from NewsGuard. And there are many liberal and conservative sites that do not receive perfect scores. So, we go through many checks to make sure that there's no bias in our reporting.

CORY BARKER: I'm curious, what sort of things have evolved since you've been with the company about its approach to false claims and disinformation?

SOFIA RUBINSON: I think the biggest shift is the focus on AI. We're obviously seeing foreign actors taking advantage of AI, both in their ability to produce and manufacture falsehoods at scale and in their efforts to try to infect Western chatbots with disinformation campaigns. But we're also just seeing a lot of claims spread that rely on AI-created images and videos, which is something we really didn't focus on much two years ago. A lot of the claims back then were mainly misunderstandings of political arguments, or documents that were photoshopped or purposely misrepresented. But now, with the rise of AI, that's definitely increased the number of false claims that we're identifying. And, as I mentioned, we have an interest in protecting AI chatbots from falling victim to believing false claims. So, that's ramped up our efforts around making sure that we're able to debunk claims at scale.

CORY BARKER: Would you say for you all that AI is the most pressing issue as a focus on the dissemination of false claims?

SOFIA RUBINSON: I'd say so. Obviously, there are many different threats in the information space, but AI is becoming more believable and is able to produce, in just a few minutes, content that can be used to spread these false claims very widely. Even in the last two years, and obviously AI is not brand new, we saw it back then as well, there used to be a lot of noticeable irregularities that people could be informed about, whether that be an extra finger, speech irregularities, or motions on the screen that don't make sense. Those are going away. So, we're seeing a lot of confusion online. This seems to be an area where people want clarity, but it's hard because the content really looks so believable. So, this has been a major focus that we've shifted towards.

MATT JORDAN: Who are the most active disinformation spreaders, either in terms of foreign disinformation campaigns or even entrepreneurs who are using this as a way to kind of achieve a kind of a passive income?

SOFIA RUBINSON: Great question. We track a lot of Russian disinformation campaigns. One is called Storm-1516. It's run primarily by a man named John Mark Dougan, who NewsGuard was one of the first organizations to identify by name. We also maintain communication with him, which lets us learn a little bit about how he manufactures these false claims and what his strategy is. Just a little background: he was a fugitive from the US, and he has now sought refuge in the Kremlin, where he's running a very sprawling network of websites that produce false claims and also use AI to disseminate them across social media. So, that's a very interesting area that we've been covering, and he's obviously one of the major spreaders of false claims that we've been tracking. But of course, there are a lot of anonymous websites and social media accounts that appear to be using AI to disseminate false claims for profit, as you mentioned. There are two different motives these campaigns tend to have: political, to influence the ideology of the viewer or the reader, and financial. We've identified what we call unreliable AI-generated websites, which are websites that can be run almost completely with AI. They sometimes produce thousands of articles a day that advance very topical claims that are usually false. What we've noticed a lot of them do is use Google Analytics to see what types of queries people are searching for, and then use those terms in their articles in order to generate engagement. And these websites are littered with advertisements. It's sometimes almost hard to read them because there are so many ads on the page, and sometimes they're from major brands that we've spotted on these sites.
Through Google Ads and programmatic advertising, which places their ads automatically on these websites without their knowledge, these big brands are actually able to, in a way, sponsor websites that have been the source of a lot of the false claims we're seeing.

CORY BARKER: If I can take us back to process for a second. I was just looking at a recent post in the newsletter about anti-Zohran Mamdani accounts falsely tying his electoral victory in New York City to ISIS. Can you give us a sense of, okay, these posts are going around online. How do you and your team capture those posts, think about how you're going to contextualize them, and explain to your audience which claims are false or taken out of context? Can you walk us through that a little bit more?

SOFIA RUBINSON: Of course. We have full-time staffers who monitor X, Facebook, Instagram, websites that we know are prone to spreading misinformation, all of the major channels that false claims tend to spread on. Take the claim that you just mentioned: it was spread through a fake news release purportedly put out by ISIS, supporting Zohran Mamdani and saying that they were going to launch an attack in New York City on Election Day. Once we identify that a claim like that has some spread, we have different tools that let us search by image or by key phrases to find other instances of it. And once it reaches our threshold for the number of views that would qualify it as a viral claim, we'll start to investigate whether or not it's authentic. In this case, that required reaching out to some experts who study the Islamic State, and they pointed us to some very obvious discrepancies between this statement, which was spreading mainly on X, and real statements from ISIS, both in terms of formatting and in terms of the language used, as well as the motive behind such statements: ISIS wouldn't typically release a statement prior to an attack. Usually, it's taking credit for an attack that already occurred. All of that is part of the reporting process, which in this case took a little bit longer because we had to reach out to experts and wait to hear back. In other cases it doesn't take as long. For example, if it's an AI-generated video, we have different software that we can run it through to detect that. We also have experts on our team who are trained to spot minor discrepancies, so those take less time to definitively debunk. But once we know that a claim is provably false, then we'll start to look into where it first spread.
Sometimes we're not able to determine that, but when we are, it's usually very helpful for understanding the story of the narrative and the motive behind it. In the Zohran Mamdani ISIS claim, we traced what appeared to be the first instance of it to 4chan, an extremist platform that is very prone to conspiracy theories and hoaxes. That's where we found the statement first spreading, and from there, it really took off.

CORY BARKER: If you’re just joining us, this is News Over Noise. I’m Cory Barker, here with Matt Jordan. We’re talking with Sofia Rubinson, an analyst at NewsGuard and senior editor of Reality Check, about how misinformation spreads and why it’s getting harder to spot. One of the things you mentioned there was a threshold for virality. What is that process like when you're deciding whether to move forward with coverage of an issue?

SOFIA RUBINSON: It's not a hard and fast rule that we have, and I would say it also depends on the claim's risk of harm. If there's a claim such as this one, that ISIS was planning to attack New York City, that obviously has a very high risk and very high impact. So, even if we didn't see too much spread of the claim, maybe it's not getting millions of views, maybe we're seeing a few posts with a few thousand views, that might still be a claim that we would choose to cover. And sometimes we see claims that are a little bit more, I don't want to say silly, but silly in a way. For example, there was a claim that spread after Election Day that Trump put out a Truth Social post using an expletive to refer to the American public because he was upset about the election. That is a relatively low-risk claim; however, it went very viral, getting millions of views across many different platforms. So, that was a claim that we also chose to cover. It really depends on the risk of harm, and if the harm is high, the views and the engagement can be a little bit lower for us to cover it.

MATT JORDAN: D you ever think about intent?

SOFIA RUBINSON: We think about the motive of false claims quite frequently. That's another factor that goes into risk, which is one of the assessments we make when we're deciding whether or not to enter a false claim into our database. Sometimes these claims have no political or financial motivation. Sometimes they're just people who are looking for answers. Other times it's just, I saw this study, it seems really interesting, let me post about it, and then they'll start a false claim without even realizing it. It's a complete misinterpretation of the data, or of a document that has confusing wording. In those cases, the claims aren't necessarily as high risk. What we consider higher risk is when there is some kind of either financial or political motivation behind the claims. With health claims, I would say it's not necessarily as common for there to be a motivation to harm people, even though oftentimes that is the outcome. These are often people who are just distrustful of institutions, who feel disenfranchised or left behind, and are clinging to whatever pieces of evidence they can. But when it comes to foreign state actors, we see a lot of times that not only will they try to advance a certain agenda, but sometimes they just want to confuse and create chaos. Sometimes we look at a false claim that's come out of Russia and we're a little bit confused, like, why would they advance this? This doesn't seem to advance Russia's place in the world, and it doesn't seem to undermine Western values. Why are they putting this out? But sometimes the answer is that they're just trying to create confusion and make us not know what to believe, so that when we see anything, even something that's accurate and provably true, we don't believe it.
So, there are a lot of different motivations, and we're always thinking about that when we're reporting.

MATT JORDAN: What's the most viral thing that you've seen in terms of reach? Say a Kremlin story goes viral, how many people will it reach?

SOFIA RUBINSON: Those can reach millions of people. There are tropes that are very common in all of these, especially Russian disinformation. For example, one of the tropes that we see time and time again is that Ukrainian President Volodymyr Zelensky is corrupt and using Western aid for his own personal gain. That narrative as a whole has attracted tens of millions of views over time. But we also track the specific claims within it. For example, there'll be a claim that he purchased a villa in Italy for $8 million. That specific claim may receive fewer views, maybe a few hundred thousand, but that's still a very significant number.

MATT JORDAN: You mentioned tropes, and one of the things that's always fascinated me as a media historian who has studied foreign disinformation and domestic propaganda is sticky narratives, narratives that persist across time, like The Protocols of the Elders of Zion, right, that original narrative about the Jewish puppet masters pulling the strings. How have you seen those types of things pop up again and again?

SOFIA RUBINSON: Definitely. Like you're mentioning, specifically with Israel and with Jewish people, that's a trope that we see time and time again. Sometimes there'll be specific false claims, or claims that are completely baseless, I should say. For example, after Charlie Kirk was murdered, there was a big online narrative that Mossad was behind the assassination. That's something we see whenever there's a major tragic event like that. Even with some school shootings, people will say Mossad’s behind it, with no evidence, not citing really anything other than just making the claim. So, those we see time and time again. I would also say another common conspiracy theory that we're seeing a resurgence of is 9/11 claims. A lot of those are also completely baseless or outright false, but they seem to be making their rounds, and they have a very firm base of believers who continue to spread those claims. So, these tropes just keep reappearing, and with AI being very accessible as well, we see it being used either to make fake evidence to support these narratives or just to spread the claims wider.

MATT JORDAN: So, for a long time, when people were trying to figure out what's true and what's false, they would go to U.S. government sources, sources like the Surgeon General or the Attorney General, and those were often fairly reliable sources of information. Several of your stories over the last year have pointed to a shift in this. What has the Reality Check team found in relation to that, and how do you think this is a challenge to journalists at large?

SOFIA RUBINSON: It's a big challenge. Like you mentioned, we used to use government sources as sometimes the only debunking material that we would rely on. That shift really made us rethink that approach. We acknowledge that maybe that wasn't the best way to go about it, and now we have a higher standard, especially for health claims. We make a very diligent effort to not rely on any one source, but to talk to experts, people in the medical field, and the authors of studies. That's one of the biggest areas of misinterpretation when it comes to health information online: people will say, oh, this study proves that X causes Y, and then when we talk to the study author, they'll tell us, that's not what my study says.

MATT JORDAN: What was the most viral health misinformation story that you found this year?

SOFIA RUBINSON: In general, I think claims that vaccines are dangerous and can cause cancer or other ailments have been another trope that we see. There have been a lot of different ways that people make this claim. Sometimes it's very general, just saying that the measles vaccine causes cancer or the COVID vaccine causes cancer, but we also see a lot of people try to pin the claim on evidence and make their point sound stronger by saying, “Oh, this study proves that 1 in 10 people who've taken the COVID vaccine have developed this type of cancer.” When we talk to the study authors, oftentimes we'll learn that that's not true, or when we investigate the background of the study, we'll find that people are citing studies that are not peer reviewed or that are put out by advocacy groups. And it's very easy to be confused. We often don't think that these types of claims are put out maliciously. Oftentimes, these are people who are struggling with a health crisis themselves, or whose family members are, and they want answers. They want to be able to say, this is what's causing this illness, and they'll pin it on these pieces of evidence that they're seeing online. But oftentimes the factual basis for those claims is just not there. So, vaccine misinformation, I would say, is the biggest area where we see false health claims being made.

CORY BARKER: To build on Matt's question a moment ago about the declining trust in institutions like government agencies, and your reliance on, maybe, individual experts, accumulating those folks as sources rather than relying on government agencies: do you have conversations about the trust in institutions and the trust in experts that your audience may have when thinking about who to rely on as a source? Obviously there is a lot of distrust in government agencies, but there's also, generally speaking, distrust in expertise as well. So you're in a difficult situation when thinking about who to rely on to confirm, verify, and distribute information as you're trying to verify or debunk these claims.

SOFIA RUBINSON: It's become increasingly difficult to know whether a source is going to be trustworthy, especially when it comes to health claims. Even the best doctors in the medical field can have differing opinions on the efficacy of a drug or a vaccine. So, it is a confusing space when you're trying to make sure that everything we call a debunk is provably false. I will say that our biggest standard is to never rely on any one source. Especially with health claims, we always have to have a minimum of usually three different ways of showing that a claim is false, and we detail those in our reports as well. Overall, I think it's a negative aspect of society that we're so distrustful of people and institutions that have developed expertise, and that people are instead turning to influencers who may not have any formal education or expertise in the topic. But it has made us at NewsGuard really think about the types of sources we rely on and make sure that everything we publish is as accurate as it can possibly be. So, in a way, we're able to do a better job because of this distrust.

MATT JORDAN: So, it's been a couple of minutes since we've mentioned AI, and in this media environment that's dangerous. I know you've done a lot of ratings of these various AI tools, and some of them seem better than others, and some of them are just trash, right, in terms of repeating misinformation claims. A lot of people are turning to these tools now. What would you suggest they do as they turn to them?

SOFIA RUBINSON: NewsGuard audits the ten leading AI chatbots quarterly. In our last audit, which was in August 2025, we found that 35% of the time, when asked non-leading questions about false claims, for example, did Coca-Cola terminate its sponsorship of the Super Bowl, these chatbots across the board either repeated the false claim or failed to debunk it. Obviously there are a lot of benefits that come with these chatbots when it comes to making a travel itinerary or looking for a recipe. But when it comes to topics in the news, we have found that they are not reliable sources of information. They are prone not only to hallucinations, but, as we mentioned before, there are foreign efforts to try to infect these chatbots, to get them to produce false claims and to advance the narratives of foreign governments. So, at NewsGuard we really try to warn our readers that using these chatbots is great, but you shouldn't rely on them as your only source of information when it comes to claims about topics in the news. We'll often see on X, for example, a post making a very outrageous claim, and then a lot of users underneath that post will say, “@Grok, is this true?” That's like a new trend. And whatever X's AI responds with, which is an automatic response when you put that prompt in, people will take as the truth. Sometimes it's great; sometimes it does provide accurate information that debunks the false claim. But other times it will either give you an answer that doesn't really make sense or doesn't really answer the question, or it'll say something that's actually provably false. So, we really caution against using that as your only source of information.

CORY BARKER: So, we like to give our listeners some practical suggestions. Obviously the newsletter content is often, but not always, focused on debunking or contextualizing false stories after they've already circulated. But what do you think our audience can best do to recognize false information, its origins, and its potential intent, as we talked about earlier, as they're encountering these things in real time?

SOFIA RUBINSON: There are a lot of different tools you can arm yourself with in order to avoid, or at least limit, your chances of falling for these falsehoods. Of course, it's impossible to always be immune from falling victim to them, but the biggest and most general tip, which may not sound all that crazy, is just to really think about plausibility. Oftentimes we'll have a very immediate emotional reaction when we see a claim being made, especially if it goes against what we believe or if it confirms what we believe, and an almost immediate reaction for a lot of people will be to hit that share button without really taking a step back. If it sounds too good to be true, oftentimes it is. In those cases where you take a second and pause before you repost, one of the best initial checks is to look at the account or the website that's spreading the false claim, or spreading the claim, I should say, since you don't know yet whether it's false. For example, we see a lot of anonymous accounts on social media that are almost designed purely for engagement. They'll post very outrageous claims and get millions of views. But it's very easy to spot that if you go to the actual account itself. Look at the bio. Is there a name attached? Is there a profile picture? Those are very practical checks you can do. They won't give you a definitive answer; sometimes there'll be a real person attached who puts their name out there, and it's still false, but that's a great first step. If you're on a website, look at the about page. Is this a source that you've ever heard of before? Do they list their owner? That's one of our nine standards of journalistic credibility that we rate sites on: do they say who is behind the site? Is there contact information?
Is there an editor listed? If those things are not there, this might not be the best source of information. And then we really encourage people to do further research, which can be very simple. Again, it could be turning to ChatGPT and saying, I came across this claim, is it true? But then ask ChatGPT to cite its sources, and do the same checks on those sources. That's, I think, the most practical tip that people can take into their social media habits.

MATT JORDAN: Sofia, this has been really enlightening and entertaining and helpful, and I want to thank you for joining us here. Keep up the good work.

SOFIA RUBINSON: Thank you so much for having me.

MATT JORDAN: That’s it for this episode of News Over Noise. Our guest was Sofia Rubinson, analyst at NewsGuard and senior editor of Reality Check. To learn more, visit news-over-noise.org. I’m Matt Jordan.

CORY BARKER: And I'm Cory Barker.

MATT JORDAN: Until next time, stay well and well informed. News Over Noise is produced by the Penn State Donald Bellisario College of Communications and WPSU. This program has been funded by the office of the Executive Vice President and Provost of Penn State and is part of the Penn State News Literacy Initiative.

[END OF TRANSCRIPT]

Episode Credits:

Producer: Lindsey Whissel Fenton

Audio Engineers: Mickey Klein, Scott Gros, Clint Yoder

News Over Noise is a co-production of WPSU and Penn State’s Bellisario College of Communications. This program has been funded by the office of the Executive Vice President and Provost at Penn State and is part of the Penn State News Literacy Initiative.

Lindsey Whissel Fenton, MEd, CT, is an Emmy Award-winning filmmaker, international speaker, and grief educator.
Matt Jordan is head of the Department of Film Production and Media Studies in the Donald P. Bellisario College of Communications at Penn State University, and Director of the News Literacy Initiative.
Cory Barker, PhD, is an assistant teaching professor in the Film Production & Media Studies department and co-host of News Over Noise.