Democracy Works: How AI is shaping the news

Pam Brunskill, Sean Marcus, Jenna Meleedy
Photos provided.
Pam Brunskill (News Literacy Project), Sean Marcus (Poynter Institute), Jenna Meleedy (National Association for Media Literacy Education)

For this episode, we're collaborating with our colleagues at News Over Noise, the podcast from Penn State's News Literacy Initiative. We recorded this interview on October 27, the first day of U.S. Media Literacy Week, an event that highlights the power of media literacy education and its essential role in education all across the country.

The conversation focuses on how AI complicates an already-complicated news literacy landscape. With so many news outlets and journalists striking out on their own as content creators, it can feel overwhelming to keep up with the news each day, even without the added complication of having to question whether what you're seeing is real or computer-generated "AI slop." Despite those challenges, our guests are confident that each of us has the skills and ability to separate factual information from AI slop and make good decisions about how we consume the news.

You'll hear from Democracy Works host Jenna Spinelle, News Over Noise host Cory Barker, and the following guests: Pam Brunskill of the News Literacy Project, Sean Marcus of the Poynter Institute and MediaWise, and Jenna Meleedy of the National Association for Media Literacy Education (NAMLE).

Episode Transcript
Jenna Spinelle
Hello and welcome to Democracy Works. I'm Jenna Spinelle. This week we are tackling the topics of AI and media literacy. As if keeping up with the news wasn't hard enough these days, now you add AI and AI slop into the mix, and it can feel absolutely overwhelming at times. But never fear, we have three experts, that's right, three guests this week to help make sense of what's going on and give us all some good, practical advice about how to navigate this world of AI when it comes to news and information. Joining us are Sean Marcus from the Poynter Institute and MediaWise, Pam Brunskill from the News Literacy Project, and Jenna Meleedy from the National Association for Media Literacy Education. So lots of expertise in this room, and we're thrilled to be doing this episode in collaboration with News Over Noise, the podcast of Penn State's News Literacy Initiative. In this interview, you'll hear questions from me and from News Over Noise host Cory Barker, and Cory will kick off the conversation. So let's get to it.

Cory Barker
Welcome, everybody. Thanks for joining us today. Thanks for having us. Thanks. Good to be here. So I want to start with a broad question for each of you: what kind of outreach or campaigns are your organizations doing to educate the public about AI, and in those campaigns, what particular groups are you targeting? Sean, let's start with you.

Sean Marcus
Yeah, actually today we launched the Alt Ignite series in collaboration with the McGovern Foundation. That's a series of AI literacy courses and lesson material geared toward educators, library workers, civic leaders, journalists, essentially anyone who's engaging with AI, and that is everyone. So we just put that out on our MediaWise site, and we've got our traditional MediaWise material, the Teen Fact-Checking Network.

Pam Brunskill
At the News Literacy Project, we have a number of resources on artificial intelligence. Our main audience is always going to be K through 12. Two of our biggest resources are our technology lessons: for elementary, it's called Search and Suggest Algorithms, and then for grades six through 12, we have Introduction to Algorithms, which goes over the concept of what algorithms are, how they underlie generative AI, and what that entails. We also have a number of TikToks and videos and posts on social media related to short-form teaching about AI. We have some posters, and we have a curated page on newslit.org dedicated to AI so people can find all our resources.

Jenna Meleedy
So NAMLE has a few resources on our website: parent-friendly and teacher-friendly guides to navigating conversations about AI in the classroom and in the home. In the past, we've hosted an AI summit in person, and then this fall, our Youth Advisory Council is leading a session about helping other youth navigate how to use AI at our youth summit in Nashville.

Jenna Spinelle
So I want to take a step back from talking about AI specifically for a minute and orient it within the broader framework of news literacy. We are recording this on the kickoff of U.S. Media Literacy Week, and I think the 2016 election was one of the things that brought news literacy to the forefront. It brought a lot of things to the forefront for a lot of people, but media literacy was certainly one of them. So I wonder if you could orient AI within the broader concept of news literacy, or media literacy.

Pam Brunskill
I can start. At the News Literacy Project, we have five core competencies, or five main standards, that we suggest individuals look at to get savvy with news literacy. The first three are related to differentiating news from other types of information. That's standard one: are you looking at news, raw information, propaganda, etc.? Standard two is about the importance of a free press to American democracy and the role of a free press. Standard three is identifying characteristics of credibility, so using standards of quality journalism to recognize when something is credible, or aspiring to credibility and ethics. Standards four and five are really where we get into AI, and that's important to recognize, because you're not going to have a solid understanding of AI if you don't have the foundation of recognizing what news is in relation to other information, the importance of it, and signs of credibility. Standard four is about verifying and analyzing information, recognizing when something is a piece of misinformation, and AI-generated content can be a form of misinformation. And then standard five is about civic participation. It's using the knowledge and skills you gain from standards one through four and applying them. What is your responsibility and role in the world in relation to AI? So let's say you recognize that this piece of content on your feed is AI-generated. What are you going to do about it? Are you going to spread it, or are you going to call it out? What is it you think you should be doing?

Sean Marcus
I would kind of throw in there, and it's interesting that you dropped the dates, because, you know, MediaWise formed through the Poynter Institute in 2018, right? Really fairly directly out of the 2016 election, ahead of the 2020 election. And it was because of that perceived, and really true, need for more media literacy, more media literacy education. Then we hit the 2024 election, and the prediction was that the AI election was coming, and it never completely materialized that way; we're still waiting for the AI election to appear. But what we really found as AI was coming out was that we started with some of these detection and verification skills that were unique to AI, and those quickly went away as AI improved. And then we immediately realized we fall back on, like you were saying, those five pillars that you've got, those traditional verification frameworks and those traditional detection methods for any type of misinformation; it runs true for AI. So I think a lot of it is using the frameworks we have for media literacy, in many senses, to put our audiences at ease. You know, in the face of this brand new crazy technology, it's like, yes, it is new and crazy, but it's still the same thing, we're still dealing with interpreting information. So we can apply those same ideas to this new technology.

Jenna Meleedy
Yeah, I completely agree. I think it's clear that future leaders are going to need to know how to act in a world that's been transformed by AI, however that might look. And so at NAMLE, we want everybody to have the media literacy skills that they need to thrive now and in the future. We define media literacy as the ability to access, analyze, evaluate, create, and act upon all forms of communication, definitely including AI.

Jenna Spinelle
Pam, I want to bring up something from the talk you gave earlier today on campus. There's lots of talk, certainly, about mistaking fake content that's AI-generated for something that's real. But you brought up something called the liar's dividend. Why don't you tell us what that is and how it plays in here?

Pam Brunskill
Yeah, the liar's dividend comes about with AI and our convoluted information ecosystem when people mistake real content for made-up content, and bad actors take advantage of that, right? Because for a bad actor to succeed, all they need is for you to not be able to trust anything. It's not necessarily that they need you to trust and believe what they are putting out, but as long as you don't believe the truth of what's really out there, they can manipulate you and get you to believe whatever they want, or to throw up your hands and say, I can't trust anything, so I don't know what to believe, you can go ahead and do whatever you want. That's really what the liar's dividend is.

And can you give us an example of that?

Well, the example I gave in today's talk was from images of war, I think from Russia and Ukraine, right? People are seeing real images and then discounting them: that's too awful, that's not real, people are just trying to make me think an atrocity is happening, so they'll play on my sympathies so I will support one group over another.

Sean Marcus
If I can jump in, I have to jump in, because the liar's dividend is, like, one of my favorite ones to teach, because it's so nefarious and sneaky, right? They just jump in there, like, oh no, no, that's AI. Most recently I saw it with the leaked chats of the Young Republicans club. The first official response was, well, we haven't seen these yet, so we suspect that they've been doctored, I think the phrasing was something to that nature, you know, there's a high chance that these could have been manipulated or doctored. So that sowing of confusion can now come in because it's a scapegoat, right, for bad actors, or for folks who can now find a new excuse for that terrible thing that they did. Like, oh, that wasn't real, that was AI. And just like you said, Pam, it's so easy now to believe that something could be fake, and equally easy to believe that it could be real or fake. So it gives a certain shield to our bad actors sometimes. It's very interesting.

Jenna Meleedy
And I'll say that once you have so much content to consume and you have to evaluate everything to that extent, you really fall back on what is comforting to you, meaning figureheads that you have always believed in, thought leaders or ideologies that are familiar to you, and that really plays into people's confirmation bias when consuming news.

Pam Brunskill
I was just gonna say, I'm glad you got to it, yeah, the confirmation bias, right? You have to be aware of your own preconceived notions, your own biases, because you are absolutely going to have those activated when we're talking about the liar's dividend. Another part you were talking about, right, is breaking news. At the News Literacy Project, we teach students how to mind the gap. In that time span between when an event happens and when news organizations can verify information, there's a gap, and that gap is flooded with misinformation and people suggesting and supposing what might have happened, and that can shape your perspective. So we want to be aware, right? One of the skills of news literacy is just recognizing there's a gap before information can be verified. So don't just trust the first thing you see when events are breaking.

Sean Marcus
And I'll jump in if I can, you know, that's where the AI literacy layers in on top of it. Because we look at something like Grok, an AI tool, where, during the Charlie Kirk shooting, you had so many retweets of Grok's responses to what was going on in that situation. But it was moving so fast that whether Grok was using reliable sources or not, those reliable sources were still speculating, filling in the gap. And so we saw a flood of misinformation coming out, then getting verified and reinforced by users who were using Grok, an AI tool, as a verification method for this breaking news. It starts to cycle in on itself, and then minding that gap can become so much harder if we're not aware of how AI is contributing to that cycle.

Jenna Spinelle
Yeah, so speaking of newsrooms: you know, in my first jobs in newspapers, I was the one tasked with updating the website, because I was the youngest person in the newsroom, and it was very much an afterthought after the paper was already done and put to bed. So that's a long way of saying that newspapers kind of famously missed the dot-com boom, or were behind on the trends and everything that was happening there, and I think we continue to see the ramifications of that today. But I wonder how they're thinking about AI. Are they cognizant of, well, we don't want to let this AI thing pass us by the same way that we kind of let the broader internet pass us by 20 years ago? How are they approaching some of these challenges that we've been talking about?

Pam Brunskill
Great question. I can add a little bit to Sean's point, which is that several news organizations have signed agreements with AI services, right? AI is trained on the internet, and the companies can either go and get training data without paying organizations, or, you know, not train on that content, in which case their models are going to get trained on slop and not necessarily credible information. So some news organizations have opted to sign agreements that they will get paid a certain amount, and the AI can get trained on their information. Other news organizations, and I'm thinking back to an article from probably a year and a half to two and a half years ago now, with CNET, had a whole bunch of articles that were just written by AI and didn't disclose it, to your point about transparency. Somebody found out, and people said, this is awful. So then they had to go through and correct it, and that became a big discussion in the news world: okay, we need to make sure whenever we use AI, we disclose it. And that has pretty much become a standard now, I think, for standards-based news organizations: they will disclose when they're using AI. I think it's perfectly acceptable to use AI to summarize, you know, sports scores, things like that, right? But in terms of reporting, you're still going to need people to do that job.

Cory Barker
How would you all evaluate the way that the popular press is actually covering AI? Whether that means, you know, its adoption within the public; a few folks in their presentations today talked about potential environmental impacts related to the use of AI, on the electrical grid, on water, those sorts of things. So how do you feel like the general popular news infrastructure is doing at informing the public about what AI is and some of its potential strengths and weaknesses? Pam, do you have anything there, as far as how you feel, as an educator, this is working?

Pam Brunskill
Well, just to go back to your original question, the one you just asked, Jenna, if I could, I'll hit that first. Standards-based news organizations, right, are going to make sure that they're verifying that images are accurate, and they're going to use photographs taken by photojournalists. And if they are using AI, they say it, and there's got to be a reason for it. I'm trying to think of an actual example from a standards-based news organization where they claimed something was a photojournalist's image and not AI, and I can't off the top of my head, but maybe we can talk about that after. In terms of mainstream news organizations covering AI: yeah, I think there are news organizations absolutely discussing AI, the positives, the benefits, and the negatives. But I've seen a whole lot more in technology publications like The Verge and Wired, a ton about AI, and of course, because that's their niche. Since this is not live, I can ask: is it supposed to be "nitch" or "neesh"? I'm never sure. I say "neesh." And from an educator standpoint, oh my, okay, so now we're going back, you can splice that out. In terms of education, it's a really tricky landscape. I hear from educators all the way from K through 12 that it's changing: their students are using AI, my own kids are using AI, you know, to help them with their homework, to help them study. So there's this spectrum of what is acceptable use and what is not, in terms of a district, in terms of an individual teacher, and it's all over; the decisions and what people think is acceptable are just so different. In terms of teaching about AI, that's really specific to the individual teacher and the individual district. I'm in a few different social media pages, right, where educators talk about, how do I do this? Can I use AI to help me create a PowerPoint? And others say, here you go, use this. So teachers are using it to write lesson plans, to create slide decks, to help lessen their workload. I've heard some teachers use it to give feedback to students; not all, right, some people will say, no, I'm not okay with that. And then, in terms of teaching students about AI, only some educators feel confident and comfortable doing that, and I'd say the majority are not.

Jenna Meleedy
I do see a lot of fearmongering about AI in the news, especially with huge national news outlets, and I think it's because it makes a flashy headline. You know, they're out there to get engagement, and pushing the fearful perspective on AI, trying to get people to worry about what AI is going to do to our kids, that sells. That sells news stories at a time when news is struggling. So at NAMLE we really try to create a more balanced perspective, looking at AI as you would any sort of technology, where there are uses and gratifications that are positive for people, and there are ways to abuse AI.

Jenna Spinelle
So Sean, you and I were talking earlier about content creators, and up to this point in the conversation, we've been talking about news organizations, right? We are seeing more and more journalists leave news organizations to strike out on their own, as well as others who have been in the content creation space the whole time. So can you talk a little bit about how some of these rules are different, or not, or sort of what the state of play is for news content creators?

Sean Marcus
Yeah, and that was a big push with one of the previous pieces that we did, Poynter-wide, on talking about AI. We were very intentional about developing material that was specific for content creators: training, disclosure, detection, those types of things. Because it is a slightly different landscape, not even slightly, it's an extremely different landscape, right? I don't necessarily want to use the Wild West kind of term, but at the same time, the guardrails are off in many ways. If you're an independent, individual news content creator, whose guiding star do you follow? It's really up to that individual creator to set the tone, to set their own ethical standards. So this is where we would always hope that there would be, once again, transparency in what they're doing, that they're taking you along the journey of: this is how I'm using AI, this is what I'm doing with it. If we're not seeing that, but we suspect there is a whole lot of AI use going on with the content that they're creating, then it's a red flag in terms of their reliability as a creator. So it's difficult, because it's just scattered in so many ways. There is not, not that I'm aware of, a content creator's handbook for doing good journalism and good reporting. They don't have the SPJ code of ethics that they necessarily have to follow. Not to say that there aren't individual news creators out there applying those ethical standards, but essentially we lose those checks and balances and verifications of an editor and an editorial flow and all those kinds of things, which, again, is true with AI or without. But when AI comes into the mix, we have the problem of not only needing to trust that individual creator, essentially taking them at their word, but also needing to trust their judgment in terms of how they're interacting with the content and material. So I think it adds yet another complicated layer to the content creator question, if you will, of being news reporters.

Cory Barker
If I can follow up on that: it does seem like we're in a situation where, as you've said, with a couple of different examples, potentially calling it a Wild West, we're at a transformative moment, to put it a little more neutrally. Different entities within the news or content industry are using AI, people are talking about it, and there's a sense of distrust or skepticism about what things are real and what things are not. In your planning at your various organizations, is there a concern that this is going to create so much distrust of news and journalism, or of content creators who use AI or talk about AI, that by the time there are more guardrails, or more infrastructure to help us know what is real and what's not, we won't be able to get back to a point where people trust what they're seeing at all? That this is going to go on for years, a long period of time with no sense of trust in what we're seeing online, just because we can't figure it out fast enough? Pam, what do you think?

Pam Brunskill
Oh, I think wholeheartedly that people can determine what information is credible or not and navigate this information landscape. Yes, AI is filling our feeds with a whole bunch of slop, right? But that doesn't mean we can't tell what is credible and what is not. It's recognizing those five core standards, right? Are you looking at news? Are you looking at something else? Because if it's something else, if it's meant to entertain you, fine. But if you're looking at news, there are certain things you want to be looking for. As Sean said, you want transparency, you want accuracy, you want reliability, you want context. These are all things you would be looking for, and whether it's a content creator or a media organization, these points will still be in place. You mentioned the Society of Professional Journalists; they have a code of ethics, and most, if not all, standards-based news organizations follow some form of the SPJ ethics code. So if you're looking at a standards-based news organization, they have their ethics code on their website, or you can call and find out what it is, and if they don't live up to those standards, you can call them out on it, and they will have to make corrections if they make mistakes, right? If a journalist or reporter makes a mistake, they have to issue corrections, and if it's egregious, they can lose their jobs. That doesn't happen for a content creator. So when we're looking at our feeds, with a standards-based news organization's post, a content creator's post, a friend's post, right, all intermingled, we have to start by recognizing, first of all: what am I looking at? Who's posting it? How credible is that source? Then we can get into the finer points: is there a watermark that says it is AI-generated? Are there hashtags that give us a little more context? And if it's about a breaking news event, then standards-based news organizations and other outlets will be covering it, and we can do some lateral reading: go do a new search and read more about it.

Jenna Meleedy
Yeah, I think it's really easy, especially as somebody who really wants to keep up to date with the news, to fall into that nihilistic, cynical perspective. But we're really not as helpless in this quest for the truth as it sometimes feels. There are so many strategies for reducing the noise and limiting your active news consumption to sources that you have researched and trust, and ways to take care of your mental health that can make news consumption a lot more beneficial for you.

Jenna Spinelle
And picking up on that, Jenna, I want to talk about news avoidance. For folks who can't see you: you are the youngest person in this room by 15 years, I want to say, at least. So talk about what you just said, right? Seeking out news sources that you research and that you trust is easier said than done, especially for folks in your generation who have come of age in the news environment of the past decade or so. So talk about how you get folks to do that, whether it's in your personal life or, I know you work with a lot of high school students at NAMLE. What are some of the strategies you use that maybe our listeners can adopt for the younger people in their lives?

Jenna Meleedy
So, from the perspective of a digital native who works with a lot of other digital natives, which, if you don't know, just means you've grown up with the internet: yeah, it really is about mental health and well-being first, because a lot of young people will hear the words politics or news and they will shut down immediately. And that comes from a lifetime of growing up with catastrophically stressful news all the time, constant or near-constant exposure to content online that is specifically made to upset people and provoke an emotional reaction. So naturally, what are you going to do if every single time you try to consume news, you get so stressed out you can't function? You're going to shut down. You're going to become desensitized to the things that you see or hear, to violent imagery, to things that would normally provoke a reaction; they have to get more extreme just to get that same reaction. And that results in a lot of apathy and cynicism among young people. I think the first step to combat that is to reduce the overstimulation and the overwhelming nature of the news. That doesn't mean no technology, no news, or no social media. The way people reduce their own overstimulation when it comes to technology is different for every person. For me, it means I have to force myself to spend 30 minutes every morning consuming the news, seeking out topics, and I have to do it as a podcast, because if it's visual and audio together, that's too overwhelming for me, or I have to do it while I'm working on some other task, just to fit it into my day. That's what works for me. But what I have been seeing among young people a lot is a tiredness when it comes to consuming information online. They don't go to traditional news sources as much as older generations, because they're looking for escapism, they're looking for entertainment, which means they turn to social media, which then coincidentally gives them political content they weren't even expecting to see. So that ends up being their only news consumption; they're news avoidant, and the only pieces of news they're getting are stressful, algorithmically optimized bits and pieces of real news. The first thing that I encourage people to do, people of all ages but especially young people, is to not take grounding themselves for granted. You hear about doomscrolling, which is when you get sucked into a cycle of scrolling on Instagram or TikTok or whatever platform you're on for hours and hours, and you completely lose track of time. It's very addictive. Take moments to ground yourself physically. I like to remind people: is there tension in your jaw? Is your tongue stuck to the roof of your mouth? Are you sitting in an awkward position? Do you need to adjust your posture? Keep track of where you are mentally. Do you feel overstimulated? Do you feel bored? You went onto this platform for entertainment; are you getting that from it? And then there are some more practical things you can do. You can reduce notifications so that you're not getting random alerts about huge disasters across the country throughout your day, because that will completely overstimulate you and ruin your mood. You can train your algorithm so that it works for you instead of the other way around
by pressing the button on content that says "not interested" or "do not recommend." You can block profiles that are spreading AI slop or other forms of stressful content, clickbait, and misinformation. And I think an important and underrated one is just finding opportunities to talk about the news with other people, now that news is a less communal thing. You're not sitting in the family room watching the seven o'clock news with your family anymore; you're in your own bubble, stuck in your own individualized algorithm. News is a more isolating experience than ever, and I think when you break that isolation and talk about the news with other people, it becomes less stressful. It becomes more grounded, more realistic, and you feel a sense of community that takes away some of that stress.

Cory Barker
It sounds to me, in the last few minutes of the convo, that you're all, in different ways, pretty optimistic about our ability to navigate the AI-influenced news ecosystem. Are there other things, beyond what we've shared here in the last few minutes, that you're really optimistic about, as far as our ability to get through this and figure out the best ways to use AI as producers, as consumers, as people who fill both of those roles depending on the day or the practice?

Sean Marcus
Can I flip that question on its head a little bit? Yeah. I actually want to very honestly say that for the past year, and I've said this to everybody, I have lived in a full state of existential dread because of the amount of time I've had to engage with AI, AI technology, AI literacy, and our understanding of it. But I will say, to contribute to that sense of optimism, there is the acknowledgment that we are in a very tenuous and strange place right now. We're at the beginning of what could potentially be, like, a 500-year run of technological advancement. Maybe it's a fad and it fizzles out; I doubt it. But acknowledging the fact that we have a lot of discomfort, a lot of unknowns, a lot of really shaky, weird stuff, for lack of a better way to say it, going around right now with AI: when we engage with it, most of us are sitting in a spot of angst. For students, it's like, ooh, am I cheating? Am I not cheating? Nobody has defined those lines for me yet, because those lines are not defined. So they're sitting with the angst of, should I be doing this? Should I not be doing this? Reporters are sitting with the angst of, how is this impacting my credibility? Anybody who's engaging with it, which, like I said before, is everybody at this point, is sitting with some level of angst about it. I think that once we open that box up and discuss and acknowledge the fact that this is equal parts exhilarating and dreadful, and we continue to drill down on both sides of that road, it helps us feel at least a little more balanced. We've got our sea legs on a really rocky boat, if you will. So I think there's really an element of just leaning into the uncertainty and the discomfort of it, not necessarily accepting it and being okay with it, but accepting the fact that, yeah, this is a weird time right now for technology and communication, and we have to collectively acknowledge that.

Pam Brunskill
Yeah, I had one thought while you were speaking, which is that individually, right, we all have our own agency, things we can do, but we also have a more collective ability to shape AI and AI policy. I don't know about government intervention; I haven't figured out my thoughts on that yet. But if we can collectively advocate for incentivizing credible information and labels on AI, right, and the specifics we would want our social media companies to put out for us, and if we collectively advocate for, not necessarily regulation, but expectations for AI that will prioritize credible content and deprioritize misinformation, then we get the good stuff at the top of our feeds. And collectively, we can ask for less of the polarizing, emotional content, because otherwise we're just going to keep getting fed it, because we know that keeps us on longer. We have agency as a society; we can ask for something else.

Jenna Meleedy
Yeah, so younger people, I've found, do not share the same dread or alarmist sentiments about AI that many other people do, and I think that's just because, for us, it's kind of just another convenient tool. It's a fact of life. It's something we don't take very seriously; you know, we've got that cynicism, that dark sense of humor, and we use it to make memes. So the bottom line is, AI is incredibly popular right now, and it's only going to continue to grow in popularity. Instead of fearing it, and instead of trying to fight back against it and limit it and restrict people from using it, why not find out how to work with it and use it in a way that has ethics in mind, to minimize harm as much as possible and maximize the benefits for people? There's got to be a balance.

Jenna Spinelle
I'm going to take us on a brief detour into the world of democracy that I inhabit and tell you about something called a dummymander, a term from the world of gerrymandering that I just learned, like, two episodes ago on our show. It's where you gerrymander something so much that it doesn't work anymore: the district becomes too sliced and diced, and voters don't behave the way the models predicted they would. I think there might be something similar, or potentially similar, happening with AI, where, you know, you're applying for a job and your AI-generated resume is interacting with the AI recruiter, and all of these things, right? So I guess, what are some of the ways that AI could implode in on itself, and how likely do you think it is that that might happen?

Sean Marcus
I mean, I can kind of jump in on that. I think that the example of the AI bot evaluating the AI bot's resume is a really good example of the type of interaction we can see happening, and we can see it on social media, a bot responding to a bot's post. Eventually the humans who are consuming it are going to catch on, and it's kind of like you said, Pam: collectively, we're all going to jump in and say, okay, this flood is enough. So really, to put it all together and answer that question: that optimism about what AI can do to help and to do great things in the future, we hope that's the baby we save, while the bathwater we throw out is, like, the social media bots that are flying around. But I could definitely see, if the push for AI stays fully unregulated, whether that's government regulation or some kind of, you know, tech alliance regulation, something of that nature, that being a really huge contributing factor to this freewheeling overuse and unethical practice, until eventually, yeah, the people rise up and say, enough is enough, we're tired of watching bots bicker with bots, it just becomes absolutely silly. I think there's some potential for that. I don't know; like I say, it's tough to predict the future. And I also want to throw in that idea that when we say the future, we tend to think 20 years down the road, 30 years down the road, but the future is a big, wide open space. Everybody gives me so much grief over it, but I always use that printing press comparison: the printing press ruled for a good 500 years before new technology came in. So we're talking about somebody who was around in, like, 1495 to 1505 trying to predict what 1978 was going to look like. We're in that 1495-to-1505 range. We have no idea what 300 or 400 years down the road is going to look like. So I think there's a temperance we have to apply in determining whether it's going to implode, explode, whatever, because in the next 20 or 30 years it'll do whatever it does, and then 100 years will pass.

Jenna Meleedy
Yeah, I think, why not use that to our advantage? I do think AI, and especially AI slop, is kind of accelerating internet fatigue, or social media fatigue. There's a meme that's like, "that's enough internet for today," and it always comes after you've seen some sort of horrific amalgamation of AI slop. So if that's what it takes to stop us in our doomscrolling cycle, to lessen addictive tendencies we might have with social media or other forms of media, why not use that as an excuse to take a break? And I also want to add that it is a kind of crazy and cool concept to think about the dead internet theory, the idea that a majority of the internet is just bot activity. While I do think that has the potential to be the end of some social media sites that could be overrun with AI and bots, I don't share the fear some people have that AI will replace people widespread across multiple job markets. Of course, with any emerging technology, some jobs will become outdated, but I tend to think of it as more: a lawyer will not be replaced by AI, but a lawyer who uses AI might replace a lawyer who does not use AI.

Pam Brunskill
I'm really torn on this one. I want to be, and I am, optimistic that we can navigate the information environment. But in terms of the future, there's always so much unknown; things can happen, right, that change the trajectory. And when I listen to the experts, the people who are really studying AI, I want to say almost all of them are real doomsdayers, and that scares me. But I hold on to: this is where we are right now. It doesn't mean we're going to get to that doomsday scenario. So we have to do our part, speak up about what we think the ethics of AI should be, follow suit, and use AI responsibly.

Cory Barker
I think that's a great place to leave it. Sean, Pam, Jenna, and Jenna, thank you all.