An image was posted to social media in October depicting Gov. Josh Shapiro in his signature blue suit, white collared shirt and rectangular black-frame glasses.
It’d be difficult to mistake the image for a real-life Shapiro. It has a cartoon style to it. And one of the letters on a sign behind Shapiro, reading “October – Still No Budget,” is nearly illegible.
The fake Shapiro held a clipboard with a two-item to-do list: Next to “get a budget done” is a red “X” mark, while a green check mark is next to “book another out of state fundraiser.” The ad mocked the governor for attending fundraisers in New Jersey and Massachusetts while the state went months without a budget.
The photo wasn’t posted by a random online troll. It came from Pennsylvania Treasurer Stacy Garrity, the Republican Party-endorsed candidate to face Shapiro next year as he seeks reelection.
Garrity has used artificial intelligence to take other shots at Shapiro, some of which could fool a casual viewer into thinking they are real. On Oct. 2, the Garrity campaign posted an image depicting Shapiro in front of the Hollywood sign in California, holding a sign that read, “Newsom Shapiro 2028.”
It was a not-so-subtle jab at the Democrat for his rumored White House aspirations and an attempt to tie him to the West Coast state’s progressive governor, Gavin Newsom.
Fake photographs aren’t exactly a new thing in politics. Congress tried to pass a law against creating them in 1912 after a conman was discovered with a doctored photo of himself shaking President William Howard Taft’s hand.
But the use of realistic AI-generated images to target political opponents is expected to grow substantially in the 2026 midterm election year. With a lack of federal protections and President Donald Trump’s efforts to derail state regulations, it’ll be up to voters to parse through potentially misleading information.
“It’s a brave new world,” political scientist Chris Borick said.
At Muhlenberg College, Borick teaches a political philosophy course where he and his students discuss AI use in campaigns. He compared the technology’s increasing prominence in politics to that of television ads in the 1950s and social media in the 2010s.
Borick noted that using AI tools to make a quick meme is “substantially different” than attempting to mislead voters with the technology.
“I think you’re crossing into different territory, and maybe even different legal ground, than you would if you put out an AI-generated image of an opponent in some kind of cartoonish … artificial setting,” Borick said.
Asked about the images at a news conference this month, Shapiro said he hasn’t seen any of the AI-generated photos or videos of him. He cracked a smile and said, “I assume they’re not very nice.”
“I can obviously brush off insults from those on the other side,” Shapiro said. Touting an information literacy toolkit created by his administration, Shapiro said it is important for officials to make sure students can verify content they see on social media.
CAMPAIGNING WITH AI
A study this year commissioned by the American Association of Political Consultants found that 59% of political consultants use AI tools at least once a week, most commonly to brainstorm marketing materials or for internal communications.
Julie Sweet, an AI in politics specialist at AAPC, acknowledged some bad actors are misusing AI, but she sees an opportunity for the technology to lower barriers to running a successful campaign. Essential to that goal, she said, is ensuring that under-resourced candidates know how to use AI tools ethically by listing proper citations and avoiding deceptive practices.
“That puts us in a far better position to combat against the bad actors and the disinformation actors,” Sweet said. “We, the good guys, need to know how to use these tools and use them in a responsible way.”
Garrity and state Sen. Doug Mastriano, the unsuccessful GOP gubernatorial candidate in 2022 who has teased another bid next year, may be the most prominent state officials using AI-generated images as part of their campaigns.
Mastriano, who frequently posts about Shapiro, published to his official Facebook page in November an AI video clip of the governor holding a smartphone in front of him, apparently posing for a selfie. The video has Shapiro saying, “Let’s get the light just right. Yeah, that’ll work. Perfect. Hashtag hard at work.”
Neither Garrity nor Mastriano responded to a request for comment.
Kevin Harley, a GOP political consultant at Quantum Communications in Harrisburg, said campaigns have long used AI tools to help parse through registered voter data and choose the most compelling messages to send to certain types of constituents, based on commercially available data they leave behind as part of their digital footprint.
“You know where they’re shopping. You know their neighborhoods. You know, basically, everything about somebody,” Harley said. “Then we layer on top of the voter files.”
Generative AI can then let a campaign quickly create and adapt any images and political messages — like text messages or phone calls — to best suit their target audience, Harley said.
He expects super PACs, which are barred from coordinating directly with political campaigns, to experiment more often with deepfake attack ads in 2026.
Marty Santalucia, who works at the Democratic Party-affiliated consultant group MFStrategies, has been working with candidates to integrate AI tools into their campaigns, including tools aimed at enhancing the individuality of a candidate’s contact with voters. He expects many local races to see AI misinformation come directly from candidates.
Still, Santalucia said some of the biggest shifts in AI technology for campaigns remain behind the scenes, and he warned candidates not to overuse many of the most well-known AI platforms.
“People may feel that it’s so easy … and you don’t have to keep the human touch — you just run a ChatGPT campaign,” Santalucia said. “And that’s gonna be a very ineffective, broken system that’s gonna come out of that because they’re going to oversimplify and over-lean on a tool that really works best when mixed with expertise.”
BEYOND CHIT-CHAT
AI chatbots have been used in private industries for years.
But according to Peter Loge, director of the Project on Ethics in Political Communication at George Washington University, political campaigns have been hesitant to incorporate them into their marketing strategies.
“Nobody wants to be the first one caught doing the thing,” Loge said. “Everybody wants to be the second or third one doing the thing, but they want somebody else to get in trouble — somebody else to experiment.”
The taboo around using AI has faded, Loge said, and campaigns are taking advantage of how the technology can help their candidate win.
One study published this month in Nature, a peer-reviewed science journal, found that using large language models like ChatGPT, Google Gemini or Claude to engage voters in chatbot conversations was more effective in persuading those voters toward a candidate than traditional marketing strategies.
But researchers also found that while making its arguments for a candidate, especially a conservative one, a chatbot often made inaccurate or misleading claims.
The American Association of Political Consultants study found 62% of consultants see “accuracy or reliability” of AI tools as the biggest barrier to adopting them.
“These tools are not an easy answer,” Sweet said. “They may create efficiency and they may help us be more effective, but there’s still a human who has to be held accountable.”
The long-term impact of misinformation concerned Borick most about AI’s political future. He said voters will be forced to gauge the authenticity of photos and videos, judgments that could be heavily influenced by a person’s political bias or motivations.
“We’re doing that through a polarized environment right now — hyperpolarized,” Borick said. “And when you review things in a polarized world, it’s different than a less polarized world. All those ingredients really make it a pretty complex environment that we’re gonna be dealing with.”
MISUSING AI
Using AI for campaign subterfuge, such as intentionally impersonating another candidate, is not directly prohibited in Pennsylvania. In June, the state House passed a bill requiring campaigns to disclose their use of AI to impersonate another candidate, but the Senate has not taken up the bill.
Advocates described the General Assembly’s bipartisan vote to ban non-consensual deepfakes as a big victory. Signed by Shapiro in July, the new law makes it a first-degree misdemeanor, or a third-degree felony if the deepfake is created with malicious intent.
That was only the beginning of a crackdown on the technology, according to state Rep. Joe Ciresi, D-Montgomery, who chairs the Pennsylvania House’s Communications & Technology Committee. He said lawmakers are planning a wide range of AI regulations, especially heading into the 2026 election year.
“Of course, the deepfake one is the big one,” Ciresi said. “But there are others that we have to look at.”
Ciresi said lawmakers are discussing the possibility of an AI omnibus bill, packed with several regulations they see as key for protecting the public.
But Trump signed an executive order this month to undermine and prevent nearly all state-passed AI restrictions, against the wishes of consumer advocates and many members of his own party, including Pennsylvania Attorney General Dave Sunday, who promised to defend Pennsylvania’s AI regulations if challenged by the federal government.
“There must be only One Rulebook if we are going to continue to lead in AI,” Trump wrote on social media, foreshadowing the order. “We are beating ALL COUNTRIES at this point in the race, but that won’t last long if we are going to have 50 States, many of them bad actors, involved in RULES and the APPROVAL PROCESS.”
Trump’s order grants the U.S. Attorney General the power to challenge in court any state AI laws that the Justice Department deems “excessive.”