
Political consultant indicted in fake Biden robocall in New Hampshire

The Democratic political consultant who admitted to using a deepfake of President Biden’s voice in a New Hampshire primary robocall earlier this year was indicted Wednesday.

Steve Kramer, who said he created the robocall to warn about the dangers of artificial intelligence (AI), was charged with bribery, intimidation and suppression, according to local outlet WMUR, which first reported the charges.

The call was the first known use of deepfake technology in U.S. politics, sparking a tidal wave of calls to regulate the use of AI in elections. The fake Biden voice in the call encouraged thousands of New Hampshire primary voters to stay home and “save” their votes.

“This is a way for me to make a difference, and I have,” Kramer told NBC News in February. “For $500, I got about $5 million worth of action, whether that be media attention or regulatory action.”

The consultant previously worked for Rep. Dean Phillips’s (D-Minn.) long-shot presidential campaign, which was suspended in March, though he said Phillips’s team was not connected to or aware of his robocall effort. He also backed efforts to regulate the technology.

“With a mere $500 investment, anyone could replicate my intentional call,” Kramer said in a statement in February. “Immediate action is needed across all regulatory bodies and platforms.”

The Federal Communications Commission (FCC) announced Wednesday that it will consider requiring political advertisers to disclose the use of AI in television and radio ads.

“As artificial intelligence tools become more accessible, the commission wants to make sure consumers are fully informed when the technology is used,” FCC Chair Jessica Rosenworcel said in a statement Wednesday. 

The FCC banned the use of AI-generated voices in robocalls earlier this year in response to Kramer’s effort in New Hampshire.

AI is “supercharging” threats to the election system, technology policy strategist Nicole Schneidman told The Hill in March. “Disinformation, voter suppression — what generative AI is really doing is making it more efficient to be able to execute such threats.”

AI-generated political ads have already broken into the space with the 2024 election. Last year, the Republican National Committee released an entirely AI-generated ad meant to show a dystopian future under a second Biden administration. It employed fake but realistic photos showing boarded-up storefronts, armored military patrols in the streets and waves of immigrants creating panic.

In India’s elections, recent AI-generated videos misrepresenting Bollywood stars as criticizing the prime minister exemplify a trend tech experts say is cropping up in democratic elections around the world.

Sens. Amy Klobuchar (D-Minn.) and Lisa Murkowski (R-Alaska) also introduced a bill earlier this year that would require similar disclosures when AI is used in political advertisements.

The Hill has reached out to the New Hampshire attorney general’s office for comment.
