According to U.S. officials, agents working with the Chinese and Iranian governments had fake, AI-generated content ready to release ahead of the 2020 American presidential election in an effort to influence its outcome.
The National Security Agency found evidence of China and Iran’s capacity to produce deepfakes, a source told CNN.
“The question becomes how quickly can we spot an anomaly and then share that rapidly within the United States,” a former senior U.S. official told CNN. “Are we winning the race against a series of adversaries that might operate within the US? That’s the challenge.”
“Leading adversaries of the United States — China, Russia, and Iran — are strategically utilizing AI to diminish U.S. influence in targeted regions by employing tactics such as disinformation campaigns and the advancement of unmanned military capabilities,” Concentric noted last November, adding, “China now has at least 130 large language models (LLMs), accounting for 40 percent of the global total and just behind America’s 50 percent share.”
In November 2021, the Department of Justice charged “two Iranian nationals for their involvement in a cyber-enabled campaign to intimidate and influence American voters, and otherwise undermine voter confidence and sow discord, in connection with the 2020 U.S. presidential election.”
The FBI issued an alert in March 2021 that stated:
Malicious actors almost certainly will leverage synthetic content for cyber and foreign influence operations in the next 12-18 months. Foreign actors are currently using synthetic content in their influence campaigns, and the FBI anticipates it will be increasingly used by foreign and criminal cyber actors for spearphishing and social engineering in an evolution of cyber operational tradecraft. …
Russian, Chinese, and Chinese-language actors are using synthetic profile images derived from GANs [generative adversarial networks], according to multiple private sector research reports. These profile images are associated with foreign influence campaigns, according to the same sources.
The FBI also described how to spot signs that an image may be synthetic:
Visual indicators such as distortions, warping, or inconsistencies in images and video may be an indicator of synthetic images, particularly in social media profile avatars. For example, distinct, consistent eye spacing and placement across a wide sample of synthetic images provides one indicator of synthetic content. Similar visual inconsistencies are typically present in synthetic video, often demonstrated by noticeable head and torso movements as well as syncing issues between face and lip movement, and any associated audio.
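To make that eye-placement heuristic concrete, here is a minimal sketch in Python, not the FBI's method but one plausible way to apply the indicator it describes. It assumes the open-source face_recognition library and a batch of same-sized profile avatars (the file paths and function names below are hypothetical), and measures how much eye coordinates vary across the batch:

```python
# Minimal sketch of the "consistent eye placement" heuristic.
# Assumptions: the open-source `face_recognition` library is installed,
# and all input avatars have the same dimensions (otherwise pixel
# coordinates are not comparable). A low spread is a hint, not proof.
import face_recognition
import numpy as np

def eye_centers(path):
    """Return [left_x, left_y, right_x, right_y] for the first detected face, or None."""
    image = face_recognition.load_image_file(path)
    faces = face_recognition.face_landmarks(image)
    if not faces:
        return None
    landmarks = faces[0]
    # Each eye is a list of (x, y) landmark points; average them to one center point.
    left = np.mean(landmarks["left_eye"], axis=0)
    right = np.mean(landmarks["right_eye"], axis=0)
    return np.concatenate([left, right])

def eye_position_spread(paths):
    """Mean standard deviation of eye coordinates across a batch of avatars."""
    centers = [c for c in (eye_centers(p) for p in paths) if c is not None]
    if len(centers) < 2:
        return None
    return float(np.std(np.stack(centers), axis=0).mean())

# Hypothetical usage: avatars pulled from a cluster of suspect accounts.
# spread = eye_position_spread(["avatar1.jpg", "avatar2.jpg", "avatar3.jpg"])
```

The logic behind the check: genuine photo collections show wide natural variation in where eyes fall in the frame, while GAN face generators trained on aligned datasets tend to place eyes at nearly fixed pixel coordinates, which is exactly the cross-image consistency the FBI notice flags. A spread of only a few pixels across dozens of unrelated accounts would be one signal of synthetic origin, to be weighed alongside the other indicators above.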