There is little the Biden administration enjoys more than its power to silence conservatives. The past three years under President Joe Biden have been an astonishing journey into an Orwellian landscape of censorship and slander, where the opposition is shut down in any way possible.
The depths of the administration’s censorship of conservative voices have been revealed, with the government’s eyes all over social media platforms, college campuses, and, of course, the mainstream media. The administration has controlled the narrative on everything from COVID-19 to the Biden family’s corruption.
Before Biden, the terms “misinformation” and “disinformation” were associated with dictatorships and third-world countries. Now that the censorship scandal has been blown open, liberals are searching for new ways to cast doubt on conservative opinions, and it seems technology has once again given them the answer.
Welcome to a new political landscape where AI is already labeled a threat to “perceived reality.” And its implications are far-reaching, from granting the guilty “plausible deniability” to fueling accusations of “spreading misinformation.”
Libby Lange of Graphika, a “misinformation tracking organization,” says AI can disrupt our “understanding of truth.” When everything can be fake and everyone claims manipulation, it becomes hard to know what is true, she notes, cautioning that the absence of a clear truth lets political actors spin information to support their agenda.
Hany Farid, a professor at the University of California at Berkeley, suggests that AI introduces a “liar’s dividend”: when individuals such as police officers or politicians are caught saying something damaging, they can claim plausible deniability. The technology’s ability to generate realistic fake content makes it easier to dismiss genuine evidence of wrongdoing by arguing that it could have been manipulated or fabricated.
Globally, politicians have been quick to jump on the “AI misinformation” train, using it to distance themselves from apparent guilt.
In April, a 26-second voice recording surfaced, allegedly featuring a politician from the southern Indian state of Tamil Nadu accusing his party of unlawfully amassing $3.6 billion. The politician, however, denied the authenticity of the recording, dismissing it as “machine-generated.” Experts are uncertain about the audio’s veracity.
A low-quality video emerged late last year depicting a Taiwanese politician entering a hotel with a woman, suggesting an extramarital affair. Despite the allegations, commentators and fellow politicians swiftly defended him, claiming the footage was AI-generated. Again, experts have been unable to determine whether the video was manipulated with AI.
AI companies have cautioned against using their tools in political campaigns, citing concerns about potential misuse. Recently, OpenAI banned a developer from its platform after he created a bot imitating long-shot Democratic presidential candidate Dean Phillips. Initially backed by Phillips’s supporters, the bot drew scrutiny after The Washington Post covered the story, and OpenAI determined that using its technology for political campaigning violated its rules.
AI is becoming a weapon to destroy political adversaries. Earlier this month, actor Mark Ruffalo shared AI-generated images purporting to show former President Trump with teenage girls aboard a private plane owned by convicted sex offender Jeffrey Epstein. Ruffalo later apologized for the post. In response, Trump, who has been critical of AI, posted a message on Truth Social stating, “This is AI, and it is very dangerous for our Country!”
Still, AI gave Trump an excuse to clap back at a Fox News ad featuring some of his verbal missteps, such as referring to the California city of Paradise as “Pleasure” and his momentary inability to pronounce “anonymous.” While he claimed the ad was AI-manipulated, every incident it referenced was backed up by prior coverage.
But AI-generated deepfakes, which convincingly replicate a person’s voice and appearance, are becoming more common and are widely shared on platforms like X and Facebook. These realistic fakes often go viral. Unfortunately, methods for detecting AI-generated media are not advancing as quickly as the technology that creates it.
Tech and social media companies are considering systems to automatically flag AI-generated content; for now, only experts can reliably tell real media from fake. Aviv Ovadya, a Harvard expert on AI’s impact on democracy, notes that public awareness of deepfakes has grown. But as politicians watch opponents use claims of AI manipulation as a defense, more of them will hide the truth behind the same claims.
And this is before the labeling of truths about issues like COVID-19 and Hunter Biden’s laptop as “AI-generated misinformation” even comes into play. One thing is certain: as Biden stumbles into the 2024 election, he will fully embrace this new AI scapegoat to explain away his never-ending gaffes on the world stage.