Musk shares Kamala Harris deepfake, sparks misinformation and content moderation worries

Published 31 Jul 2024


Elon Musk shared a deepfake political ad of Vice President Kamala Harris on X, sparking outrage and concerns about the power of artificial intelligence (AI) to spread misinformation and influence the upcoming election.

The video, posted on Friday, July 26, features an audio clip with a voice generated to sound strikingly similar to Harris, in which she appears to call herself a “deep state puppet” and a “diversity hire” and to declare herself unfit to run the country.

The clip is an altered version of an actual Harris campaign video.

Many users called out Musk’s apparent disregard for the platform’s own policies on synthetic media, which state that manipulated media must be labeled to help people understand its context.

X Fails to Enforce Its Own Policies

While the original poster of the video labeled it “Kamala Harris Campaign Ad PARODY,” Musk reposted it with the caption, “This is amazing,” and a laughing emoji.

Although Musk’s post lacked the context required under his own company’s policy, X neither flagged nor removed the video.

Misleading posts on the platform are usually given context through the “Community Notes” feature, which allows users to add background or fact-check content. By Friday night, a few notes had been proposed by several contributors.

“This is an AI-generated video of Vice President Kamala Harris using audio clips that were never actually stated by the VP,” one Community Note proposed.

To date, none of the notes has been attached to Musk’s post or the original. As the video surpassed 100 million views, calls for accountability intensified, raising serious questions about how the platform’s policies are enforced under Musk’s ownership.

Actions and Reactions

Critics, including Senator Amy Klobuchar, who has introduced legislation to ban deceptive deepfakes of federal candidates, warned of the danger of Musk spreading fake AI media throughout the election season, with consequences for all parties.

Many also recalled the fake robocall that imitated President Joe Biden’s voice and urged voters to stay home during the primary elections.

California Governor Gavin Newsom expressed his disapproval on X, stating, “Manipulating a voice in an ‘ad’ like this one should be illegal. I’ll be signing a bill in a matter of weeks to make sure it is.”

Musk later defended the post with a snarky reply: “Parody is legal in America.”

Linda Yaccarino, X’s chief executive, has yet to publicly address Musk’s post, fueling concerns about her leadership, which increasingly involves navigating her boss’s controversies while managing the platform’s reputation. Her recent interviews have been described as awkward and evasive, adding to that skepticism.

Chris Kohl, the original poster of the video and a creator of conservative-leaning online content, confirmed he used AI to generate the voice but did not reveal which program he used.

He also said that “leftists need to relax” and that the people criticizing the video are “trying desperately to find a way to attack Elon Musk.”

Experts argue that as powerful figures like Musk continue to disseminate false information, stricter laws are needed to safeguard the integrity of elections and public discourse.