Generative AI is ruining the internet—says Google research paper

Published 9 Jul 2024

Why trust Greenbot

We maintain a strict editorial policy dedicated to factual accuracy, relevance, and impartiality. Our content is written and edited by top industry professionals with first-hand experience, and undergoes thorough review by experienced editors to guarantee adherence to the highest standards of reporting and publishing.



In a recently released paper, Google researchers found that generative artificial intelligence (AI) is ruining the internet with fake or doctored content intentionally used to spread false information or deceptive narratives.

“Manipulation of human likeness and falsification of evidence underlies the most prevalent tactics in real-world cases of misuse,” the researchers reported.

The paper, first noted by 404 Media and titled “Generative AI Misuse: A Taxonomy of Tactics and Insights from Real-World Data,” found that most documented misuse of generative AI involves ‘blurring the lines between authenticity and deception’ by posting doctored AI content on the internet.

AI’s Digital Truth Distortion

By studying previously published research on generative AI and 200 news articles documenting AI misuse, the researchers found that what gets labeled “misuse” of AI often sounds like the technology working exactly as intended.

“Most of these [AI content] were deployed with a discernible intent to influence public opinion, enable scam or fraudulent activities, or to generate profit,” the researchers added.

The study’s findings suggest that the general public’s misuse of generative AI means the technology is doing its job too well: it takes minimal technical expertise to generate doctored content, and even less to propagate it across the internet.

Mirror, Mirror: Google’s Blurred Reflection

Conveniently, the researchers fail to mention that Google itself pushes the same kind of AI-generated misinformation to its large user base, permitting, and sometimes being the source of, false images and information, which undermines users’ ability to verify or fact-check what they see.

“Likewise, the mass production of low quality, spam-like and nefarious synthetic content risks increasing people’s skepticism towards digital information altogether and overloading users with verification tasks,” they write.

Disturbingly, the researchers also noted instances where ‘high profile individuals are able to explain away unfavorable evidence as AI-generated, shifting the burden of proof in costly and inefficient ways.’

As waves of AI-generated content continue to flood the internet, research like this, though not yet peer-reviewed, underscores the importance of understanding AI’s role in spreading misinformation.