FBI sounds alarm on scams using deepfakes and AI

Published 6 Dec 2024


The FBI has issued a warning about criminals' growing use of generative AI tools to enhance their financial scams. These tools are making fraud attempts more convincing and harder to detect, the Bureau said in a December alert. Criminals are now using AI to create realistic messages and fake images, and even to clone voices and videos, posing a serious risk to unsuspecting victims.

“The FBI is warning the public that criminals exploit generative artificial intelligence (AI) to commit fraud on a larger scale, increasing the believability of their schemes,” the agency stated in its PSA.

How genAI boosts scams

The PSA details how cybercriminals have adopted generative AI tools, such as OpenAI's ChatGPT, to produce messages with fewer grammatical errors, in multiple languages, and at greater speed, allowing them to reach a wider audience than before. These schemes often involve social engineering tactics, such as romance or investment fraud.

“Criminals generate fraudulent identification documents, such as fake driver’s licenses or credentials (law enforcement, government, or banking) for identity fraud and impersonation schemes,” the FBI said.

The impact of these AI-powered scams is already significant. Between January and October, the FBI's Internet Crime Complaint Center received 38,000 reports of investment scams, resulting in $4.7 billion in losses. That is a sharp rise from the 30,000 reports and $3.6 billion in losses recorded over the same period last year.

Criminals may also use AI to generate realistic images for impersonation, often populating social media profiles or fake identification documents so that fabricated identities appear legitimate at first glance.

The Bureau has also reported a rise in the use of deepfake technology to mimic individuals’ voices and videos. In one notable case, a finance worker was duped into paying $25 million to a fraudster who successfully impersonated the company’s CFO during a conference call. In some cases, criminals have used voice cloning to impersonate loved ones, demanding money urgently while pretending to be in an emergency.

Steps to protect yourself

The FBI is urging the public to stay vigilant and take measures to protect themselves against these scams. It advises establishing a secret word with family members to verify identity, closely inspecting images or videos for imperfections, and never rushing into financial transactions out of fear or urgency.

“If you open an email or receive a phone call or text message, and your immediate reaction is to feel fear or anxiety, that could be a flag for you that something else is going on,” said Scott Hellman, an FBI Supervisory Special Agent in San Francisco.

The use of generative AI in scams represents a new chapter in cybercrime, one that requires individuals and organizations alike to adapt quickly. The FBI aims to raise awareness and encourage preventive measures as the technology continues to evolve. Public awareness, combined with new security strategies, may be the best defense against these increasingly sophisticated fraud attempts.