A new cybersecurity threat dubbed “LLMjacking” is costing organizations up to $100,000 per day as attackers exploit stolen cloud credentials to abuse enterprise large language models (LLMs), according to research from Sysdig’s Threat Research Team.
“An attacker is looking to use your organization’s resources, though in this case, they’re looking for access to a large language model (LLM),” said Crystal Morin, cybersecurity strategist at Sysdig and former U.S. Air Force intelligence analyst.
Financial and operational impacts
Earlier this year, victims faced average losses of $46,000 per day. By mid-year, expenses exceeded $100,000 daily as attackers began targeting advanced AI models like Claude 3 Opus. This surge coincided with a tenfold increase in malicious LLM requests in July.
With a single script, attackers can target 10 artificial intelligence (AI) services, including AWS Bedrock, Anthropic, OpenAI, and others. Their tactics have evolved from simply using available models to actively enabling new ones in victims’ accounts. Some attackers even attempt to disable logging features to conceal their activities.
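The tenfold jump in malicious requests points to one simple defensive signal: a sudden spike in LLM invocation volume against an account's recent baseline. A minimal sketch of that idea (the function name, threshold, and window are illustrative assumptions, not from Sysdig's research):

```python
def flag_request_spike(daily_counts, factor=10.0, baseline_days=7):
    """Flag days whose LLM request count is at least `factor` times
    the trailing `baseline_days` average. Returns (day_index, count) pairs."""
    flagged = []
    for i in range(baseline_days, len(daily_counts)):
        baseline = sum(daily_counts[i - baseline_days:i]) / baseline_days
        if baseline > 0 and daily_counts[i] >= factor * baseline:
            flagged.append((i, daily_counts[i]))
    return flagged

# Example: a week of normal traffic, then an LLMjacking-style surge.
counts = [120, 130, 110, 125, 140, 115, 135, 1400]
print(flag_request_spike(counts))  # → [(7, 1400)]
```

In practice the daily counts would come from provider usage logs or cloud audit trails, which is exactly why attackers who disable logging are so dangerous: the baseline disappears along with the evidence.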
Sysdig also highlighted a rise in AI-enhanced phishing and social engineering attacks. These schemes use LLMs to craft convincing, personalized messages tailored to individual victims, exploiting personal data such as shopping preferences or workplace details to compose emails that seem legitimate at first glance.
“They’re going to send you a message from this restaurant that’s right down the street or popular in your town, hoping that you’ll click on it,” Morin added. “So that will enable their success quite a bit. That’s how a lot of successful breaches happen. It’s just the person-on-person initial access.”
Predictions for 2025
Morin predicts that supply-chain attacks facilitated by AI will increase in 2025. Attacks are expected to start with spear-phishing campaigns generated by LLMs, which could compromise vendors and service providers, leading to widespread disruptions. She pointed to the damages from the Change Healthcare ransomware attack as an example.
“Going back to spear phishing: imagine an employee of Change Healthcare receiving an email and clicking on a link,” Morin said. “Now the attacker has access to their credentials or access to that environment, and the attacker can get in and move laterally.”
Furthermore, AI is anticipated to aid cybercriminals in bypassing multi-factor authentication systems and employing voice-cloning technologies for scams.
“Threat actors are learning and understanding and gaining the lay of the land just the same as we are,” Morin explained to The Register. “We’re in a footrace right now. It’s machine against machine.”
Sysdig researchers recommend organizations implement stronger credential protection, maintain detailed usage monitoring, and follow cloud security best practices to guard against LLMjacking attacks. As enterprises embed AI ever deeper into their operations, unauthorized access to these powerful models presents an emerging security challenge requiring vigilant defense.
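The usage-monitoring recommendation can be made concrete by tracking estimated spend per day and alerting when it crosses a budget, since the reported losses ($46,000 rising past $100,000 daily) show up first as billing anomalies. A minimal sketch, with purely illustrative per-token prices (real rates vary by provider and model):

```python
# Hypothetical per-1K-token prices for illustration only.
PRICE_PER_1K = {"claude-3-opus": 0.075, "gpt-4o": 0.015}

def daily_spend(usage, prices=PRICE_PER_1K):
    """Estimate one day's spend from a {model: tokens_used} mapping."""
    return sum(tokens / 1000 * prices[model] for model, tokens in usage.items())

def over_budget(usage, budget_usd):
    """True when estimated daily spend exceeds the configured budget."""
    return daily_spend(usage) > budget_usd

usage = {"claude-3-opus": 2_000_000, "gpt-4o": 500_000}
print(daily_spend(usage))            # → 157.5
print(over_budget(usage, 100.0))     # → True
```

Even this crude check would surface an LLMjacking incident within a day, provided the billing and usage data feeding it cannot be silenced by the attacker.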
“Just be careful what you click,” Morin advised.