Despite years of security training, enterprise employees clicked on malicious links nearly three times more often in 2024 as artificial intelligence (AI) helped scammers create increasingly convincing attacks, according to research released today by cybersecurity firm Netskope.
The study shows more than eight out of every 1,000 workers now fall for phishing attempts monthly, marking a 190% increase from 2023.
“Gone are the days when data security was an afterthought. It must be seamlessly integrated into every aspect of an organization’s operations,” says Ray Canzanese, director of Netskope Threat Labs.
Attackers are shifting away from traditional email-based scams. Fake websites now look almost identical to real ones, so even careful workers struggle to spot the difference. Search engines account for 19% of all malicious clicks, as criminals manipulate search results and place deceptive ads. Shopping websites follow at 10%, with technology sites at 8.8% and business-related platforms at 7.4%.
Cloud applications emerged as prime targets, representing 27% of all phishing clicks. Microsoft services proved particularly vulnerable, with attackers focusing mostly on Microsoft Live and Microsoft 365 credentials. Compromised accounts often end up on illegal marketplaces, where buyers resell them to other hackers or use them to extort businesses.
“The variety of phishing sources illustrates some creative social engineering by attackers. They know their victims may be wary of inbound emails but will much more freely click on links in search engine results,” the researchers noted.
The rise in successful attacks coincides with the widespread adoption of AI tools in the workplace. The report found that 94% of organizations now use AI applications, up from 81% in 2023, with ChatGPT leading the way. This increased familiarity with AI may contribute to users letting their guard down when faced with AI-generated content.
Companies are fighting back with new defensive tools. Some coach workers in real time about suspicious links, while others use data loss prevention (DLP) solutions to control what data flows into GenAI apps. But as scammers continue to refine their tactics, these defenses may not be enough.
The report's trends suggest the problem will persist into 2025. As more companies adopt AI tools, the risks continue to grow: the average company now uses nearly 10 different AI apps, creating more avenues for attackers to gain access.
Experts recommend a fundamental shift in security strategy. Organizations must move beyond simple email filtering and periodic training. Without better protection, the number of successful attacks will likely keep rising. As Canzanese puts it, “The common thread for organizations working to safely enable the use of apps in the enterprise and mitigate the challenges across the threat landscape is the need for modern data security.”