ISACA finds only 35% of cybersecurity teams involved in AI policy development

Published 25 Oct 2024

As artificial intelligence (AI) adoption accelerates, only 35% of the 1,800 cybersecurity professionals ISACA surveyed say they are involved in developing AI policy, the association’s latest survey shows. The findings, presented at ISACA’s 2024 Europe Conference, reveal a widening security gap that may leave organizations more vulnerable due to a lack of cybersecurity oversight.

“In our thirst to innovate and adopt AI very fast in order to create a new product or service or improve customer experience, we usually focus on the ethical or compliance side of AI without taking into account cybersecurity, which is key,” said Chris Dimitriadis, ISACA’s Chief Global Strategy Officer.

Meanwhile, 45% of respondents reported having no role in developing, onboarding, or implementing AI solutions. Many organizations now let AI handle routine tasks, but bypassing cybersecurity professionals in the process increases exposure to risk. The findings point to a need for AI policies that incorporate cybersecurity perspectives from the outset.

Untapped Potential in AI-Driven Security

While many organizations already use AI-backed solutions to automate threat detection (28%) and strengthen endpoint security (27%), adoption lags in fraud detection (13%) and routine task automation (24%), leaving potential untapped. According to Erik Prusch, CEO of ISACA, “They seem like two logical areas to expand from, but I don’t think we’ve scratched the surface on it yet.”

“I love the idea of being able to be more systemically in control using technology if we can point it in the right direction. And if we can utilize it within our internal systems to give us greater comprehension, instead of periodic reviews,” he added.

Dimitriadis also emphasized how crucial it is to integrate AI into cybersecurity tools themselves: without AI-enabled tooling, organizations cannot effectively audit, secure, and privacy-protect the systems they rely on. Without that integration, they miss out on AI’s potential to bolster their resilience against emerging threats.

Cybersecurity Gaps Threaten AI Goals

This is especially true in the UK, where only 13% of organizations are considered resilient to cybercrime. A report from Microsoft and Goldsmiths, University of London, warns that the UK’s AI ambitions could be undermined without stronger security measures. The report estimates that improved cybersecurity could deliver a £52 billion annual benefit by cutting into the £87 billion that cyberattacks currently cost the UK each year.

AI-supported cybersecurity measures show clear advantages; organizations using AI for defense are reportedly twice as resilient to incidents. However, Nikesh Arora, CEO of Palo Alto Networks, noted that “every second day, we hear about a ransomware attack,” emphasizing the need for effective cybersecurity strategies to match AI’s rapid integration.

ISACA’s findings signal a call for AI governance that involves cybersecurity professionals from the beginning. As reliance on AI grows, embedding cybersecurity into AI policies will be critical for managing risks and securing the benefits of AI-enhanced technology.