Untested AI solutions spark costly failures for 68% of businesses, study warns

Published 19 Sep 2024


Photo by Christina Morillo

As companies rush to integrate artificial intelligence (AI) into their operations, a new study from Leapwork warns of a hidden flaw: 68% of businesses are facing costly breakdowns in performance and reliability, exposing the cracks in AI’s untested promises.

Many companies see AI as essential for automating aspects of software development in an effort to keep pace with rapid technological change. The promise of AI-powered code assistants, managers, and generative tools has attracted significant attention, with Gartner predicting that 75% of enterprise software engineers will be using AI code assistants by 2028.

However, the Leapwork study reveals the risks that come with AI-powered workflows.

The research was based on feedback from 401 respondents, including C-suite executives and technical leads; 85% of them had integrated AI apps into their tech stacks in the past year.

Robert Salesas, CTO of Leapwork, emphasized the limitations of AI and the need for thorough testing. He stated, “For all its advancements, AI has limitations, and I think people are coming around to that fact pretty quickly. The rapid automation enabled by AI can dramatically increase output, but without thorough testing, this could also lead to more software vulnerabilities, especially in untested applications.”

Testing and Security Challenges in AI Applications

One of the most common issues reported by respondents was integration failures, accounting for 21% of identified bugs in AI apps. Security vulnerabilities were also a significant concern, with 23% of respondents expressing worries about potential breaches. 

The survey also reveals gaps in testing resources and practices. Only 16% of companies rated their testing processes as efficient. About 24% of organizations do not have a dedicated team or individual responsible for testing AI apps, while 26% lack a commercial testing platform. Additionally, nearly a third of respondents (30%) expressed doubt that their current testing practices could guarantee reliable AI applications.

The Enduring Importance of Human Testing in AI Solutions

These findings should nudge CIOs and CTOs to move beyond traditional, isolated testing methods toward a holistic approach that thoroughly vets every application and user journey in this era of AI-powered solutions.

Christian Brink Frederiksen, CEO of Leapwork, noted the significant number of outages this year alone, many of which affected millions of customers of big brands. “We’ve been given a wake-up call no one can ignore,” he said.

As the software industry grapples with the limitations of AI in coding and testing, there is a growing recognition of the enduring value of human testing and handcrafted software. While automation and AI-based tools have their place, they cannot replace the critical thinking and expertise that human testers bring to the table.