A recent global survey from DataRobot reveals that only 34% of artificial intelligence (AI) practitioners feel fully prepared for the convergence of generative and predictive AI when it comes to meeting their organization's goals.
The survey, titled "The Unmet AI Needs Survey," was conducted from May to August this year with nearly 700 AI leaders and practitioners across a range of industries. It revealed a clear gap that is holding back widespread AI adoption despite the heavy investment being poured into the technology.
As Michael Schmidt, Chief Technology Officer at DataRobot, pointed out, “There is a widening gap between current tooling and what practitioners need to feel confident in AI outputs. Despite billions of dollars poured into AI, outcomes have been inconsistent.”
Gen-AI and Operational Deficiencies Stalling Adoption
One of the most significant pain points revolves around monitoring and observability, with 45% of respondents saying they struggle to ensure the reliability of AI models once deployed. This issue is pervasive across organizations, regardless of AI maturity. The complexity of monitoring outputs in real time, alongside concerns over performance, keeps practitioners from trusting the systems they build.
Generative AI development and deployment were also cited as problems by 35% of respondents. Many found it difficult to build interfaces for generative AI applications and to ensure the expected quality of AI outputs. Generative AI is still seen as a "black box" by many practitioners, leading to hesitancy in its adoption.
One ML engineer reflected on the frustrations of working with generative models, saying, “How do I know if this generated content is good enough?”
The third major obstacle, cited by 27% of those surveyed, lies in implementation and integration. AI teams report that their work is bogged down by the fragmented nature of the tools available to them. Practitioners, especially those using AI solutions from hyperscalers, complained of excessive time spent troubleshooting and a lack of interoperability.
A data scientist involved in the study noted, “We have to go through model risk management, compliance units, and it’s just a bit of a hassle.”
Furthermore, teams face difficulties in collaboration when projects pass through multiple departments. Fragmented workflows lead to delays, especially during the handoff between data scientists, engineers, and other stakeholders. Around 20% of respondents ranked collaboration as a critical unmet need, emphasizing that better integration across teams is essential to AI success.
Convergence and Hybrid Development
The survey also considered the future of AI, with 90% of respondents predicting that predictive and generative AI will converge within a year. In addition, 53% of respondents favor a hybrid mode of AI development that combines code with graphical user interface (GUI) methods.
Beyond these pain points, there are other obstacles to consider. Regulatory challenges, technical limitations, and cultural resistance will likely hinder full-scale AI deployment. In fact, according to Everest Group, up to 90% of AI proof-of-concept pilots may never make it to production.
The results of this survey highlight a stark reality: despite its rapid development, AI is still held back by fundamental barriers. To truly unlock AI's potential, organizations must address these unmet needs by investing in better tooling, fostering collaboration, and ensuring that their AI teams are supported with the right resources.