Less than two years after the release of ChatGPT, enterprises are increasingly interested in integrating generative AI into their operations and products. A recent survey by Dataiku and Cognizant of 200 senior analytics and IT leaders at enterprise companies worldwide reveals that many organizations are investing significantly in exploring or implementing generative AI use cases.
However, the journey toward full adoption and productivity is fraught with challenges, presenting opportunities for companies that provide generative AI services.
Significant Investments in Generative AI
Recent survey results, announced at VB Transform, underscore the substantial financial commitments toward generative AI initiatives. Nearly three-fourths (73%) of respondents plan to invest over $500,000 in generative AI in the next year, while almost half (46%) will allocate more than $1 million. Notably, only one-third of the organizations have a dedicated budget for generative AI projects; more than half are drawing on existing budgets from IT, data science, or analytics teams.
The impact of this financial commitment on departments that could benefit from these budgets remains unclear, and the return on investment (ROI) is yet to be established. Nonetheless, there is optimism that the growing value from advancements in large language models (LLMs) will justify these costs.
"As more LLM use cases emerge, IT teams will need tools to monitor performance and costs effectively to optimize their investments and identify issues early," the study notes.
Another survey by Dataiku reveals that enterprises are exploring various applications of generative AI, from enhancing customer experiences to streamlining internal operations like software development and data analytics.
Persistent Challenges in Implementing Generative AI
Despite enthusiasm, integrating generative AI is complex. Survey respondents indicated facing infrastructure barriers in utilizing LLMs as desired. Additional challenges include ensuring regulatory compliance with laws such as the EU AI Act and navigating internal policies.
Operational costs also pose a significant hurdle. Popular hosted LLM services, including Microsoft Azure ML, Amazon Bedrock, and the OpenAI API, simplify generative AI implementation by abstracting technical complexities. However, their token-based pricing complicates cost management for CIOs overseeing large-scale AI projects.
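To see why token-based pricing is hard to budget for, consider a minimal cost-tracking sketch. The provider names and per-token prices below are hypothetical placeholders, not actual rates from any vendor; the point is that spend scales with token volume rather than with a fixed license fee.

```python
# Minimal sketch of token-based cost estimation across hosted LLM providers.
# All provider names and prices are hypothetical placeholders, not real rates.

# (input, output) cost in USD per 1,000 tokens
PRICE_PER_1K = {
    "provider_a": (0.0005, 0.0015),
    "provider_b": (0.0030, 0.0060),
}

def estimate_cost(provider: str, input_tokens: int, output_tokens: int) -> float:
    """Estimate the cost of a single LLM call under the placeholder prices."""
    in_price, out_price = PRICE_PER_1K[provider]
    return (input_tokens / 1000) * in_price + (output_tokens / 1000) * out_price

def monthly_projection(provider: str, calls_per_day: int,
                       avg_in: int, avg_out: int, days: int = 30) -> float:
    """Project monthly spend from average per-call token usage."""
    return calls_per_day * days * estimate_cost(provider, avg_in, avg_out)
```

Because both the call volume and the tokens per call fluctuate with usage, a projection like this must be re-run continuously, which is precisely the monitoring burden the survey describes.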
Alternatively, organizations can opt for self-hosted open-source LLMs, which may effectively meet enterprise needs and significantly reduce inference costs. Yet, this approach requires substantial upfront investment and in-house technical expertise, which many companies lack.
Tech stack sprawl further complicates adoption, with 60% of respondents reporting the use of more than five distinct tools or software products across the stages of the analytics and AI lifecycle, from data ingestion to MLOps and LLMOps.
Data Challenges
The rise of generative AI hasn't eliminated the pre-existing data challenges in machine learning projects. Data quality and usability continue to be the foremost concerns for IT leaders, cited by 45% of respondents, followed by data access issues at 27%.
Organizations often possess substantial data assets; however, their data infrastructures were built prior to the generative AI era, without accounting for machine learning needs. Data frequently resides in silos and disparate formats, necessitating preprocessing, cleaning, anonymizing, and consolidation before it can be utilized effectively. Data engineering and ownership management remain critical obstacles for many AI initiatives.
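The preprocessing steps named above can be sketched in a few lines. This is an illustrative toy, not a production pipeline: the field names (`email`, `signup`) and the hash-based anonymization scheme are assumptions made for the example.

```python
# Toy sketch of the preprocessing the survey points to: consolidating records
# from separate silos, normalizing formats, and anonymizing identifiers.
# Field names and the hashing scheme are illustrative assumptions.
import hashlib

def anonymize(value: str) -> str:
    """Replace a direct identifier with a stable one-way hash."""
    return hashlib.sha256(value.encode("utf-8")).hexdigest()[:12]

def normalize_record(record: dict) -> dict:
    """Normalize field names and anonymize known identifier fields."""
    cleaned = {k.strip().lower(): v for k, v in record.items()}
    if "email" in cleaned:
        cleaned["email"] = anonymize(cleaned["email"])
    return cleaned

def consolidate(*silos: list) -> list:
    """Merge records from several sources, dropping exact duplicates."""
    seen, merged = set(), []
    for silo in silos:
        for record in silo:
            cleaned = normalize_record(record)
            key = tuple(sorted(cleaned.items()))
            if key not in seen:
                seen.add(key)
                merged.append(cleaned)
    return merged
```

Even this toy shows why the work is costly at scale: every silo arrives with its own field conventions, and deduplication is only possible after normalization.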
"Despite the myriad of tools available, organizations struggle with data quality and usability, meaning that the data must be fit for purpose and cater to users’ needs," the study states. "Ironically, the biggest modern data stack challenge is not particularly modern at all."
Opportunities Amid Challenges
"Generative AI will continue to evolve, with new technologies and providers emerging," said Conor Jensen, Field CDO of Dataiku. "How can IT leaders remain agile in this shifting landscape while maximizing value production from generative AI?"
As generative AI transitions from exploratory projects to foundational elements of scalable operations, service providers can enhance tools and platforms to support enterprises and developers.
With the technology's maturation, opportunities will arise to simplify the technical and data stacks for generative AI projects, reducing integration complexities and allowing developers to concentrate on problem-solving and delivering value.
Enterprises can also ready themselves for upcoming generative AI advancements, even if they’re not yet engaged with the technology. By running small pilot projects and experimenting with new solutions, organizations can identify pain points in their data infrastructure and policies while developing in-house expertise to better harness generative AI's full potential.
As the field evolves rapidly, it’s crucial for enterprises to ensure their generative AI initiatives are future-proof. According to Jensen, the initial step is to create an infrastructure layer connecting models, data, and users.
"Decoupling LLMs and other models from service layers allows companies to manage everything consistently, treating LLMs as plug-and-play components," he explained.
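The decoupling Jensen describes can be sketched as a thin gateway that applications call instead of any model directly. The class and method names below are illustrative, not drawn from any vendor SDK.

```python
# Minimal sketch of decoupling models from the service layer: applications
# talk to a gateway, so LLM backends become plug-and-play components.
# All names here are illustrative, not from any specific SDK.
from abc import ABC, abstractmethod

class LLMBackend(ABC):
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class HostedBackend(LLMBackend):
    """Stand-in for a hosted commercial LLM service."""
    def complete(self, prompt: str) -> str:
        return f"[hosted] response to: {prompt}"

class SelfHostedBackend(LLMBackend):
    """Stand-in for a self-hosted open-source model."""
    def complete(self, prompt: str) -> str:
        return f"[self-hosted] response to: {prompt}"

class LLMGateway:
    """Service layer: applications call the gateway, never a model directly."""
    def __init__(self, backend: LLMBackend):
        self.backend = backend

    def swap(self, backend: LLMBackend) -> None:
        # Swapping the model requires no change to application code.
        self.backend = backend

    def complete(self, prompt: str) -> str:
        return self.backend.complete(prompt)
```

With this shape, moving from a hosted API to a self-hosted open-source model, or routing different use cases to different models, is a one-line configuration change rather than an application rewrite.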
Ultimately, a robust process for evaluating model results is essential to ensure enterprises use the most effective model appropriately. However, this remains a developing area.
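One way such an evaluation process might look, reduced to its simplest form: score candidate models on a shared task set and pick the best fit per use case. The exact-match metric here is a deliberate simplification; real LLMOps evaluation is far richer.

```python
# Toy sketch of a model-evaluation loop: score candidates on a shared task
# set, then select the best per use case. Exact-match scoring is a
# deliberate simplification for illustration.
from typing import Callable

def evaluate(model: Callable[[str], str],
             cases: list[tuple[str, str]]) -> float:
    """Fraction of prompts where the model output matches the reference."""
    hits = sum(1 for prompt, expected in cases if model(prompt) == expected)
    return hits / len(cases)

def pick_best(models: dict[str, Callable[[str], str]],
              cases: list[tuple[str, str]]) -> str:
    """Return the name of the highest-scoring model on the task set."""
    return max(models, key=lambda name: evaluate(models[name], cases))
```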
"Currently, LLMOps is still in its infancy, which could hinder the widespread adoption of the numerous LLMs available," Jensen concluded.