Generative AI: Promises and Challenges in Business Adoption
Generative AI is garnering significant attention, driven by tools such as Midjourney, Runway, and OpenAI’s ChatGPT. But businesses remain skeptical of the technology's ability to improve their bottom lines, as recent surveys and reporting from Ron Miller indicate.
In a Boston Consulting Group (BCG) survey of more than 1,400 C-suite executives conducted this month, 66% said they were ambivalent about or outright dissatisfied with their organization’s progress on Generative AI so far. Respondents cited a lack of talent and skills, unclear strategic roadmaps, and the absence of a framework for deploying the technology responsibly.
Notably, executives from diverse sectors—spanning manufacturing, transportation, and industrial goods—still prioritize Generative AI. A remarkable 89% of respondents view this technology as a "top-three" IT initiative for their companies in 2024. Despite this, only about half foresee substantial productivity gains—specifically a 10% increase or more—in the workforces they manage.
These findings, combined with a BCG survey from late last year, reveal a prevailing skepticism within enterprises about generative AI tools. In that earlier survey, over 50% of 2,000 decision-makers expressed apprehension about GenAI adoption, citing fears that it could lead to poor or illegal decision-making and jeopardize data security.
Concerns over "poor or illegal decision-making" particularly relate to copyright issues—an ongoing debate in the realm of Generative AI. These AI models “learn” from various examples (like illustrations, photos, and texts) to generate essays, artwork, music, and more, often without compensating or notifying the original creators. The legality of training models on copyrighted material without permission is currently being deliberated in numerous court cases. A significant risk for Generative AI users lies in "regurgitation," where a model produces an exact replica of a training example.
In a recent article published in IEEE Spectrum, AI critic Gary Marcus and visual effects artist Reid Southen argue that AI systems, including OpenAI’s DALL-E 3, can reproduce training data verbatim even without being specifically prompted to do so. They also note that there are no public tools or databases users can consult to gauge infringement risk, and little guidance on how to navigate these issues.
It’s therefore not surprising that a poll by Acrolinx, a content governance startup, found that nearly a third of Fortune 500 companies consider intellectual property their primary concern about Generative AI.
To mitigate IP concerns, some corporate leaders are seeking indemnification from Generative AI vendors. Companies including IBM, Microsoft, Amazon, Anthropic, and OpenAI have committed to defending their customers financially against copyright claims arising from the use of their GenAI tools.
Despite these assurances, the scope of that protection remains unclear, as Reworked’s David Barry has noted. If a user generates content that is likely to infringe, for instance, it’s uncertain whether a vendor like OpenAI would actually assume liability. Still, these protections are an improvement over having none at all.
Concerns about data security in the context of Generative AI are proving harder to address. Fearing that sensitive data could end up in the hands of AI vendors, numerous firms, including Apple, Bank of America, Citi, Deutsche Bank, Goldman Sachs, Wells Fargo, JPMorgan, Walmart, and Verizon, have barred employees from using public AI tools like ChatGPT. In response, vendors such as OpenAI have tried to clarify their data policies, assuring customers that they do not use corporate data to train their models, at least under certain circumstances. Whether those commitments will be enough to win over hesitant enterprise clients remains to be seen.
Given these complexities and concerns, 65% of executives in the January BCG survey believe it will take at least two years for Generative AI to move beyond the hype. To harness its capabilities responsibly, they say, a significant portion of their workforce will need upskilling, and AI regulations will have to be worked out in each of the territories where they operate.
Outside Europe, such regulation is unlikely to arrive soon, and whatever does emerge will have to keep pace with a rapidly advancing technology. On a more positive note, the January BCG survey also highlighted executives who are embracing Generative AI despite the prevailing uncertainty. Among companies planning to invest more than $50 million in Generative AI in 2024, 21% have already trained over a quarter of their workforce on AI tools. Furthermore, 72% of these major investors are preparing for impending AI regulations, and 68% have put guidelines in place for the ethical use of Generative AI at work.
"This is the year to turn the promise of Generative AI into tangible business success," stated BCG CEO Christoph Schweizer in an email. "Every CEO, including myself, has faced a steep learning curve with Generative AI. Rapid technological change can lead to a wait-and-see approach, but early adopters are already experimenting, learning, and scaling their efforts."