If you described your symptoms to me as a business leader and I entered them into ChatGPT, would you expect me to generate and prescribe a treatment plan for you without consulting a doctor first?
What if I offered you a deal: the world's top data scientists would join your organization, but every one of your business experts must go to your competitor, leaving you with data but no experts to provide context?
Are You Ready for AI Agents?
In today’s AI-driven landscape, there is no shortage of voices discussing the opportunities, risks, and best practices for integrating generative AI, particularly language models such as GPT-4 or Bard. Every day brings announcements of new open-source models, research breakthroughs, and product launches.
Amidst this rapid development, the focus has centered on the capabilities of language models themselves. Yet language is only effective when paired with knowledge and understanding. Someone who memorizes every word related to chemistry but lacks foundational knowledge of the field cannot put that vocabulary to use.
Getting the Recipe Right
Language models can mislead because they generate content without genuine understanding. Asked to create a new recipe, for example, a model analyzes correlations in previous recipes but has no intrinsic sense of what tastes good. It avoids unusual combinations such as olive oil, ketchup, and peaches only because those pairings rarely appear in its training data, not because it possesses actual culinary expertise.
Thus, a well-crafted recipe from a language model is statistically derived, thanks to the input from culinary experts. The key to effective language models lies in the integration of expertise.
Expertise Combines Language with Knowledge and Understanding
The phrase "correlation does not imply causation" resonates with data professionals, highlighting the risk of inaccurately linking two unrelated phenomena. While machines are proficient at identifying correlations and patterns, true expertise is required to discern causation and guide decision-making.
In our learning journey, language is merely the starting point. As children develop language, caregivers impart knowledge about their environment. Eventually, children grasp cause and effect, linking actions to outcomes, such as jumping into a lake and getting wet. By adulthood, we have internalized complex structures of expertise that interweave language, knowledge, and understanding.
Recreating the Structure of Expertise
When exploring any topic, possessing language without knowledge or understanding does not equate to expertise. For instance, I may know that a car has a transmission and an engine with pistons, but my understanding of how they work and the ability to fix them would require hands-on experience—an area where I lack expertise.
Translating this into a machine context, language models without associated knowledge or understanding should not make decisions. Allowing a language model to operate independently is akin to giving a toolbox to someone who only knows how to predict the next likely word related to cars.
Harnessing Language Models by Recreating Expertise
To effectively use language models, we need to start with expertise and reverse-engineer the process. Machine learning (ML) and machine teaching focus on conveying human expertise into machine-readable formats, enabling machines to inform or make decisions autonomously, thereby enhancing human capacity for nuanced decision-making.
A common misconception about AI and ML is that data is the most critical element. In reality, expertise holds that position. If a model lacks the guidance of an expert, what valuable insights can it derive from the data?
By identifying patterns that experts recognize as beneficial, we can translate that knowledge into machine language for autonomous decision-making. Thus, the process begins with expertise and works backward. For instance, a machine operator may recognize certain sounds indicating necessary adjustments. By equipping machines with sensors, this expertise can be translated into machine language, freeing up the operator for other tasks.
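To make the operator example concrete, here is a minimal sketch of what "translating expertise into machine language" can look like in practice. Everything in it is hypothetical: the threshold value, the sensor readings, and the rule itself stand in for knowledge that would be elicited from a real operator and calibrated against real equipment.

```python
import math

# Hypothetical rule elicited from the operator: a loud rattle
# (high vibration energy) means the machine needs adjustment.
RMS_THRESHOLD = 0.8  # illustrative value, not from a real machine

def rms(readings):
    """Root-mean-square energy of a window of vibration readings."""
    return math.sqrt(sum(x * x for x in readings) / len(readings))

def needs_adjustment(readings):
    """Encode the operator's rule of thumb as a machine-checkable rule."""
    return rms(readings) > RMS_THRESHOLD

# A quiet, steady machine versus one that has started to rattle.
steady = [0.1, -0.2, 0.15, -0.1, 0.05]
rattling = [1.2, -1.1, 0.9, -1.3, 1.0]

print(needs_adjustment(steady))    # False
print(needs_adjustment(rattling))  # True
```

The point is not the arithmetic but the workflow: start with what the expert already knows, express it as a rule a machine can evaluate continuously, and free the expert for decisions that still require judgment.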
Identify Critical Expertise
When building AI solutions, organizations must determine which expertise is most crucial and assess the risk associated with losing that knowledge versus the potential benefits of automating related decisions.
Is there a single employee crucial to a particular process? Can routine tasks be offloaded to autonomous systems to provide employees with more time? Following this assessment, organizations can discuss how to translate high-risk or high-upside expertise into machine language.
Fortunately, much of the groundwork is often already in place: many organizations have spent years encoding expert knowledge into rules and expert systems, and language models can build on that existing, programmed expertise.
Exploration to Operations
In the coming decade, the market landscape will shift based on organizations' investments in AI. For a cautionary example, consider Netflix, which introduced streaming in 2007, leading to Blockbuster's bankruptcy just three years later, despite Blockbuster's early efforts in the same arena.
When competitors unveil advanced AI applications, it may be too late for others to adapt, especially given the time and skills required to develop robust solutions.
By 2030, companies that choose to react rather than innovate could find themselves irrelevant, akin to Blockbuster's fate.
Instead of waiting for others to catch up, business leaders should proactively explore what unique market positions they can create, prompting competitors to scramble for answers.
In this era of autonomous transformation, organizations that prioritize transferring operational expertise to machines and envisioning future market dynamics will solidify their market positions.
Brian Evergreen is the founder of The Profitable Good Company. This article was developed in collaboration with Ron Norris, Director of Operations Innovation at Georgia-Pacific, and Michael Carroll, VP of Innovation at Georgia-Pacific.