How to Train Your Workforce to Think Like AI Professionals

If you’ve ever caught yourself smiling at a rock that happens to resemble a face, you’re not alone.

As humans, we often assign human-like traits to objects, a phenomenon known as anthropomorphism, which is increasingly relevant in our interactions with AI.


Anthropomorphism can manifest when we say “please” and “thank you” to chatbots, or when we express admiration for generative AI outputs that meet our expectations. The real trouble starts when we expect AI to carry its performance on simple tasks, like summarizing an article, over to more complex ones, such as digesting an anthology of scientific papers. Similarly, when AI answers a question about Microsoft’s earnings and we then expect it to conduct market research across earnings transcripts from multiple companies, we set ourselves up for disappointment.

These tasks, while seemingly similar, are fundamentally different for AI models. As Cassie Kozyrkov explains, “AI is as creative as a paintbrush.” The primary obstacle to productivity with AI is not the technology itself but our skill in wielding it as a tool.
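The gap is easier to see in code. Below is a minimal sketch of why those two requests are different problems under the hood. The `complete()` stub is a hypothetical stand-in for whatever LLM API you actually use, and the context limit and token heuristic are illustrative, not tied to any real model.

```python
# Sketch: why "summarize an article" and "digest an anthology" are
# different problems for an LLM.

CONTEXT_LIMIT_TOKENS = 8_000  # illustrative context window


def complete(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM call."""
    raise NotImplementedError("wire this to your LLM provider")


def rough_tokens(text: str) -> int:
    return len(text) // 4  # crude ~4 chars/token heuristic for English


def chunk(text: str, max_tokens: int) -> list[str]:
    step = max_tokens * 4  # characters per chunk under the heuristic
    return [text[i : i + step] for i in range(0, len(text), step)]


def summarize(document: str) -> str:
    if rough_tokens(document) <= CONTEXT_LIMIT_TOKENS:
        # A single article: one call, one answer.
        return complete(f"Summarize this article:\n\n{document}")
    # An anthology: the "same" request quietly becomes a multi-step
    # pipeline (chunk, summarize each piece, synthesize), and every
    # step is a new opportunity for errors to compound.
    partials = [complete(f"Summarize:\n\n{c}")
                for c in chunk(document, CONTEXT_LIMIT_TOKENS)]
    return complete("Synthesize one summary from these notes:\n\n"
                    + "\n\n".join(partials))
```

The point isn’t the specific numbers: it’s that the “harder” request silently turns one model call into a pipeline of them, and expectations calibrated on the simple case won’t survive the complex one.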

Anecdotally, some clients who rolled out Microsoft Copilot licenses later reduced the number of seats because users didn't find them valuable. This mismatch stems from unrealistic expectations about AI's capabilities versus the realities of its performance. We’ve all experienced that moment of realization: "Oh, AI isn’t good for that."

Instead of abandoning generative AI, we can cultivate the intuition needed to better understand AI and machine learning, while avoiding the pitfalls of anthropomorphism.

Defining Intelligence and Reasoning in Machine Learning

Our definition of intelligence has always been ambiguous. When a dog begs for treats, is that intelligent behavior? When a monkey uses a tool, does that display intelligence? Similarly, when computers perform these tasks, can we deem them intelligent?

Until recently, I believed that large language models (LLMs) could not genuinely "reason." However, a recent discussion with trusted AI founders pointed to a way of cutting through the ambiguity: a rubric for assessing levels of reasoning in AI.

Just as we have rubrics for reading comprehension and quantitative reasoning, introducing an AI-specific rubric could help convey the expected reasoning capabilities of LLM-powered solutions, along with examples of what is unrealistic.
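To make the idea concrete, here is an illustrative sketch of what such a rubric might look like. The levels, names and example mappings below are hypothetical placeholders, not an established standard; the value is in making expectations explicit enough to compare against what a tool has actually demonstrated.

```python
# Illustrative sketch of an AI reasoning rubric. Levels and examples
# are hypothetical placeholders, not an established standard.

from enum import IntEnum


class ReasoningLevel(IntEnum):
    RECALL = 1       # retrieve or restate facts from provided text
    SUMMARIZE = 2    # condense a single document it can see in full
    SYNTHESIZE = 3   # combine several sources into one coherent answer
    ANALYZE = 4      # draw comparisons and inferences across sources
    INVESTIGATE = 5  # open-ended research with self-directed steps


# Map a business request to the level it actually demands, so teams
# can check it against what a given tool has demonstrated it can do.
EXAMPLES = {
    "Summarize this earnings call": ReasoningLevel.SUMMARIZE,
    "Compare Microsoft's earnings to its peers": ReasoningLevel.ANALYZE,
    "Run market research across the sector": ReasoningLevel.INVESTIGATE,
}

for task, level in EXAMPLES.items():
    print(f"{level.name:<12} {task}")
```

Even a rough mapping like this turns a vague complaint ("the AI isn’t smart enough") into a concrete mismatch between the level a task demands and the level a tool reliably delivers.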

Unrealistic Expectations of AI

Humans tend to be far more forgiving of errors made by people than of errors made by machines. Although self-driving cars are statistically safer than human drivers, their accidents provoke significant outcry. The same dynamic amplifies our disappointment when AI fails at tasks we would expect a person to manage easily.

Many describe AI as a vast army of "interns," yet machines can falter in ways that humans do not, even while outperforming them in various areas.

As a result, fewer than 10% of organizations successfully develop and implement generative AI projects. Misalignment with business value and unanticipated data-curation costs further complicate these initiatives.

To overcome these hurdles and achieve project success, it's essential to equip AI users with the intuition needed to know when and how to use AI effectively.

Training to Build Intuition with AI

Training is crucial for adapting to the rapidly evolving AI landscape and redefining our understanding of machine learning intelligence. While the term "AI training" can feel vague, it can be categorized into three key areas:

1. Safety: Learning to use AI responsibly and avoid emerging AI-enhanced phishing schemes.

2. Literacy: Understanding what AI can do, what to expect from it, and potential pitfalls.

3. Readiness: Mastering the skillful and efficient use of AI-powered tools to enhance work quality.

AI safety training protects your team the way knee and elbow pads protect a new cyclist: it may prevent some scrapes, but it won't prepare them for more challenging terrain. AI readiness training, by contrast, empowers your team to get the most out of AI and machine learning.

The more opportunities you provide your workforce to interact safely with generative AI tools, the more adept they’ll become at recognizing what works and what doesn’t.

While we can only speculate about the capabilities that will emerge in the next year, being able to connect them to a defined rubric of reasoning levels will better prepare your workforce for success.

The goal is a workforce that knows when to say “I don’t know,” when to seek assistance and, most importantly, when a problem is beyond the scope of a particular AI tool.

Cal Al-Dhubaib is the head of AI and data science at Further.

