Reassessing the AI Hype: Why It's Crucial to Ground Our Expectations in Reality

For the past 18 months, I've closely followed the expanding dialogue surrounding large language models (LLMs) and generative AI. The hype has proliferated to the point of overshadowing practical reality: it overstates what current AI tools can do while obscuring both their significant limitations and the ways they can actually be used for meaningful results.

We remain in the early stages of AI development. While popular tools like ChatGPT are entertaining and somewhat useful, they should not be relied upon for tasks where accuracy matters. Their outputs are often marred by the inaccuracies and biases present in the human-generated data they were trained on, and much of what reads as intelligence is human projection onto a statistical process. The so-called "hallucinations" are simply that process showing through.

Real challenges persist, such as AI's soaring energy consumption, which poses a risk to the climate. A recent report indicated that Google's AI requires up to 30 times more energy to generate an answer than to simply retrieve it from existing sources. By one common comparison, a single ChatGPT query consumes about as much electricity as a 60W light bulb burning for three minutes.
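The bulb comparison is easy to sanity-check with arithmetic. Here is a minimal sketch, taking the 60W-for-three-minutes figure at face value; the daily query volume is a made-up placeholder to show how the numbers scale, not a reported statistic:

```python
# Back-of-the-envelope check: a 60 W bulb running for three minutes
# uses 60 W x 180 s = 10,800 J = 3 Wh.
BULB_WATTS = 60
SECONDS = 3 * 60

joules = BULB_WATTS * SECONDS      # 10800 J
watt_hours = joules / 3600         # 3.0 Wh per query, per the comparison

# Scaling is where it bites. This query volume is an assumption
# for illustration only, not a reported number.
ASSUMED_QUERIES_PER_DAY = 10_000_000
daily_kwh = watt_hours * ASSUMED_QUERIES_PER_DAY / 1000

print(watt_hours)  # 3.0
print(daily_kwh)   # 30000.0
```

Three watt-hours sounds trivial per query; it is the multiplication by query volume that turns it into a climate question.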

A colleague recently argued, in all sincerity, that AI would render high school education obsolete within five years, invoking Ray Kurzweil's concept of the "AI Singularity" to predict an egalitarian future free from menial labor. I would wager that it will take significantly longer, perhaps 25 years or more, to get from the current state of generative AI to a world in which daily chores, like loading a dishwasher, become unnecessary.

Three intractable challenges remain in generative AI. If anyone assures you these issues will be resolved, they are either naïve or have something to sell. Their optimism mirrors the empty promises once made about cryptocurrency, autonomous vehicles, or the metaverse: capture your attention now in the hope of profiting later, often leaving disappointed investors in their wake.

Three Intractable Issues

1. Hallucinations

There isn't enough computational power or training data available to eradicate hallucinations. Generative AI can produce outputs that are factually incorrect or nonsensical, rendering it unreliable for tasks requiring precision. As Google CEO Sundar Pichai states, hallucinations are an "inherent feature" of generative AI, which means we can only work to reduce their potential harm.

2. Non-Deterministic Outputs

Generative AI operates probabilistically, meaning its responses can vary significantly each time the same question is posed. This inconsistency poses challenges in fields like software development or scientific analysis, where repeatability matters. For example, an AI might help you generate a good testing strategy for a mobile app feature, yet repeating the same prompt could yield a completely different plan.
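The variability comes from sampling. Here is a toy sketch of why identical prompts diverge, using an invented next-token distribution rather than any real model's probabilities:

```python
import random

# Toy next-token distribution for a prompt like "The test should ..."
# The tokens and probabilities are invented for illustration.
NEXT_TOKEN_PROBS = {
    "cover": 0.4,
    "mock": 0.3,
    "skip": 0.2,
    "fail": 0.1,
}

def sample_token(rng: random.Random) -> str:
    """Draw one token by weighted sampling, as temperature-based decoding does."""
    tokens = list(NEXT_TOKEN_PROBS)
    weights = list(NEXT_TOKEN_PROBS.values())
    return rng.choices(tokens, weights=weights, k=1)[0]

def greedy_token() -> str:
    """Always pick the highest-probability token (deterministic)."""
    return max(NEXT_TOKEN_PROBS, key=NEXT_TOKEN_PROBS.get)

# Two runs with different random states can diverge; greedy decoding never does.
print(sample_token(random.Random(1)))
print(sample_token(random.Random(7)))
print(greedy_token())  # "cover" every time
```

Greedy decoding would make outputs repeatable, but in practice hosted models sample, so the same prompt can land on any plausible continuation.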

3. Token Costs

Tokens are fundamental to how LLMs operate but are often misunderstood. Each query is split into tokens, and the model generates its response one token at a time; every token processed costs compute. A large portion of the investment in the generative AI sector is aimed at driving these operational costs down to promote widespread adoption. For example, ChatGPT reportedly generates approximately $400,000 in daily revenue while requiring an additional $700,000 in subsidies to remain operational. This is classic "loss leader" pricing, much like Uber's early fares before they rose to match conventional taxi rates.
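Because providers bill per token, the economics above reduce to simple arithmetic. Here is a minimal sketch of per-request cost estimation; the per-1,000-token prices are placeholder assumptions, not any vendor's actual rates:

```python
def estimate_cost_usd(prompt_tokens: int, completion_tokens: int,
                      price_in_per_1k: float = 0.01,
                      price_out_per_1k: float = 0.03) -> float:
    """Rough per-request cost. Providers typically bill input and output
    tokens separately, priced per 1,000 tokens. The default prices here
    are illustrative placeholders, not real vendor rates."""
    input_cost = (prompt_tokens / 1000) * price_in_per_1k
    output_cost = (completion_tokens / 1000) * price_out_per_1k
    return input_cost + output_cost

# A 500-token prompt with a 1,500-token answer under these example rates:
print(round(estimate_cost_usd(500, 1500), 4))  # 0.05
```

A nickel per request sounds small until multiplied by millions of requests a day, which is why so much capital is chasing cheaper inference.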

What Works

Recently, I wrote a script to extract data from our CI/CD pipeline and upload it to a data lake. With the help of ChatGPT, I reduced what would have taken my moderate Python skills eight to ten hours to under two—an 80% productivity boost! As long as I verify the information and am not concerned about needing identical responses every time, ChatGPT proves to be a valuable asset.

Generative AI excels in brainstorming, providing tutorials on specialized topics, and drafting complex emails. I anticipate that these capabilities will improve and act as an extension of my abilities in the future. This enhances productivity sufficiently to justify much of the development invested in the technology.

Conclusion

While generative AI can assist with specific tasks, it does not justify the trillions of dollars in valuation premised on a wholesale reinvention of human work. The companies applying AI most successfully, like Grammarly and JetBrains, thrive in contexts where answers are naturally cross-checked or where multiple valid solutions exist.

I believe our investment in LLMs, spanning time, money, effort, and anticipation, far exceeds the returns we will ultimately see. The challenge lies in moving past a growth-at-all-costs mentality and recognizing generative AI for what it is: a powerful tool capable of boosting productivity by around 30%. In an ideal world, that alone would merit a sustainable market.

Marcus Merrell is a principal technical advisor at Sauce Labs.
