Classic AI Proves Its Worth Amid the Emergence of LLMs

Think back to last November, just before ChatGPT arrived: machine learning primarily meant building models to solve specific tasks, such as loan approvals or fraud detection. That paradigm shifted with the emergence of generalized large language models (LLMs), yet generalized models are not a one-size-fits-all solution. Task-based models continue to thrive within enterprises.

These task-specific models have long constituted the backbone of enterprise AI, and they remain indispensable. As Amazon CTO Werner Vogels noted in his recent keynote, this approach represents “good old-fashioned AI,” and it still effectively addresses numerous real-world challenges.

Atul Deo, general manager of Amazon Bedrock, which launched this year to provide API access to a range of LLMs, shares that perspective. He emphasizes that task-specific models have not vanished; rather, they remain vital tools in the AI toolkit. “Before large language models became prevalent, we mostly operated in a task-specific domain where models were trained from the ground up for individual tasks,” Deo explained. The distinction, as he frames it, is that a task model is built for one narrowly defined objective, while an LLM can be applied to problems beyond what it was explicitly trained for.
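To make the API access Deo describes concrete, here is a minimal sketch of calling an LLM through Bedrock with the boto3 SDK. The model ID, region, and prompt are illustrative assumptions; the same pattern applies to whichever models are enabled for a given account.

```python
import json

import boto3

# Assumes AWS credentials are configured and the chosen model is
# enabled for this account; "anthropic.claude-v2" and us-east-1
# are illustrative choices, not requirements.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Claude-style text-completion payload: a Human/Assistant turn.
body = json.dumps({
    "prompt": "\n\nHuman: Summarize why a bank might flag a wire transfer as suspicious.\n\nAssistant:",
    "max_tokens_to_sample": 300,
})

response = client.invoke_model(
    modelId="anthropic.claude-v2",
    contentType="application/json",
    accept="application/json",
    body=body,
)

# The response body is a stream containing the model's JSON output.
print(json.loads(response["body"].read())["completion"])
```

Swapping the prompt, or the model ID, is all it takes to point the same call at a different use case, which is the reusability argument Deo makes below.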

Jon Turow, a partner at the investment firm Madrona and a former AWS executive, argues that the discussion around LLMs now encompasses new capabilities like reasoning and adaptability beyond their initial scope. “These advancements allow for an extension beyond the originally defined tasks,” he noted, but cautioned that the limits of these capabilities remain a topic of debate.

Like Deo, Turow insists that task-oriented models are here to stay. “Task-specific models offer advantages—they are typically smaller, faster, more cost-effective, and often outperform broader models because they are tailored for specific applications,” he mentioned.
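For contrast, a task-specific model of the kind Turow describes can be a handful of lines of scikit-learn. This is a minimal sketch assuming a tabular fraud-detection setup; the synthetic data stands in for real transaction features and is purely for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Placeholder data standing in for real transaction features
# (amount, merchant category, account age, and so on).
rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 8))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=10_000) > 1.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# A small model trained from the ground up for exactly one task:
# flagging likely-fraudulent transactions.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```

A model like this runs in milliseconds on commodity hardware and its behavior is straightforward to audit, which illustrates the smaller, faster, cheaper tradeoff Turow highlights.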

However, the appeal of a versatile model is compelling. “For companies managing numerous machine learning models independently, the overhead becomes impractical,” Deo said. “Opting for a robust large language model not only enhances reusability but also enables a single model to address diverse use cases efficiently.”

Amazon's SageMaker platform remains pivotal in this landscape; where Bedrock targets developers, SageMaker is aimed at data scientists. The platform serves tens of thousands of customers who have built millions of models, which makes it essential to retain even as LLMs dominate the conversation. Enterprise software, by its nature, resists rapid shifts; organizations do not abandon substantial investments simply because a new technology emerges, even one as transformative as LLMs. Notably, Amazon also announced enhancements to SageMaker this week specifically designed for managing large language models.

Before the rise of advanced LLMs, task models were the only game in town, and companies needed dedicated teams of data scientists to build them. What role remains for data scientists in an era dominated by LLMs? Turow believes they will retain significant responsibilities, even in LLM-centric organizations.

“They will continue to think critically about data, which is a role that is actually expanding,” he asserted. Regardless of the model employed, Turow believes data scientists will play a crucial part in deciphering the connection between AI and data in large organizations. “Each of us must critically examine what AI can and cannot achieve, and the implications of data,” he emphasized. This understanding is vital, irrespective of whether one is developing a generalized LLM or a task-oriented model.

This dual paradigm of task-specific and generalized models is likely to coexist for the foreseeable future. Sometimes, a larger model offers advantages, while other times, a tailored approach proves more effective.
