AI pioneer Yann LeCun ignited a lively discussion today, advising the next generation of developers to steer clear of large language models (LLMs).
“This work is dominated by large companies; there’s little for you to contribute,” LeCun stated at VivaTech in Paris. “Focus instead on next-gen AI systems that transcend the limitations of LLMs.”
The remarks from LeCun, Meta’s chief AI scientist and an NYU professor, quickly prompted questions about the shortcomings of current LLMs. Pressed further on X (formerly Twitter), he elaborated, “I’m developing next-generation AI systems, not LLMs. I’m essentially suggesting, ‘compete with me!’ The more minds tackling this, the better!”
Despite his call to action, many users sought clarity on what constitutes “next-gen AI” and potential alternatives to LLMs.
Developers, data scientists, and AI specialists shared a range of ideas on X, including boundary-driven AI, discriminative AI, multi-tasking, multi-modality, categorical deep learning, energy-based models, purposive small language models, niche use cases, custom fine-tuning, state-space models, and hardware for embodied AI. Some users suggested exploring Kolmogorov-Arnold Networks (KANs), a recently proposed neural network architecture positioned as an alternative to standard multilayer perceptrons.
One user outlined five next-gen AI systems:
- Multimodal AI
- Reasoning and general intelligence
- Embodied AI and robotics
- Unsupervised and self-supervised learning
- Artificial General Intelligence (AGI)
Another advised that every student should master the fundamentals, such as:
- Statistics and probability
- Data wrangling and transformation
- Classical pattern recognition (e.g., naive Bayes, decision trees, random forests)
- Artificial neural networks
- Convolutional neural networks
- Recurrent neural networks
- Generative AI
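To make one of those fundamentals concrete, here is a minimal sketch of a naive Bayes classifier, one of the classical pattern-recognition methods named above, written from scratch in plain Python. The toy weather dataset, feature names, and smoothing choice are illustrative assumptions, not anything from the article; in practice a library such as scikit-learn would be used instead.

```python
from collections import Counter, defaultdict
import math

def train(samples):
    """samples: list of (features_tuple, label).
    Returns class counts and per-(feature, class) value counts."""
    label_counts = Counter(label for _, label in samples)
    feat_counts = defaultdict(Counter)  # key: (feature_index, label)
    for feats, label in samples:
        for i, value in enumerate(feats):
            feat_counts[(i, label)][value] += 1
    return label_counts, feat_counts

def predict(features, label_counts, feat_counts):
    """Pick the class maximizing log P(class) + sum of log P(feature|class)."""
    total = sum(label_counts.values())
    best_label, best_score = None, float("-inf")
    for label, count in label_counts.items():
        score = math.log(count / total)  # log prior
        for i, value in enumerate(features):
            counts = feat_counts[(i, label)]
            # Laplace (add-one) smoothing avoids log(0) for unseen values
            score += math.log((counts[value] + 1) / (count + len(counts) + 1))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Toy dataset: (weather, temperature) -> whether to play outside
data = [
    (("sunny", "hot"), "no"),
    (("sunny", "mild"), "yes"),
    (("rainy", "mild"), "yes"),
    (("rainy", "cool"), "no"),
    (("overcast", "hot"), "yes"),
]
label_counts, feat_counts = train(data)
print(predict(("sunny", "mild"), label_counts, feat_counts))  # → yes
```

The counting-plus-smoothing structure here is the same idea that production implementations optimize; working through it once by hand is what "mastering the fundamentals" looks like in practice.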
Conversely, some argued that the timing is ideal for students to engage with LLMs, as applications are largely unexplored. There remains much to learn about prompting, jailbreaking, and accessibility.
Critics also pointed to Meta's extensive LLM development, suggesting that LeCun's statements were aimed at stifling competition. As one user quipped, “When the head of AI at a large company says, ‘don’t compete,’ it makes me want to compete.”
LeCun, a proponent of objective-driven AI and open-source systems, stated in a recent Financial Times interview that LLMs lack logical reasoning and will never attain human-level intelligence. “They do not understand the physical world, lack persistent memory, cannot reason meaningfully, and cannot plan hierarchically,” he asserted.
Meta recently introduced its Video Joint Embedding Predictive Architecture (V-JEPA), designed to recognize and understand intricate object interactions. This innovation aligns with LeCun’s vision for advanced machine intelligence (AMI).
Many industry experts echo LeCun’s sentiment regarding the limitations of LLMs. The account of the AI chat app Faune described his insights as “awesome,” noting that closed-loop systems suffer from significant rigidity. “The creator of an AI that can learn and adapt like a human will likely earn a Nobel Prize,” they stated.
Others pointed out the industry's “overemphasis” on LLMs, deeming them a dead end for true advancement. Some have even labeled LLMs as mere connective tools that efficiently link systems, akin to telephone switch operators.
LeCun is no stranger to controversy. He has engaged in intense debates with fellow AI pioneers Geoffrey Hinton, Andrew Ng, and Yoshua Bengio over the existential risks posed by AI, often arguing that these concerns are exaggerated.
One commentator recalled a recent interview with Hinton, who championed an all-in approach to LLMs, asserting close parallels between how human brains and AI models work. “It’s fascinating to observe such a fundamental disagreement,” the user remarked.
This clash of perspectives is unlikely to be resolved anytime soon.