Symbolica Aims to Prevent AI Arms Race with Strategic Focus on Symbolic Models

In February, Demis Hassabis, CEO of Google's DeepMind AI research lab, cautioned against the trend of simply increasing computational power for the AI algorithms commonly in use today, citing the risk of diminishing returns. To reach the “next level” of AI, he emphasized the necessity for groundbreaking research that provides viable alternatives to existing methodologies. Ex-Tesla engineer George Morgan echoes this sentiment, having launched his startup, Symbolica AI, with the mission to redefine AI capabilities.

“Traditional deep learning and generative language models demand immense resources — in terms of scale, time, and energy — to produce meaningful results,” Morgan explained. “By developing novel models, Symbolica can achieve superior accuracy with diminished data needs, reduced training times, and lower costs while ensuring correct structured outputs.”

Morgan, who left college at Rochester to join Tesla’s Autopilot team, realized that scaling current AI methods would not sustain long-term progress. “Contemporary approaches focus on one main solution: increase scale and hope for emergent behavior,” he stated. “However, this means needing ever more compute, memory, funding, and data, which eventually stops yielding proportional performance gains.”

Morgan is not alone in his assessment.

This year, executives at TSMC, the leading semiconductor manufacturer, released a memo suggesting that if AI demand keeps growing at its current rate, the industry could require a chip with 1 trillion transistors, ten times the transistor count of the average chip today, within the next decade. Whether such a chip is feasible remains uncertain.

Additionally, a recent report co-authored by Stanford and Epoch AI, an independent AI research institute, finds that the cost of training advanced AI models has risen sharply. It estimates that OpenAI spent approximately $78 million to train GPT-4, while Google spent around $191 million to train Gemini Ultra.

With expenses expected to rise further, as suggested by OpenAI’s and Microsoft’s rumored plans for a $100 billion AI data center, Morgan began researching “structured” AI models. Rather than approximating patterns from massive datasets, as traditional approaches do, these models encode the inherent structure of the data directly, which lets them perform well with far less compute.

“It’s feasible to create domain-specific structured reasoning capabilities within smaller models,” he stated, “leveraging a deep mathematical toolkit alongside breakthroughs in deep learning.”

Symbolic AI is not a novel concept; it has been around for decades, rooted in the idea of representing knowledge through explicit symbols and rules. Traditional symbolic AI applies hand-written rules to specific tasks, whereas neural networks learn statistical patterns from examples. Morgan argues that blending the two could improve both the efficiency and the capability of AI systems, and Symbolica aims to combine their strengths, a contrast the sketch below makes concrete.
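To see what that contrast looks like in practice, here is a minimal, illustrative Python sketch. It is not Symbolica’s tooling or API; the kinship rule, the toy perceptron, and every name in it are hypothetical, chosen only to show the two paradigms side by side.

```python
# Symbolic approach: knowledge is written down as explicit rules over symbols.
# Rule: grandparent(X, Z) holds if parent(X, Y) and parent(Y, Z).
parent = {("alice", "bob"), ("bob", "carol")}

def grandparents(facts):
    # Join the parent relation with itself on the shared middle person.
    return {(x, z) for (x, y) in facts for (y2, z) in facts if y == y2}

print(grandparents(parent))  # {('alice', 'carol')} -- exact and explainable

# Neural/statistical approach: no rule is ever stated; a one-neuron
# perceptron approximates the logical AND function from labeled examples.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b, lr = [0.0, 0.0], 0.0, 0.1

for _ in range(20):  # a few passes over the data are enough here
    for (x1, x2), target in data:
        out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        err = target - out  # classic perceptron update
        w[0] += lr * err * x1
        w[1] += lr * err * x2
        b += lr * err

# The learned weights now reproduce AND on all four inputs.
print([(x, 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0) for x, _ in data])
```

The symbolic half is exact and self-explanatory, but only because someone wrote the rule; the statistical half needs no rule, yet what it learns is an approximation encoded in opaque weights. Hybrid approaches of the kind Morgan describes aim to keep the transparency of the first while retaining the learning ability of the second.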

Neural networks underpin powerful AI applications like OpenAI’s DALL-E 3 and GPT-4. Yet Morgan argues that scaling alone is not the answer: pairing neural networks with mathematical abstractions, he contends, can encode knowledge more effectively, enabling better reasoning in complex scenarios and making it clearer how models arrive at their conclusions.

“Our models prioritize reliability, transparency, and accountability,” Morgan emphasized. “There are vast commercial possibilities for structured reasoning capabilities, particularly in code generation, where current solutions often fall short.”

Symbolica, which operates with a 16-person team, is developing a toolkit for building symbolic AI models, along with pre-trained models for specific applications such as code generation and theorem proving. Though its business model is still evolving, Morgan indicated that Symbolica may help enterprises develop customized models tailored to their unique requirements.

“Symbolica will collaborate closely with large enterprise partners to design custom structured models with advanced reasoning capabilities,” he noted. “They’ll also create and offer state-of-the-art code synthesis models for major enterprise clients.”

Symbolica officially launched this week, and while it currently has no public clients, Morgan disclosed that the startup secured $33 million in funding earlier this year, led by Khosla Ventures, with additional investments from Abstract Ventures, Buckley Ventures, Day One Ventures, and General Catalyst.

The $33 million round signals strong backing from investors who buy into Symbolica’s vision. Vinod Khosla, founder of Khosla Ventures, stated, “Symbolica is addressing critical challenges in the AI landscape. For large-scale AI adoption and regulatory compliance, we require models with structured outputs that ensure increased accuracy while using fewer resources. George has assembled one of the most talented teams in the industry to tackle these challenges.”

However, some experts remain skeptical about the future of symbolic AI. Os Keyes, a PhD candidate at the University of Washington specializing in law and data ethics, warns that symbolic AI relies on highly structured data, making it potentially fragile and heavily dependent on specific contexts. Well-defined knowledge is essential for its function, and cultivating that knowledge can be labor-intensive.

“This approach may prove valuable if it successfully fuses the advantages of deep learning with symbolic techniques,” Keyes remarked, referencing DeepMind's AlphaGeometry, which combines neural networks with a symbolic approach to tackle complex geometry problems. “Only time will determine its effectiveness.”
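Keyes’ brittleness concern is easy to illustrate with another toy sketch (hypothetical, not drawn from any production system): a purely rule-based classifier is exact within its hand-written rules and helpless one step outside them, until a human writes more rules.

```python
# Hypothetical rule-based intent matcher: exact inside its rules,
# blind outside them until someone curates more knowledge.
RULES = {
    "what time is it": "intent:ask_time",
    "set an alarm": "intent:set_alarm",
}

def classify(utterance: str) -> str:
    # Only exact (normalized) matches fire; there is nothing to generalize.
    return RULES.get(utterance.strip().lower(), "intent:unknown")

print(classify("What time is it"))  # intent:ask_time
print(classify("got the time?"))    # intent:unknown -- same meaning, no rule
```

This is the labor-intensive knowledge curation Keyes describes, and it is exactly the gap that neural components are meant to fill in hybrid systems like AlphaGeometry.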

Morgan countered that the looming limits of current training methods make alternatives like Symbolica’s approach worth pursuing. With several years of funding in hand, he believes the startup is well positioned: its smaller models are cheaper to train and deploy, which makes them practical for corporate applications.

“Automating software development at scale will demand models that possess formal reasoning capabilities and lower operational costs to process extensive codebases and efficiently generate and iterate on effective code,” he explained. “The prevailing belief that ‘scale is all you need’ in AI models is misleading. Adopting a symbolic perspective is crucial for progress; structured, explainable outputs with formal reasoning capabilities are essential to meet future demands.”

Competition in AI remains fierce, and major players like DeepMind could build symbolic or hybrid models of their own. Even so, Morgan is confident in Symbolica’s trajectory, anticipating that its San Francisco-based team could double in size by 2025.
