Enfabrica Secures $125M Funding to Enhance AI Workload Networking Hardware

Enfabrica, a startup specializing in networking chips optimized for AI and machine learning workloads, announced today that it has raised $125 million in a Series B funding round. The new round values the company at five times its Series A post-money valuation, according to co-founder and CEO Rochan Sankar.

The Series B round, led by Atreides Management, also saw contributions from Sutter Hill Ventures, Nvidia, IAG Capital Partners, Liberty Global Ventures, Valor Equity Partners, Infinitum Partners, and Alumni Ventures. With this round, Enfabrica's total funding has now reached $148 million. Sankar stated that these funds will enhance the company’s R&D and operational capabilities while allowing for growth in engineering, sales, and marketing teams.

"It's remarkable that Enfabrica attracted such a significant investment during a challenging funding climate for chip startups and deep tech overall. This achievement distinguishes us from many peers in the chip industry," Sankar remarked. "As generative AI and large language models catalyze unprecedented infrastructure demands in cloud computing across various sectors, solutions like ours are poised to address the growing need for advanced networking technologies."

Enfabrica emerged from stealth mode in 2023, but its story began in 2020, when Sankar, a former engineering director at Broadcom, teamed up with Shrijeet Mukherjee, former head of networking platforms at Google. The two founded Enfabrica to address the AI industry's growing demand for versatile, accelerated, and heterogeneous infrastructure, particularly built around GPUs.

"We believe that networking silicon and systems must adapt to this paradigm shift to support massive-scale compute infrastructure," Sankar explained. "The main challenge posed by the ongoing AI revolution is scaling AI infrastructure regarding both computing costs and sustainability."

With Sankar serving as CEO and Mukherjee as Chief Development Officer, Enfabrica has assembled a team of talented engineers from leading companies such as Cisco, Meta, and Intel. They have designed a unique architecture for networking chips that meets the input/output and memory movement requirements necessary for parallel workloads like AI.

Sankar contends that traditional networking chips, including switches, are ill-equipped to handle the data flow demands of contemporary AI workloads. Training models like Meta’s Llama 2 and GPT-4 requires vast datasets, and network switches can create significant bottlenecks during this process.

"A substantial part of the scaling issue and bottleneck in the AI sector is linked to the I/O subsystems, memory movement, and the networking connected to GPU compute,” he observed. “There’s an urgent need to align the rising AI workload demands with the overall costs, efficiency, sustainability, and scalability of the compute clusters that support them."

In its mission to deliver cutting-edge networking hardware, Enfabrica emphasizes parallelizability. The firm’s flagship product, the Accelerated Compute Fabric Switch (ACF-S), moves data at multi-terabit-per-second rates between GPUs, CPUs, AI accelerators, and other networking devices. Built on standards-based interfaces, the hardware can scale to tens of thousands of nodes, reportedly reducing the GPU compute needed for large language models like Llama 2 by approximately 50% without sacrificing performance.

"Enfabrica’s ACF-S devices enhance the capabilities of GPUs, CPUs, and accelerators by providing efficient, high-performance networking and memory solutions within data center server racks," Sankar explained. "The ACF-S serves as a converged solution that eliminates the necessity for traditional disparate server networking chips."

The ACF-S networking hardware is designed to maximize efficiency for companies that focus on inferencing—running trained AI models—by significantly optimizing data movement. This allows organizations to minimize their reliance on numerous GPUs, CPUs, and other AI accelerators. "Our ACF-S is processor-agnostic and supports various AI computation engines and models, enabling flexibility across numerous use cases and facilitating collaboration with multiple processor vendors, free from proprietary constraints," he added.

While Enfabrica has made substantial progress, it faces competition from other startups in the networking chip arena. Recently, Cisco unveiled its Silicon One G200 and G202 hardware aimed at supporting AI networking workloads. Established players like Broadcom and Marvell offer switches capable of delivering up to 51.2 terabits per second, with Broadcom also launching the Jericho3-AI high-performance fabric to connect up to 32,000 GPUs.

Though details about specific customers remain under wraps—given the company’s early stages—Sankar emphasized that part of the new funding will facilitate production and market entry efforts. He maintains that Enfabrica is well-positioned amid the current enthusiasm for AI infrastructure, as significant investments in this area continue.

According to the Dell’Oro Group, AI infrastructure investments will push data center capital expenditures beyond $500 billion by 2027. Furthermore, IDC forecasts a compound annual growth rate of 20.5% in investment for AI-focused hardware over the next five years.

"The cost and power efficiency of AI compute—whether on-premises or in the cloud—should be a primary concern for every CIO, C-suite executive, and IT organization deploying AI services," he emphasized. "Despite the economic headwinds affecting the tech startup landscape since late 2022, Enfabrica has advanced its funding, product development, and market positioning through innovative and disruptive technology in networking and server I/O chips, capitalizing on the tremendous market opportunity presented by the surge in generative AI and accelerated computing in recent months."

Based in Mountain View, Enfabrica currently employs over 100 team members across North America, Europe, and India.
