AI Networking Chip Startup, Enfabrica, Receives $125 Million in Series B Funding

September 15, 2023 by Duane Benson

With Nvidia now added to its list of strategic investors, Enfabrica has opened pre-orders on systems using its AI-era fabric switch IC.

Hardware venture capital activity for 2023 continues with a $125 million Series B round for AI infrastructure networking startup Enfabrica. The company has been busy since it came out of stealth mode in March. Along with the funding announcement, Enfabrica opened pre-orders for its 8 Terabit/s ACF-S switch systems.

Moore’s Law states that the number of transistors in an integrated circuit, and by extension computing performance, doubles about every two years. In the real world, those performance increases are tempered by the corollary that computing-power demand also doubles. With the recent explosion of generative AI and the accompanying increase in computing power needs, that corollary is in full force.

That’s where the founders of startup Enfabrica hope to make their mark. Their Accelerated Compute Fabric Switch (ACF-S) is designed to break bandwidth limitations in switching technology between CPU, GPU, memory, and backbone. All About Circuits recently spoke with Enfabrica CEO and co-founder Rochan Sanker about the company's product, funding, and future.


While Nvidia’s GPU technology is a part of Enfabrica’s story, Nvidia has also joined as a strategic investor. Image used courtesy of Enfabrica


Oversubscribed in a Difficult Funding Environment

Enfabrica’s product is arriving at just the right time. As an enabling technology for AI computing systems, it promises to be a valuable part of AI infrastructure scaling. The $125 million round gives Enfabrica five times the valuation it had after closing its Series A.

That’s a remarkable increase given the current tight capital market. The oversubscribed Series B round was led by Atreides Management and included funds from Sutter Hill Ventures and Nvidia.

Nvidia has a lot to gain with this investment. Its GPUs are heavily used for AI processing, and demand is only expected to increase with further cloud AI moves. With Enfabrica's technology improving power utilization across networked accelerators, Nvidia can deliver even more capability to compute-hungry server farms.


Solving the Data Bottleneck Without New Layers or APIs

According to Sanker, computing architecture was once built largely around the central processing unit, the CPU. The CPU was in charge and quarterbacked the whole process. Today, however, especially with AI-driven advanced computing needs, much of the computing power comes in the form of accelerators: graphics processing units (GPUs) and tensor processing units (TPUs). The GPUs largely come from Nvidia, and the TPU is Google's AI accelerator.



Accelerated Compute Fabric (ACF) Switch topology. Image used courtesy of Enfabrica


The amount of data that a GPU or TPU can process today far exceeds the ability to move that data in or out of the accelerator, which makes I/O the limiting factor. Simply adding more AI GPU chips leaves you with underutilized accelerators: they still draw electrical current, but their processing power can't be fully used.
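As a back-of-the-envelope illustration of that point (the bandwidth figures below are hypothetical, not Enfabrica's or Nvidia's specifications), an accelerator's achievable utilization can be modeled as the ratio of the I/O bandwidth feeding it to the bandwidth its compute could consume:

```python
# Hypothetical back-of-the-envelope model: an accelerator's achievable
# utilization is capped by the I/O bandwidth feeding it.
def accelerator_utilization(io_gbps: float, compute_demand_gbps: float) -> float:
    """Fraction of the accelerator's compute that the I/O path can keep fed."""
    return min(1.0, io_gbps / compute_demand_gbps)

# Example: a 200 Gbit/s NIC feeding an accelerator that could consume
# 1,600 Gbit/s leaves it only 12.5% utilized -- drawing power, mostly idle.
print(accelerator_utilization(200, 1600))  # 0.125
```

Under this simple model, raising I/O bandwidth, rather than adding accelerators, is what recovers the stranded compute.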



Traditional memory-bound AI vs. hub-and-spoke dynamic dispatch. Image used courtesy of Enfabrica


Enfabrica’s core product, the Accelerated Compute Fabric Switch (ACF-S), is designed to tackle the bottleneck at the interconnect level without adding new layers or APIs. “That's the fundamental differentiation for Enfabrica,” said Sanker. “We're taking what used to be or what is today a 100 Gbit/s or 200 Gbit/s element in the network interface controller and we're making it an 8 Terabit/s element.”
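For scale, the jump Sanker describes can be worked out directly from the figures he quotes (8 Terabit/s is 8,000 Gbit/s; this is just arithmetic on those numbers):

```python
# Bandwidth jump from a conventional NIC element to the 8 Tbit/s ACF-S element.
ACF_S_GBPS = 8_000  # 8 Terabit/s expressed in Gbit/s

for nic_gbps in (100, 200):
    print(f"{nic_gbps} Gbit/s NIC -> {ACF_S_GBPS // nic_gbps}x the bandwidth")
# 100 Gbit/s NIC -> 80x the bandwidth
# 200 Gbit/s NIC -> 40x the bandwidth
```

In other words, a 40× to 80× increase over today's typical network interface controller element.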

The ACF-S connects all the processing resources, whether CPU, GPU, TPU, memory, or storage, in a flat, hub-and-spoke topology, networking them together and giving the processing units full access to all of the data they need to move in and out.


Building Scalability for AI and Other High-Compute Needs

The need for the Enfabrica product becomes even clearer when looking beyond the computing component to the industry as a whole. Sanker explained that the industry is already running into physical footprint limits. A GPU consumes roughly 3× the power of a CPU, so a GPU-heavy, AI-capable system ends up with a rack-space footprint 3× to 4× that of a traditional CPU-based system.

Sanker says that Enfabrica equipment effectively cuts the I/O portion of the power by 50%.


“It can lead to reducing an AI level compute system with many GPUs from something close to 4 kW to about 2 kW, and that's in every single rack this is deployed in.”


The increase in utilization can allow for a reduction in the number of devices, too. For example, instead of twelve rack devices dedicated to I/O, with ACF-S that could drop to two or three.
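Putting the quoted figures together, the rack-level arithmetic looks like this (all inputs are the article's round numbers, used here purely as an illustration, not a measurement):

```python
# Per-rack power reduction from the "close to 4 kW ... to about 2 kW" quote.
rack_power_before_kw = 4.0
rack_power_after_kw = 2.0
reduction = 1 - rack_power_after_kw / rack_power_before_kw
print(f"Per-rack power reduction: {reduction:.0%}")  # 50%

# I/O device consolidation: twelve dedicated I/O devices down to two or three.
io_devices_before = 12
io_devices_after = 3  # upper end of "two to three"
print(f"I/O devices per rack: {io_devices_before} -> {io_devices_after}")
```

Scaled across every rack in a data center, those per-rack savings are where the claimed infrastructure-level impact comes from.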



Newly announced Enfabrica ACF-S system. Image used courtesy of Enfabrica


The Need Is Bigger Than One Player

With Series B funding in place, Enfabrica will be expanding R&D, engineering, operations, and sales and marketing. But Enfabrica is not the only company targeting AI infrastructure hardware; companies such as Cisco, Broadcom, and Marvell are expanding their I/O capacity as well.