News

EnCharge AI Emerges from Stealth With New Take on In-memory Compute

December 21, 2022 by Chantelle Dubois

The startup is touting highly efficient AI computing that leverages in-memory computing and eases integration with AI frameworks.

EnCharge AI recently announced a $21.7 million Series A financing round to advance its AI hardware accelerators, with funding secured from investment firms Anzu Partners, AlleyCorp, Scout Ventures, Silicon Catalyst Angels, Schams Ventures, E14 Fund, and Alumni Ventures.

EnCharge AI is promising high efficiency, with test chips achieving 150 TOPS/W for 8-b compute; seamless hardware-software integration with popular AI frameworks like PyTorch and TensorFlow; and 20x higher performance per watt and 14x higher performance per dollar relative to comparable AI accelerators.
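For readers wondering what framework-level integration typically entails, the sketch below shows a common vendor flow in Python: export a trained PyTorch model to a portable format, then compile and run it with a vendor toolchain. EnCharge AI has not published its SDK, so the encharge module and its compile/run calls are hypothetical placeholders; only the export step uses real PyTorch APIs.

import torch
import torchvision.models as models

model = models.resnet18(weights=None).eval()
example = torch.randn(1, 3, 224, 224)

# Step 1: export the trained PyTorch model to a portable format (real API).
torch.onnx.export(model, example, "resnet18.onnx")

# Step 2 (hypothetical vendor toolchain): compile the exported graph for the
# accelerator and run inference on-device. EnCharge AI's actual SDK, module
# names, and targets are not public; "encharge" is a placeholder.
# import encharge
# engine = encharge.compile("resnet18.onnx", target="imc-v1")
# output = engine.run(example.numpy())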

 

EnCharge AI founders, from left to right: Kailash Gopalakrishnan, Ph.D., Echere Iroaga, Ph.D., and Naveen Verma, Ph.D.

 

The company got its start as a recipient of R&D funding through DARPA’s 2017 Electronics Resurgence Initiative (ERI). The initiative’s objective is to advance microelectronic design within the US and to improve the security of, and access to, leading-edge electronics for the defense industry and the Department of Defense.

“Accelerating innovation in artificial intelligence hardware to make decisions at the edge faster” was among the focuses of the ERI’s investments.

The team leading EnCharge AI includes:

  • Naveen Verma, professor of electrical and computer engineering at Princeton University, whose research focuses on emerging technologies
  • Kailash Gopalakrishnan, a former IBM Fellow who led global efforts on AI hardware and software
  • Echere Iroaga, former vice president and general manager of MACOM’s connectivity business unit

 

In-memory Computing

In-memory computing (IMC) appears to be a key element in EnCharge AI’s ability to deliver on its promises of efficiency and low power. The company lists four publications on its website, from 2019, 2020, and 2021, that demonstrate the evolution of its research on improving IMC for use in AI acceleration.

Their earliest publication identifies that machine-learning computation relies heavily on matrix-vector multiplication (MVM), and that while digital accelerators were delivering 10-100x improvements in energy efficiency and speed relative to general-purpose processors, those gains came mostly from computation rather than memory access, leaving a so-called “memory wall”: moving data to and from memory continued to carry a high cost in both energy and time.
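A quick back-of-envelope calculation makes the memory wall concrete: a matrix-vector multiply performs only about two arithmetic operations per byte fetched, so its speed is capped by memory bandwidth rather than by compute. The Python snippet below assumes 8-bit operands and an arbitrary 4,096 x 4,096 layer.

# Arithmetic intensity of an N x N matrix-vector multiply (MVM),
# assuming 8-bit operands (1 byte each).
N = 4096

flops = 2 * N * N            # one multiply and one add per weight
bytes_moved = N * N + 2 * N  # weights read once, plus the input and output vectors

intensity = flops / bytes_moved
print(f"Arithmetic intensity: {intensity:.2f} ops/byte")  # ~2 ops/byte

At roughly two operations per byte, an MVM leaves most of a digital accelerator’s arithmetic idle while it waits on memory, which is precisely the cost IMC sidesteps by computing where the weights are stored.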

 

The programmable, bit-scalable IMC architecture: (a) the heterogeneous microprocessor architecture, and (b) software libraries for neural-network training and inference.

 

IMC comes with a trade-off, however: the reductions in energy and latency are paid for with a worse signal-to-noise ratio (SNR) when reading memory bit lines. This SNR issue has made it challenging to scale IMC into the heterogeneous systems most likely to be used in real-world applications.
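The toy Monte Carlo below, an illustration of the general effect rather than a reproduction of EnCharge AI’s analysis, shows one way this plays out: a bit line has a fixed voltage swing, so the accumulated dot product is effectively normalized by the number of rows summed, and a fixed readout noise floor erodes SNR as the array grows. The noise figure is an assumed value.

import numpy as np

rng = np.random.default_rng(0)
sigma_read = 0.005  # assumed fixed readout noise, as a fraction of full swing

for rows in (64, 256, 1024):
    x = rng.integers(0, 2, size=(10_000, rows))  # random binary inputs, weights = 1
    signal = x.mean(axis=1)                      # fixed-swing bit-line level
    noisy = signal + rng.normal(0, sigma_read, size=10_000)
    snr_db = 10 * np.log10(signal.var() / (noisy - signal).var())
    print(f"{rows:5d} rows: SNR ~ {snr_db:.1f} dB")  # SNR drops as rows grow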

In 2020, the research team worked toward resolving the SNR problem, producing a programmable, heterogeneous architecture and accompanying software stack that takes advantage of charge-domain IMC. The prototype was fabricated in 65 nm CMOS.

 

Shown here is the prototype system: die image of the microprocessor in 65 nm CMOS (left) and PCB for chip testing and application demonstration (right).

 

In 2021, the team introduced capacitor-based analog computation to extend the dynamic range from binary vector inputs to 5-b vector inputs and made advances in algorithm co-design to improve memory mapping.
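The bit-serial idea behind scaling input precision can be sketched in a few lines of Python: a 5-b input vector is split into binary bit planes, each plane is applied to the array as a binary MVM, and the partial results are recombined with powers of two. This is a standard technique in the IMC literature, shown here only as an illustration; EnCharge AI’s capacitor-based analog implementation differs in its circuit details.

import numpy as np

rng = np.random.default_rng(0)
W = rng.integers(-1, 2, size=(8, 16))  # small example weight matrix
x = rng.integers(0, 32, size=16)       # 5-bit unsigned input vector

result = np.zeros(8, dtype=np.int64)
for b in range(5):                     # one binary pass per bit plane
    plane = (x >> b) & 1               # bit b of every input element
    result += (W @ plane) << b         # weight the pass by 2^b

assert np.array_equal(result, W @ x)   # matches the full-precision MVM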

 

The Path Ahead

EnCharge AI, while promising, reportedly does not have customers lined up yet. Additionally, there are already well-funded competitors making comparable promises.

One such example is the European company Axelera AI, which announced $27 million in Series A funding in October 2022. Like EnCharge AI, Axelera AI features in-memory computing and popular AI framework support heavily in its claims. Axelera AI also has development kits available for purchase and its Voyager SDK available for early access.

Unlike EnCharge AI, however, Axelera AI claims 15 TOPS/W efficiency, an order of magnitude below EnCharge AI’s promised 150 TOPS/W.

 

All images used courtesy of EnCharge AI