News

AI Startup Lands $27M to Commercialize In-memory Compute at the Edge

November 01, 2022 by Darshil Patel

Axelera AI claims to have concentrated the computational power of an entire server into a single chip at a fraction of the power and price of today's AI hardware.

European artificial intelligence startup Axelera AI recently landed $27 million USD in funding to commercialize its AI acceleration platform. The capital will support the launch and mass production of its first-generation AI hardware and software, combining multicore in-memory computing with a custom dataflow architecture.

 


Depiction of Axelera AI's AI acceleration platform. Image used courtesy of Axelera AI

 

The new platform aims to bring machine learning to the edge, making AI more efficient and accessible. It is designed to help companies developing AI-based products, targeting markets such as Industry 4.0.

 

The Origin of Axelera AI

Recognizing that available edge-computing systems were inefficient and expensive, Axelera AI's founders, Fabrizio Del Maffeo from imec and Evangelos Eleftheriou from the IBM Zurich Research Laboratory, along with colleagues from both institutions, set out to rethink AI hardware at the edge. In July 2021, they founded Axelera AI to commercialize their technology, and in December of the same year, they taped out Thetis Core, the company's first test chip.

Thetis Core delivers 39.3 tera operations per second (TOPS) with an efficiency of 14.1 TOPS per watt (TOPS/W) in an area of less than 9 square millimeters. The chip leverages the company's in-memory computing technology. Axelera AI has also created a software stack that lets customers run neural networks on its AI platform without retraining.

This week, Axelera AI closed a $27 million USD Series A round with participation from imec.xpand and the Federal Holding and Investment Company of Belgium. The company is now developing the Thetis Core chip and its associated software for applications including security, retail, robotics, and Industry 4.0. It also plans to expand its research and development offices to the United States and Taiwan this year.

 

The Perks of In-memory Computing for the Edge

Edge computing refers to decentralized data processing, that is, processing data on or near the device that generates it. This approach reduces communication latency between the hardware and the central IT network. Even so, edge AI applications demand more powerful and efficient hardware architectures to keep latency and power consumption low.

Current von Neumann architectures separate the processing and memory units. As Axelera AI's founders observed, this traditional architecture requires a significant amount of data to be shuttled back and forth between the two, increasing latency and power consumption. In-memory computing architectures, by contrast, process data in the memory itself, slashing the time and power spent on data transfer.

 

Using Crossbar Arrays for In-memory Compute

Axelera AI uses a popular in-memory computing architecture based on crossbar arrays of memory devices. These arrays can both store weights and perform the matrix-vector multiplications (MVMs) at the heart of deep-learning workloads.

A crossbar array consists of two layers of parallel metal lines, with the top electrodes running perpendicular to the bottom electrodes. A memory storage element, such as a capacitor or a memristor, sits at each intersection of the lines. The speed of read and write operations depends on the type of memory device. The researchers at Axelera AI report that SRAM, Flash, and various memristive memories are suitable for MVM operations.
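Conceptually, a crossbar computes an MVM in the analog domain: each cell's current follows Ohm's law (I = G·V), and Kirchhoff's current law sums the currents down each column. A minimal NumPy sketch of this idealized behavior (all conductance and voltage values are illustrative, not from Axelera AI):

```python
import numpy as np

# Hypothetical 4x3 crossbar: each cell stores a weight as a conductance G[i][j].
G = np.array([[0.2, 0.5, 0.1],
              [0.7, 0.3, 0.9],
              [0.4, 0.8, 0.6],
              [0.1, 0.2, 0.5]])  # illustrative conductances

# Input activations applied as voltages on the rows.
V = np.array([1.0, 0.5, 0.25, 0.0])

# Ohm's law per cell plus Kirchhoff's summation per column yields the
# whole matrix-vector product in a single analog step:
I = V @ G  # column currents = transpose(G) . V

print(I)
```

In a physical array the same result appears as column currents in one shot, rather than as the row-by-row multiply-accumulate loop a von Neumann processor would execute.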

 


Illustration of the in-memory computing architecture. Image used courtesy of Axelera AI

 

The researchers are working on multicore in-memory computing, whose performance depends on several factors: an optimized memory hierarchy, a well-balanced fabric, a fine-tuned quantization flow, optimized weight-mapping strategies, and a versatile compiler and software toolchain. Axelera AI combines its in-memory computing technology with a custom dataflow architecture for high throughput, efficiency, and accuracy.
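A quantization flow maps floating-point network weights onto the low-precision integers the hardware operates on. Axelera AI has not published its scheme; as a generic illustration, a symmetric per-tensor INT8 quantizer can be sketched as:

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor INT8 quantization: scale floats into [-127, 127]."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

w = np.array([0.31, -0.62, 0.05, 0.93])
q, scale = quantize_int8(w)

# Dequantize to check that the round-trip error stays within half a step.
w_hat = q.astype(np.float32) * scale
print(np.abs(w - w_hat).max())
```

Tuning such a flow (per-channel scales, calibration data, rounding modes) is what lets INT8 inference match floating-point accuracy closely enough that customers need not retrain their networks.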

 

The Thetis Core Chip

With the support of imec.IC-link (a TSMC value chain aggregator), Axelera AI taped out the Thetis Core test chip to demonstrate the performance of its in-memory computing technology.

Released in May 2022, Thetis Core performs 39.3 TOPS with an efficiency of 14.1 TOPS/W at INT8 precision in less than 9 square millimeters. It demonstrates a peak energy efficiency of 33 TOPS/W. By increasing the clock frequency, peak throughput can reach 48.16 TOPS, according to Axelera AI.
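The quoted figures imply the chip's power draw at that operating point, since power is throughput divided by efficiency. A back-of-envelope check using only the numbers above:

```python
throughput_tops = 39.3     # tera operations per second (quoted)
efficiency_tops_w = 14.1   # TOPS per watt (quoted)

# Power = throughput / efficiency
power_w = throughput_tops / efficiency_tops_w
print(round(power_w, 2))   # roughly 2.79 W at this operating point
```

That sub-3 W envelope is what makes server-class throughput plausible in fanless edge devices.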

 


The Thetis Core chip. Image used courtesy of Axelera AI

 

The company plans to integrate multiple versions of the in-memory computing engine in its first product, which is slated to launch with select customers and partners in early 2023.