Axelera AI Raises $50 Million to ‘Mainstream’ Edge AI

May 31, 2023 by Jake Hertz

Leveraging its proprietary in-memory compute scheme and RISC-V dataflow technology, Axelera AI seeks to “democratize” artificial intelligence (AI).

To many, achieving high-performance, low-power machine learning computation is the most important challenge in computing. Solving it would bring AI to the edge, where it can be democratized and integrated into more of the devices we interact with on a daily basis.

While edge AI is a daunting task, significant progress has been made in the field in recent years. One company, Axelera AI, is particularly noteworthy in this area thanks to its unique edge AI processor offerings. Recently, Axelera AI announced that new investors joining its oversubscribed Series A round have brought the total raised to $50 million, funding that will further support its edge AI efforts.

All About Circuits had the chance to interview Fabrizio Del Maffeo, co-founder and CEO of Axelera AI, to learn more about the company and its technology firsthand.


Two Main Edge AI Challenges

According to Axelera, there are two main challenges that make edge AI difficult from a computing perspective. The first challenge is that the vast majority of computation that occurs within a machine learning algorithm is matrix multiplication.

“If you look inside any neural network you can see that 70% to 90% of the computations are just matrix-vector multiplications,” says Del Maffeo. “So you actually spend a majority of time multiplying tables of numbers, and the rest of the time you’re doing other operations.”
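To see why matrix-vector products dominate, consider a toy fully connected network (the layer sizes below are hypothetical, chosen only for illustration). Counting one multiply-accumulate (MAC) per weight against the handful of bias and activation operations per output shows the MAC share is overwhelming; real networks with normalization, pooling, and other layers land closer to the 70% to 90% range Del Maffeo cites.

```python
# Illustrative sketch (not Axelera code): tally operations in a tiny
# fully connected network to see how matrix-vector products dominate.

def layer_op_counts(in_dim, out_dim):
    """Return (matvec_macs, other_ops) for one dense layer + activation."""
    matvec_macs = in_dim * out_dim  # one multiply-accumulate per weight
    other_ops = out_dim * 2         # bias add + activation per output
    return matvec_macs, other_ops

# Hypothetical 3-layer MLP dimensions.
dims = [(784, 256), (256, 128), (128, 10)]
macs = sum(layer_op_counts(i, o)[0] for i, o in dims)
other = sum(layer_op_counts(i, o)[1] for i, o in dims)

share = macs / (macs + other)
print(f"MAC share of total ops: {share:.1%}")
```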



Data movement energy is the largest contributor to ML computation power consumption. Image used courtesy of Axelera


The reason this poses a challenge is the large amount of data that must move to complete a matrix multiplication. Due to a confluence of factors, including rising clock frequencies and shrinking chip dimensions, the parasitic impedance of chips has increased significantly in recent decades.

As a result, the energy required to move data in and out of memory has grown significantly in von Neumann architectures. To put this in perspective, it has been shown that, in some instances, the power cost of a 32-bit SRAM read is 25x that of an 8-bit multiply.
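The arithmetic behind that 25x claim can be sketched with commonly cited ballpark energy figures for an older process node (assumed values for illustration, not Axelera measurements). The point is that if every multiply had to fetch its operands from SRAM, memory access, not arithmetic, would set the energy budget.

```python
# Rough energy arithmetic with assumed ballpark figures in picojoules
# (commonly cited ~45 nm estimates, not Axelera measurements).
E_MUL_8B = 0.2         # 8-bit integer multiply
E_SRAM_READ_32B = 5.0  # 32-bit read from a local SRAM

ratio = E_SRAM_READ_32B / E_MUL_8B
print(f"An SRAM read costs {ratio:.0f}x an 8-bit multiply")

# Memory's share of a naive MAC that reads its operand from SRAM:
per_mac = E_MUL_8B + E_SRAM_READ_32B
print(f"Memory share of the energy: {E_SRAM_READ_32B / per_mac:.0%}")
```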


Axelera’s Architecture and Metis

For Axelera, the way to achieve low-power, high-performance edge AI is to directly reduce the contribution of data-movement energy. Hence, when Axelera began architecting its first products in 2019, it chose a digital in-memory computing solution.

“We designed our platform to feature digital in-memory computing—where we take advantage of the fact that SRAM already stores data in a matrix,” says Del Maffeo.


”Close to every single element of this memory, we put a small computing unit that does just two things: multiply and accumulate the numbers.”


With this in-memory computation, Axelera’s products—most notably their Metis Processor—are able to perform matrix multiplication efficiently in the memory unit. This effectively removes the need for data movement in and out of memory and results in AI processing that requires less power and achieves higher throughput than von Neumann architectures.



Axelera’s digital in-memory compute architecture. Image used courtesy of Axelera


“The real benefit of our technology is that we parallelize extremely all the vector-matrix multiplications, which means high throughput,” says Del Maffeo.


“We don't move the data because the memory and the computing are close by, which means low power consumption and a smaller footprint. And with this, we solve this matrix 70% to 90% of the computation of a neural network.”



The Metis Processor interacts with an external host. Image used courtesy of Axelera


Notably, the Axelera architecture features RISC-V-controlled dataflow technology, in which an external host interacts with the world and runs application firmware. In this architecture, the Metis Processor is used exclusively for the storage and computation of machine learning algorithms.

The result is that the Metis Processor is capable of 15 TOPS/W energy efficiency, 50+ TOPS/core, and 214 TOPS/chip.
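As a back-of-envelope sanity check (naive arithmetic on the published figures, not a datasheet number), dividing peak chip throughput by the quoted efficiency gives an implied power at peak throughput:

```python
# Naive dimensional arithmetic on the quoted figures; peak TOPS and
# TOPS/W are rarely achieved simultaneously, so this is only indicative.
tops_per_chip = 214   # quoted peak throughput
tops_per_watt = 15    # quoted energy efficiency

implied_watts = tops_per_chip / tops_per_watt
print(f"Implied power at peak throughput: ~{implied_watts:.0f} W")
```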


New Funding

Axelera claims that the new funding will be used to help the company expand, including pushing forward mass production of its AI platform. Reflecting on the funding, Del Maffeo says he’s ready to scale up.


“We raised $50 million in 21 months, which I think is a testament to what we are doing. Now, we are scaling up everything and raising more money to expand operations.”