Arm Adds Machine Learning and Neural Processing IP to Its AI Platform

February 11, 2020 by Gary Elinoff

Arm’s Cortex-M55 and its Ethos-U55 NPU are designed to deliver a combined 480x leap in machine learning performance.

Arm has announced new intellectual property (IP) that will bring machine learning to microcontrollers in IoT devices. The new IP joins the ever-expanding universe of Arm Cortex-M tools and software, an integral part of many chips deployed in devices worldwide.


Cortex-M55 and Ethos-U55 processors working together in a smart speaker. Image used courtesy of Arm

The two new units, the Cortex-M55 processor and the Ethos-U55 neural processing unit (NPU), along with their unified toolchain, will allow developers to design countless tiny, low-power IoT and embedded devices with onboard machine learning (ML) processing capacity.


Advances in IoT, 5G, and AI Technologies

Advances in 5G will make it easier for IoT devices to communicate with the cloud where AI analysis can take place. But, there are advantages to local decision making for these edge devices. 

For one, edge computing is quicker, since it avoids the communication latency of the round trip from the edge device to the cloud and back. For another, the more “thinking” done locally, the less raw data must be transmitted, and the fewer opportunities a malefactor has to intercept it.


Arm's AI processing applies over various use cases. Image used courtesy of Dipti Vachani

This isn't the first time Arm has invested in machine learning processors. Just last year, Arm released IP designed to bring down the cost of AI for mainstream products.

Arm’s new IP is also designed with security in mind, making it easier for engineers to build security into the very core of their designs.

Dipti Vachani, senior vice president and general manager of Arm's automotive and IoT line of business, explains, “Enabling AI everywhere requires device makers and developers to deliver machine learning locally on billions, and ultimately trillions of devices.”

She goes on to state that “With these additions to our AI platform, no device is left behind as on-device ML on the tiniest devices will be the new normal, unleashing the potential of AI securely across a vast range of life-changing applications.”


The Arm Cortex-M55 Processor

Cortex-M processors are said to be part of 50 billion chips deployed by Arm partners. The Cortex-M55 is the first AI-capable Cortex-M device based on the Armv8.1-M architecture with Arm Helium vector processing technology, offering significant improvements over previous generations of Cortex-M devices.


Block diagram of Cortex-M55. Image (modified) used courtesy of Arm

Specifically, Arm claims a five-fold increase in digital signal processing (DSP) performance and a fifteen-fold improvement in ML performance over previous Cortex-M generations.


The Ethos-U55 Neural Processing Unit

The Ethos-U55 microNPU, when paired with the Cortex-M55, can achieve what Arm claims is a 480x leap in ML performance compared with previous Cortex-M-based MCUs.

The versatile Ethos-U55 is aimed at accelerating ML inference—the process of generating predictions from a trained ML model—and was designed specifically for the tight power and memory constraints of IoT and embedded devices. Built-in compression techniques reduce both the size of ML models and the chip's power requirements.
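One common compression technique in this space is post-training quantization, where 32-bit float weights are stored as 8-bit integers plus a scale factor, cutting model storage by roughly 4x. The article doesn't detail the Ethos-U55's specific scheme, so the sketch below is only an illustrative, minimal symmetric int8 quantizer; the function names and sample weights are invented for illustration:

```python
# Minimal sketch of symmetric int8 post-training quantization, the style
# of model compression used on microcontroller-class ML hardware.
# Function names and weight values here are illustrative, not Arm's API.

def quantize_int8(weights):
    """Map float weights to int8 using a single symmetric scale factor."""
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 127.0 if max_abs else 1.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from int8 values."""
    return [v * scale for v in q]

weights = [0.82, -0.44, 0.09, -1.27, 0.31]
q, scale = quantize_int8(weights)
print(q)  # [82, -44, 9, -127, 31]: 1 byte each vs 4 bytes for float32
```

Each weight now occupies one byte instead of four, and the reconstruction error is bounded by the scale factor, which is why small models tolerate this well.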


Ethos-U55 allows embedded ML inference. Image (modified) used courtesy of Arm

The end result is that edge devices can execute neural networks locally with fewer references to the cloud, enabling processing closer to where the device interfaces with the physical world.
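Executing a quantized network locally comes down to integer arithmetic: int8 multiplies accumulated in 32 bits, then rescaled back to int8. The sketch below is a hedged illustration of that style of computation, which NPUs such as the Ethos-U55 implement in hardware; the function, tensor values, and scale factors are all invented for the example:

```python
# Illustrative sketch: one quantized "dense" step done entirely in
# integer arithmetic, as on microcontroller-class ML hardware.
# All values and scales are made up; this is not Arm's implementation.

def int8_dense(x_q, w_q, x_scale, w_scale, out_scale):
    """Int8 dot product with a 32-bit accumulator, requantized to int8."""
    acc = sum(xi * wi for xi, wi in zip(x_q, w_q))      # fits in int32
    out = round(acc * (x_scale * w_scale) / out_scale)  # rescale once
    return max(-128, min(127, out))                     # saturate to int8

# One output neuron: three int8 activations against three int8 weights.
y = int8_dense([40, -10, 25], [50, 40, -20],
               x_scale=0.02, w_scale=0.01, out_scale=0.05)
print(y)  # 4
```

Because every multiply-accumulate stays in integer registers and the float rescale happens only once per output, this is far cheaper in silicon and energy than float32 inference, which is what makes on-device execution practical.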


What Arm's Partners Are Saying

Google: Ian Nappier, Product Manager, TensorFlow Lite for Microcontrollers

“Google and Arm have been collaborating to fully optimize TensorFlow on Arm’s architecture, enabling machine learning on embedded devices for very power-constrained and cost-sensitive applications, often deployed without network connectivity. This new IP from Arm furthers our shared vision of billions of TensorFlow-enabled devices using ML at the endpoint. These devices can run neural network models on batteries for years, and deliver low-latency inference directly on the device.” 


The Cortex-M55 and Ethos-U55 processors using the TensorFlow ML framework. Image used courtesy of Arm

NXP: Geoff Lees, Senior Vice President of Edge Processing

“The ‘Empowered Edge’ is becoming a new megatrend, driven by new AI paradigms, as well as the challenges of cloud-based processing like cost, latency, reliability, and privacy. Arm’s new endpoint ML technologies will help NXP’s broad base of microcontroller developers accelerate edge inference in devices limited by size and power.”

STMicroelectronics: Ricardo De Sa Earp, General Manager, Microcontroller Division

“The new Arm Cortex-M55 technology offers the enhanced ML performance and efficiency needed for the next generation of ST microcontrollers. Its support in the STM32Cube.AI tools and ecosystem means this new technology will be simple, fast and optimized to use for all STM32 developers, improving the already wide range of AI applications accessible to them.”