Three Trends in Computing Hardware Driving the Edge

March 01, 2022 by Jake Hertz

How will edge computing design evolve over the next few years? In an interview with All About Circuits, NXP discussed the trends they're seeing in this space.

With the advent of IoT and the proliferation of small battery-powered devices, edge computing has become a dominating field in the electronics industry. The new requirements demanded by these low-power, high-performance edge devices have caused engineers to reimagine computing and device management.


Edge device design tradeoffs. Image used courtesy of NXP


In a recent e-book, NXP lays out the fundamentals of edge computing—which acts as a middle ground between cloud computing and embedded systems—as well as the trends shaping the future of edge design. All About Circuits had the chance to speak with Robert Oshana, NXP's VP of software engineering R&D, to discuss what designers should know about the future of edge design.


Trend #1: Single Core to Multi-core

One of the biggest conundrums of edge computing is simultaneously achieving high performance for intensive applications like ML while also minimizing power consumption for battery-powered devices. Historically, engineers have relied on Moore’s law and increased clock frequencies to achieve better performance and lower power—but that is no longer the case.

At the edge, higher frequency means more power consumption, so engineers must find new ways to compute while saving power. One popular solution for low-power applications is energy harvesting. A more common remedy, however, has been the shift from a single-core architecture to a multi-core architecture.


Single-core vs. multi-core processor architecture. Image used courtesy of Curtiss Wright


“You can't keep cranking up the frequency of a device, especially in an embedded system or wearable. So in the last decade, we stopped using the single-core, high-frequency architecture. Instead, we go on the multi-core and turn the frequency down, meaning less power,” Oshana explained. “You're still getting the job done, but the programming model changes. You have to spread the computation over multiple cores.”
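The programming-model change Oshana describes can be sketched in a few lines: work that one fast core once did serially is decomposed into chunks handled by separate workers. (This is an illustrative sketch, not real edge firmware; a thread pool shows the decomposition, and on a real multi-core MCU each worker would map to a physical core.)

```python
from concurrent.futures import ThreadPoolExecutor

def process_chunk(samples):
    """Stand-in for the per-core share of a signal-processing job."""
    return sum(s * s for s in samples)

def run_parallel(data, n_workers=2):
    # Split the input so each worker gets an independent chunk
    # (assumes len(data) is divisible by n_workers, for brevity).
    size = len(data) // n_workers
    chunks = [data[i * size:(i + 1) * size] for i in range(n_workers)]
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        # Each chunk is processed independently, then results combine.
        return sum(pool.map(process_chunk, chunks))

data = list(range(1000))
print(run_parallel(data) == sum(s * s for s in data))  # True: same answer, work split across workers
```

The point is that the result is identical; what changes is how the computation is organized, which is exactly the shift in programming model the quote refers to.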

The idea is that using multiple parallel cores at a lower frequency can achieve the same computational performance as a single core at a higher frequency. The difference, of course, is that multi-core architectures can achieve lower power consumption for the same performance.
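The savings can be made concrete with the standard first-order model of dynamic power, P ≈ C·V²·f: two cores at half the frequency match the throughput of one core at full frequency, and the lower clock typically permits a lower supply voltage, which is where the quadratic savings come from. The numbers below are illustrative only, not from any specific NXP part.

```python
def dynamic_power(cap_farads, volts, freq_hz):
    """First-order dynamic switching power of one core: P = C * V^2 * f."""
    return cap_farads * volts**2 * freq_hz

C = 1e-9  # effective switched capacitance per core (illustrative)

# One core at 1 GHz and 1.1 V...
single = dynamic_power(C, 1.1, 1e9)

# ...vs. two cores at 500 MHz each; the lower clock allows
# a lower supply voltage (say 0.9 V), and V enters squared.
dual = 2 * dynamic_power(C, 0.9, 0.5e9)

print(f"single-core: {single:.2f} W")                 # 1.21 W
print(f"dual-core:   {dual:.2f} W")                   # 0.81 W
print(f"savings:     {(1 - dual / single) * 100:.0f}%")  # 33%
```

Same total of core-cycles per second, roughly a third less dynamic power—driven almost entirely by the voltage term.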


Trend #2: A Shift Toward Accelerators 

Another emerging trend in edge computing is the increased use of hardware accelerators.

General-purpose processing units are simply not power-efficient enough for specialized computing tasks like machine learning, especially as Moore’s law slows. Instead, researchers and engineers have realized that hardware accelerators, which are optimized for a single task (e.g., multiply-accumulate), can achieve better performance and power efficiency.


An edge computing MCU architecture consists of a variety of application-specific hardware blocks. Image used courtesy of NXP
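The multiply-accumulate (MAC) operation mentioned above is the workhorse of ML inference: a neural-network layer is largely dot products, which is why accelerators harden arrays of MAC units in silicon rather than running them on a general-purpose core. A minimal sketch of the operation itself, with illustrative values:

```python
def mac_dot(weights, activations):
    """Dot product built from explicit multiply-accumulate steps."""
    acc = 0
    for w, a in zip(weights, activations):
        acc += w * a  # one MAC; an accelerator performs many of these per cycle
    return acc

# Example: one output of a tiny quantized (int8-style) layer.
weights = [12, -7, 3, 5]
activations = [100, 20, -15, 8]
print(mac_dot(weights, activations))  # 1055
```

A real layer repeats this loop millions of times, so a block that does hundreds of MACs per cycle at low precision wins decisively on both performance and energy per inference.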

“In some form or fashion, three things must exist within edge computing: performance, memory, and power,” Oshana remarked. “Hardware accelerators, like a machine learning accelerator, a GPU, or a DSP, perform operations much more quickly so that you can optimize performance.”

Oshana said the move toward machine learning accelerators today compares to the introduction of digital signal processing (DSP) in the 1990s. 

“Digital signal processors with very long instruction word architectures started to become commercialized, and then came along the optimizing compilers that went along with them,” Oshana said. “In machine learning, I see the exact same thing happening. It's got its own set of complicated algorithms that needs hardware to accelerate it. It needs compilers—in this case, compilers like Glow—to optimize the algorithms as well. You'll see this continue to evolve in the exact same way DSP did.”

A challenge with this trend, however, is that there is currently no consensus on the best types of hardware accelerators. As a result, the industry is rife with hundreds of startups, each offering its own approach to hardware acceleration. NXP foresees that a handful of these startups will eventually win out and the industry will consolidate around a few kinds of accelerators.


Trend #3: Secure Device Lifecycle Management

A third trend occurring at the edge is the increased importance of device lifecycle management, especially with regard to security. IoT devices are often designed to be deployed remotely in the field, ideally without human intervention for years on end. This can complicate remote device management.

“You may want that device to connect to the cloud—let's say to make a software update to a device in the field. Over ten years, you may want to update the software once a year with over-the-air updates. You have to do that in a secure way.”

The solution to this challenge can start with hardware components before deployment. “Hardware blocks allow you to store private keys and communicate. It also includes different forms of secure software to handle the communication and the connectivity.”

Some specific hardware security features include secure boot and secure processing. Secure protocols for wireless and cloud communications are also on the rise. To decommission an edge node no longer in use, all proprietary or private information must be wiped. 
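The secure-boot idea can be sketched in a few lines: before executing an image, the boot code measures it and refuses anything that doesn't match a trusted value. Real secure boot uses asymmetric signatures with keys fused into hardware; a plain SHA-256 digest is used here only to keep the sketch self-contained.

```python
import hashlib
import hmac

# Trusted reference digest, provisioned at manufacture time
# (in real hardware this would be a signed value anchored to fused keys).
TRUSTED_DIGEST = hashlib.sha256(b"firmware v1.0 payload").hexdigest()

def secure_boot(image: bytes, trusted_digest: str) -> bool:
    """Measure the image and refuse to boot anything that doesn't match."""
    measured = hashlib.sha256(image).hexdigest()
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(measured, trusted_digest)

print(secure_boot(b"firmware v1.0 payload", TRUSTED_DIGEST))  # True: genuine image boots
print(secure_boot(b"tampered payload", TRUSTED_DIGEST))       # False: modified image is rejected
```

The same measure-and-verify pattern extends up the stack: each boot stage verifies the next, forming the chain of trust that over-the-air updates and the rest of the device lifecycle rely on.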


SoC lifecycle. Image used courtesy of NXP

Oshana explained, “There are ways in hardware where we could essentially zero out RAM and ROM, so you cannot get access to any code on the device or any firmware.”

“For device lifecycle management, you need various levels of security,” Oshana said. “Sometimes this is referred to as the ‘defense-in-depth model,’ where you've got different levels of security. What it includes is things like secure boot, secure processing, and a TrustZone architecture and a Secure Enclave [in NXP's case] on the device.”

He continued, “When you're communicating to, say, the cloud, you're doing that using secure protocols. And there's a security model in place for every part of that lifecycle of the device. That's what we call device lifecycle management.”


The Future of Edge Design is a Future in Flux

For an engineer entering this space, the ability to adapt to a rapidly changing field will be essential. Skills such as hardware/software security, heterogeneous computing, and low-power electronics will undoubtedly be crucial for future edge hardware engineers. Beyond this, future engineers should be prepared to be fast learners, as the field is evolving quickly and could look very different in a year’s time.

Emerging IoT protocols like Matter may also become standardized, replacing disparate protocols like Bluetooth and Zigbee. Oshana also predicts that new protocols for higher-power applications like 5G will gain wider acceptance. For instance, a protocol dubbed RedCap (reduced capability) New Radio may further develop the landscape of NR devices and generate more 5G use cases.

Oshana concluded, “Edge computing is going to grow because we can't do everything in the cloud anymore. In order to optimize processing at the edge, we need to evolve the way we build hardware and software systems.”

He added, “Edge technology communicates upwards to the cloud. It communicates downwards to the end nodes. These small little devices sit right in the middle, so it's got to orchestrate both ways. As a result, it's got unique challenges.”