News

TinyML Summit: Designing Multi-model Smart HMIs

May 02, 2023 by Jake Hertz

At TinyML Summit 2023, NXP presented an approach to developing smart human-machine interfaces (HMI).

TinyML is arguably one of the hottest and most important fields of research in the electronics industry. The TinyML Summit recently made its 2023 sessions available for public viewing. Following our article yesterday, we're back with another TinyML Summit story, this time focusing on NXP’s presentation entitled “Designing Multi-Model Smart Human Machine Interfaces with Microcontrollers,” delivered by Sriram Kalluri, product manager at NXP Semiconductors.
 


Smart HMIs incorporate machine learning into standard HMIs. Image used courtesy of NXP

 

In this article, we’ll take a look at smart HMIs, their challenges, and the contents of NXP’s presentation. 

 

What is a Smart HMI?

Before we can understand the presentation from NXP, it’s helpful to first understand the concept of HMIs and smart HMIs.

A human-machine interface, or HMI, is best defined as the hardware and software through which a human user interacts with a piece of technology. For most modern consumer products, the HMI consists of a graphical display, such as an LCD, along with input devices for controlling the product, such as touchscreens, mice, and keyboards.

Smart HMIs, on the other hand, seek to augment the functionality of existing HMIs by imbuing them with intelligent features. Often, smart HMIs work by incorporating machine learning models into HMIs to add functions such as computer vision (CV), automatic speech recognition (ASR), or keyword detection. For example, a smart HMI might be a smartphone that uses facial recognition to unlock the device.
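To give a sense of what such an on-device model looks like in practice, below is a minimal sketch (not from NXP's presentation) of running a quantized keyword-spotting model with TensorFlow Lite for Microcontrollers, a common runtime for this class of MCU workload. The model array (g_kws_model_data) and the upstream audio feature extraction are assumed to exist elsewhere.

```cpp
#include <cstdint>
#include <cstring>

#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"
#include "tensorflow/lite/schema/schema_generated.h"

// Compiled-in model weights -- assumed to be generated offline
// (e.g., with xxd) from a trained keyword-spotting model.
extern const unsigned char g_kws_model_data[];

// Static working memory for the interpreter's tensors.
constexpr int kArenaSize = 20 * 1024;
static uint8_t tensor_arena[kArenaSize];

// Run one window of quantized audio features through the model and
// return the score for the wake-word class.
int8_t RunKeywordSpotting(const int8_t* features, size_t feature_len) {
  const tflite::Model* model = tflite::GetModel(g_kws_model_data);

  // Register only the ops this model uses, to keep the binary small.
  static tflite::MicroMutableOpResolver<4> resolver;
  resolver.AddConv2D();
  resolver.AddDepthwiseConv2D();
  resolver.AddFullyConnected();
  resolver.AddSoftmax();

  static tflite::MicroInterpreter interpreter(model, resolver, tensor_arena,
                                              kArenaSize);
  interpreter.AllocateTensors();  // plan tensor memory within the arena

  // Copy the features into the input tensor and run inference.
  std::memcpy(interpreter.input(0)->data.int8, features, feature_len);
  interpreter.Invoke();

  // One output score per keyword class; index 0 is the wake word here.
  return interpreter.output(0)->data.int8[0];
}
```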

 


Face recognition is one kind of capability that can be part of an ML-driven smart HMI. Image used courtesy of NXP

 

As Kalluri explained in his talk, “A traditional HMI might have a graphics display, and then the interaction is basically through physical keys or through touch. The idea with a smart HMI is that you take the HMI further by incorporating voice control, face recognition, and gesture recognition as well.”

By doing this, smart HMIs make human-machine interaction more intuitive and efficient for the user.

 

Requirements and Challenges with Smart HMIs

Generally speaking, smart HMIs are expected to be deployed on battery-powered consumer devices such as smartphones and laptops. Hence, when designing smart HMIs, the main goal is a design that is simultaneously high-performance, low-latency, and feature-rich, yet low-power enough to extend battery life.

According to Kalluri, the main challenge in achieving all of these requirements simultaneously is the implementation of machine learning models. He notes, “While considering these requirements, a main design challenge that comes up is the implementation and incorporation of machine learning (ML) models. Generally, these require a high initial investment and present a large barrier to entry.”

Additionally, Kalluri notes that developing feature-rich HMI applications can be very expensive and complex, even beyond implementing and incorporating the ML models.

 

NXP's Solution Leverages ML Models

In their presentation, NXP showed their solution, which includes a 5.5-inch LCD display, dual-band 1x1 Wi-Fi 4 + BLE connectivity via the NXP IW416, a 720p RGB image sensor, and a digital microphone. The solution uses a proprietary software pipeline to cohesively integrate multiple ML models into one smart solution. This includes machine vision to support user identification via facial recognition, as well as voice recognition and control.
 


High-level block diagram of NXP’s solution. Image used courtesy of NXP

 

To run all of these algorithms locally and at low power requires a combination of clever software design and efficient hardware. On the hardware side, the solution is centered around an NXP i.MX RT117H crossover MCU, which features a 1 GHz Arm Cortex-M7 core as well as a 400 MHz Arm Cortex-M4 core.

Kalluri explains the hardware setup, telling the audience, “The silicon that we are using for our smart HMI, it's dual-core silicon. We run the vision and voice algorithms on the Cortex-M7, while the Cortex-M4 drives the display and provides the system control.” Importantly, the system does not require any dedicated DSPs or accelerators to run the local algorithms.
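To make that division of labor concrete, here is a hypothetical sketch of the dual-core split Kalluri describes. The FaceResult struct and the ipc_send/ipc_receive mailbox are illustrative stand-ins, not NXP's actual APIs; a real design would use an inter-core channel such as shared RAM or RPMsg.

```cpp
#include <cstdio>

// Result the M7 produces and the M4 consumes.
struct FaceResult {
  int user_id;       // -1 if no enrolled face matched
  float confidence;  // model's match confidence, 0.0-1.0
};

// Hypothetical inter-core mailbox (stubbed here so the sketch compiles).
bool ipc_send(const FaceResult&) { return true; }
bool ipc_receive(FaceResult* out) { (void)out; return false; }

// --- Runs on the 1 GHz Cortex-M7: vision and voice inference ---
void vision_task() {
  for (;;) {
    // Capture a camera frame and run face recognition locally
    // (placeholder result stands in for the real inference output).
    FaceResult result{/*user_id=*/0, /*confidence=*/0.97f};
    ipc_send(result);  // hand the result off to the M4
  }
}

// --- Runs on the 400 MHz Cortex-M4: display and system control ---
void ui_task() {
  FaceResult r;
  for (;;) {
    if (ipc_receive(&r) && r.user_id >= 0) {
      // A recognized user: unlock and refresh the display.
      std::printf("Welcome, user %d (%.0f%%)\n", r.user_id,
                  r.confidence * 100.0f);
    }
  }
}
```

The appeal of this kind of partitioning is that the compute-heavy ML loop never blocks the UI: the M4 keeps the display responsive while the M7 churns through frames.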

 

Making Smart HMIs Simpler

Through their presentation, NXP demonstrated that smart HMIs are achievable at the edge with a relatively low barrier to entry. Their solution runs entirely on MCUs, without any specialized hardware acceleration, and enables a feature-rich HMI offering facial recognition, audio recognition, graphics, and gesture control. With this, NXP hopes to make TinyML-based smart HMIs more accessible and attainable for designers.