Next-generation voice-controlled HMIs employ audio-optimized architectures and powerful algorithms to deliver speaker authentication and contextual awareness.

Human-machine interface (HMI) technology, having transformed the mobile industry with the touchscreen iPhone and iPad, is now going through a makeover of sorts to accommodate new markets like automotive, smart home, and industrial automation.

Take vehicles, for instance, where a driver navigating a touchscreen inevitably becomes distracted. Industrial machines, meanwhile, mostly still operate with outdated front panels, and their operators want HMIs that are highly graphical and interactive. Connectivity will also allow industrial operators to access machines safely, even remotely.

A number of technologies are being considered for implementing intuitive HMI solutions beyond the prevalent touchscreens. For example, Cypress Semiconductor is adopting an intuitive and precise handwriting recognition technology for automotive HMI designs.

The chipmaker is pairing its Traveo family of microcontrollers with MyScript's handwriting input technology to let drivers either quickly write on a touchscreen or simply gesture with their fingertips. Cypress offers capacitive touchscreen and touchpad solutions based on its Traveo MCUs.


Voice-controlled HMIs are taking off in smart home applications. Image courtesy of Infineon Technologies AG.


However, when it comes to next-generation user interfaces, voice-controlled HMI technology is probably making the most noise. But it's easier said than done. For a start, it requires significant improvements in gesture recognition, voice recognition, and touchless sensing technologies.


Voice-Controlled HMI

There are two key building blocks in intuitive HMIs based on high-performance voice control: intelligent human-sensing microphones and voice recognition chips that can differentiate speech from different sources in different physical environments.

The first part of the voice control design—microphone sensors that detect the position and distance of the speaker—is now well positioned to address challenges like speaker authentication and contextual awareness.
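Detecting a speaker's position and distance from microphone signals typically rests on time-difference-of-arrival (TDOA) estimates between microphone pairs. The sketch below is purely illustrative—the function name and signals are not from any vendor mentioned here—and picks the lag that maximizes the cross-correlation of two mic channels:

```python
def tdoa_samples(sig_a, sig_b, max_lag):
    """Estimate the time difference of arrival (in samples) between two mics.

    Returns the lag that maximizes the cross-correlation of the two
    signals. A negative lag means the sound reached mic A before mic B.
    Combined with the mic spacing and the speed of sound, this lag
    yields the speaker's direction. Illustrative only.
    """
    best_lag, best_score = 0, float("-inf")
    n = len(sig_a)
    for lag in range(-max_lag, max_lag + 1):
        score = sum(
            sig_a[i] * sig_b[i - lag]
            for i in range(n)
            if 0 <= i - lag < n
        )
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

# An impulse that hits mic A at sample 3 and mic B two samples later:
print(tdoa_samples([0, 0, 0, 1, 0, 0, 0, 0],
                   [0, 0, 0, 0, 0, 1, 0, 0], 3))  # -2
```

Production systems refine this idea with frequency-domain weighting and multiple microphone pairs, but the principle is the same.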

It's the second part—voice controllers that capture the speech—that is proving trickier. Is the speech coming from a person in the room, or from a synthesized source such as a radio or television? Voice controllers often pick the voice of interest simply by choosing the loudest source.
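The loudest-source heuristic described above can be sketched in a few lines; the labels and sample frames here are purely illustrative, not taken from any real device:

```python
import math

def rms_energy(samples):
    """Root-mean-square energy of one audio frame."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def pick_loudest(sources):
    """Return the label of the source with the highest frame energy.

    'sources' maps a label to a list of PCM samples. This mimics the
    naive heuristic: whichever source is loudest wins, regardless of
    whether it is live speech or a television.
    """
    return max(sources, key=lambda name: rms_energy(sources[name]))

# Illustrative frames: a quiet talker vs. a loud television.
frames = {
    "person": [0.1, -0.1, 0.12, -0.08],
    "tv":     [0.5, -0.6, 0.55, -0.4],
}
print(pick_loudest(frames))  # the TV wins, which is exactly the problem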

So, like Cypress Semiconductor, Infineon Technologies, a major supplier of chips for HMI designs, has partnered with a developer of audio algorithms for voice processing, biometrics, and artificial intelligence.


The voice DSP captures microphone signals and then isolates specific voice content. Image courtesy of XMOS Ltd.


Infineon recently made a strategic investment in XMOS Ltd, a fabless semiconductor firm based in Bristol, England. XMOS claims to have developed a silicon architecture and highly differentiated software that allow HMIs to focus on a specific voice in a crowded audio environment.
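XMOS does not disclose its algorithms here, but one standard building block for steering a microphone array toward a specific voice is delay-and-sum beamforming. A minimal, hypothetical sketch:

```python
def delay_and_sum(channels, delays):
    """Align each microphone channel by its integer sample delay, then average.

    channels: equal-length sample lists, one per microphone.
    delays:   per-channel delay (in samples) that steers the 'beam'
              toward a chosen direction. Signals arriving from that
              direction add coherently; others partially cancel.
    """
    n = len(channels[0])
    out = []
    for i in range(n):
        acc = 0.0
        for ch, d in zip(channels, delays):
            j = i - d
            if 0 <= j < n:
                acc += ch[j]
        out.append(acc / len(channels))
    return out

# A pulse that reaches mic 0 one sample before mic 1; delaying
# channel 0 by one sample lines the two copies up coherently.
print(delay_and_sum([[0, 1, 0, 0], [0, 0, 1, 0]], [1, 0]))
```

Real products add adaptive filtering and echo cancellation on top, but the steering idea is the same: choose the delays, and the array "listens" in one direction.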

These strategic relationships between HMI chipmakers and providers of voice and gesture recognition solutions show how HMI compute power is evolving in the IoT era.

So a lot more is going to change in the way HMI solutions capture, convey, and control the information flow. Stay tuned for more HMI innovations in the coming years.
