News

Researchers Enhance Natural Movement in Robotics Using AI

June 04, 2020 by Luke James

People who use robotic prosthetics could soon move more naturally, thanks to AI-backed technology developed by U.S. researchers.

Researchers at North Carolina State University (NC State) have developed a new framework that incorporates computer vision into prosthetic leg control and includes artificial intelligence (AI) algorithms that allow the software to account for uncertainty.

The new technology, which can be integrated with existing hardware, could enable people who use robotic prosthetics to walk more safely and naturally.

Reliable Environmental Context Prediction

Reliable environmental context prediction is critical for wearable robotics, such as prosthetics, that assist with terrain-adaptive locomotion. This is because lower-limb robotic prosthetics need to execute different behaviors depending on where the user is walking and what the terrain is like.

“The framework we’ve created allows the AI in robotic prostheses to predict the type of terrain users will be stepping on, quantify the uncertainties associated with that prediction, and then incorporate that uncertainty into its decision-making,” said Edgar Lobaton, co-author of the team’s paper and associate professor of electrical and computer engineering at NC State. 

In their research, the NC State team focused on distinguishing between six key terrains that require adjustments in prosthetic behavior: tile, brick, concrete, grass, upstairs, and downstairs. “If the degree of uncertainty is too high, the AI isn’t forced to make a questionable decision – it could instead notify the user that it doesn’t have enough confidence in its prediction to act, or it could default to a ‘safe’ mode,” said lead author Boxuan Zhong.
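
That threshold-and-fallback logic can be summarized in a few lines of code. The sketch below is illustrative only, assuming a classifier that outputs a probability for each of the six terrains; the entropy-based uncertainty measure and the threshold value are assumptions for the example, not details from the paper.

```python
import numpy as np

TERRAINS = ["tile", "brick", "concrete", "grass", "upstairs", "downstairs"]
UNCERTAINTY_THRESHOLD = 0.6  # illustrative value, not taken from the paper

def predictive_entropy(probs: np.ndarray) -> float:
    """Shannon entropy of the class probabilities, normalized to [0, 1]."""
    eps = 1e-12
    entropy = -np.sum(probs * np.log(probs + eps))
    return float(entropy / np.log(len(probs)))  # 1.0 = maximally uncertain

def select_behavior(probs: np.ndarray) -> str:
    """Pick a terrain-specific mode, or fall back to a 'safe' mode."""
    if predictive_entropy(probs) > UNCERTAINTY_THRESHOLD:
        return "safe"  # don't force a questionable decision
    return TERRAINS[int(np.argmax(probs))]

# A confident 'grass' prediction vs. an ambiguous one.
print(select_behavior(np.array([0.02, 0.02, 0.04, 0.88, 0.02, 0.02])))  # grass
print(select_behavior(np.array([0.20, 0.18, 0.17, 0.15, 0.15, 0.15])))  # safe
```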

A collection of images depicting the system and the various terrains where it has been tested. Image credit: Edgar Lobaton

A ‘Significant’ AI Advancement

The research team designed their “environmental context” framework for use with any type of lower-limb prosthetic and supplemented it with two cameras: one worn on eyeglasses and one mounted on the lower-limb prosthetic itself. Evaluations were then carried out to see how well the AI could make use of computer vision data from each camera separately and from both in tandem.

“Incorporating computer vision into control software for wearable robotics is an exciting new area of research,” said Helen Huang, a co-author of the paper. “We found that using both cameras worked well but required a great deal of computing power and may be cost-prohibitive. However, we also found that using only the camera mounted on the lower limb worked pretty well – particularly for near-term predictions, such as what the terrain would be like for the next step or two.”
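
The article does not describe how the two camera streams were combined. One simple, commonly used possibility is late fusion: each camera’s classifier produces its own probability distribution over the terrains, and the two distributions are averaged with a weighting. The function below is a hypothetical sketch of that idea; the weighting value is an assumption.

```python
import numpy as np

def fuse_predictions(eyeglass_probs: np.ndarray,
                     leg_probs: np.ndarray,
                     leg_weight: float = 0.6) -> np.ndarray:
    """Late fusion: weighted average of two cameras' class probabilities.

    The leg-mounted camera is weighted more heavily here because, per the
    article, it performed well for near-term predictions; the exact weight
    is purely illustrative.
    """
    fused = leg_weight * leg_probs + (1.0 - leg_weight) * eyeglass_probs
    return fused / fused.sum()  # renormalize to a valid distribution
```

The fused distribution could then feed the same uncertainty check sketched earlier.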

According to the researchers, the most significant part of their research is the advances made to the AI itself. They devised a “better way” to teach deep-learning systems how to evaluate and quantify uncertainty, enabling the system to incorporate that uncertainty into its decision-making.
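
The article does not spell out the team’s method, but one common way to make a deep network report uncertainty is Monte Carlo dropout: keep dropout active at inference time, run several stochastic forward passes, and treat the spread of the predictions as an uncertainty estimate. The PyTorch sketch below illustrates that general idea; the toy architecture, feature size, and number of passes are placeholders, not details from the paper.

```python
import torch
import torch.nn as nn

class TerrainNet(nn.Module):
    """Toy classifier standing in for the vision model (6 terrain classes)."""
    def __init__(self, n_features: int = 128, n_classes: int = 6):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64),
            nn.ReLU(),
            nn.Dropout(p=0.5),  # stays active at inference for MC dropout
            nn.Linear(64, n_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

@torch.no_grad()
def mc_dropout_predict(model: nn.Module, x: torch.Tensor, n_passes: int = 30):
    """Run repeated stochastic forward passes and summarize their spread."""
    model.train()  # keep dropout layers sampling at inference time
    probs = torch.stack(
        [torch.softmax(model(x), dim=-1) for _ in range(n_passes)]
    )
    mean_probs = probs.mean(dim=0)         # averaged prediction
    uncertainty = probs.std(dim=0).mean()  # spread across passes
    return mean_probs, uncertainty

model = TerrainNet()
features = torch.randn(1, 128)  # placeholder for extracted image features
mean_probs, uncertainty = mc_dropout_predict(model, features)
print(mean_probs, uncertainty)
```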

A New AI Training Model 

To train the AI system, the researchers had able-bodied individuals wear the cameras while walking through a variety of indoor and outdoor environments. A proof-of-concept evaluation was then carried out by having an individual with a lower-limb amputation wear the cameras while walking through the same environments.

“We found that the model can be appropriately transferred so the system can operate with subjects from different populations,” Lobaton said. “That means that the AI worked well even though it was trained by one group of people and used by somebody different.”

The team plans to make the system more efficient so that it requires less data input and less processing power; however, the system has yet to be tested in a robotic device.