News

Improving Big Data Processing with Magnetic Circuits

April 26, 2020 by Luke James

Researchers at the University of Texas’ Cockrell School of Engineering claim to have found a way to make the next generation of smart devices and computers, which will rely on big data processing, more efficient.

Our growing demand for increasingly smart technologies has driven a sharp rise in the energy needed to process the vast amounts of data generated by electronic devices.

However, Texas researchers claim to have found a way to make “smart computers” more energy efficient by using magnetic components, rather than silicon chips, as the building blocks of computers and electronic devices.

The team’s research, which was published in the IOP journal Nanotechnology, lays out how the physics of magnetic components can reduce the energy consumption and training requirements of the algorithms needed for big data processing.

Training these algorithms is extremely energy-intensive, but the Cockrell team claims that its work can “help reduce the training effort and energy costs” associated with the process.

 

Reducing Energy Consumption

The research findings describe how Jean Anne Incorvia, an assistant professor in the Cockrell School’s Department of Electrical and Computer Engineering, worked with second-year graduate student Can Cui to discover that spacing nanowires in certain ways naturally increases the ability of artificial neurons to compete against one another, with the most activated ones coming out on top.

This effect is known as “lateral inhibition” and traditionally requires extra circuitry within computers, but here it was achieved simply through the spacing of the nanowires acting as artificial neurons. Incorvia claims that this method can reduce energy use by a factor of 20 to 30 compared with a standard back-propagation algorithm performing the same machine learning tasks.

 

A diagram showing the interaction between input and output neurons.

Diagram provided by the University of Texas at Austin researchers showing their manipulation of neurons to maximize lateral inhibition. Image used courtesy of the University of Texas at Austin
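
As a rough, purely illustrative picture of this behavior, the short Python sketch below treats lateral inhibition as a winner-take-all step in software: each artificial neuron’s activation is suppressed by its neighbors, and the suppression weakens as the spacing between neurons grows. The function name, positions, activations, and decay constant are all assumptions made for the example; the sketch does not model the magnetic device physics reported in the paper.

```python
import numpy as np

# Illustrative only: a software analogue of lateral inhibition among
# artificial neurons laid out along a line. Positions, activations, and
# the decay constant are made-up values for this example; in the actual
# research the competition arises from magnetic coupling between
# nanowires, not from an explicit subtraction like this.

def lateral_inhibition(activations, positions, strength=0.5, decay=1.0):
    """Suppress each neuron's activation by its neighbors' activations.

    Inhibition from a neighbor falls off exponentially with the distance
    between the two neurons, so closer spacing means stronger competition.
    """
    activations = np.asarray(activations, dtype=float)
    positions = np.asarray(positions, dtype=float)

    inhibited = activations.copy()
    for i in range(len(activations)):
        for j in range(len(activations)):
            if i == j:
                continue
            distance = abs(positions[i] - positions[j])
            inhibited[i] -= strength * activations[j] * np.exp(-distance / decay)
    return np.clip(inhibited, 0.0, None)  # activations cannot go negative

# Example: three neurons, the middle one most strongly activated.
acts = [0.6, 0.9, 0.5]          # initial activations (arbitrary values)
pos_close = [0.0, 1.0, 2.0]     # closely spaced neurons -> strong competition
pos_far = [0.0, 5.0, 10.0]      # widely spaced neurons -> weak competition

print(lateral_inhibition(acts, pos_close))  # winner stands out; neighbors suppressed
print(lateral_inhibition(acts, pos_far))    # little change; neurons barely interact
```

With closely spaced neurons, the most activated one stands out more clearly after inhibition; with widely spaced neurons, the activations barely change, mirroring the idea that spacing controls how strongly the neurons compete.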

 

Applying the Findings to Larger Sets of Multiple Neurons

In the research paper, Incorvia goes on to explain that the way computers operate is “fundamentally changing”. One of the more promising trends is neuromorphic computing, an area of research focused on designing computers that can think like the human brain.

Instead of processing tasks one at a time, such computers can analyze huge data sets simultaneously, and some believe this approach holds the key to major advances in artificial intelligence and machine learning.

Lateral inhibition, the capacity of an excited neuron to reduce the activity of its neighbors, is an important functionality in neuromorphic computing. In human neurobiology, it prevents action potentials from spreading from excited neurons to neighboring neurons in the lateral direction.

 

Advancing the Research 

In conventional neuromorphic hardware platforms, lateral inhibition is achieved with external circuitry, which decreases the energy efficiency and increases the footprint of such systems.

These are the drawbacks Incorvia’s team hopes to address by maximizing lateral inhibition in paired magnetic domain wall racetracks, tuning the magnetic interaction between a pair of adjacent domain wall-magnetic tunnel junction (DW-MTJ) neurons. The next step in this research involves applying the findings to larger sets of multiple neurons.