News

French Researchers Exploit Non-Idealities of Memristors to Bring AI to the Edge

January 23, 2021 by Jake Hertz

In keeping with the trend of departing from von Neumann architectures, French researchers have turned the non-ideal properties of memristors to their advantage to bring Bayesian networks to the edge.

As the academic community and semiconductor developers have drilled down on low-power AI, many have moved away from the von Neumann architecture. The reason is that neural networks running on von Neumann devices require large amounts of data to be moved in and out of memory, which is highly energy inefficient.

 


Data movement energy has become the single biggest contributor to overall chip energy consumption. Image used courtesy of Feng Shi et al. 
 

An alternative approach growing in popularity is in-memory computing schemes, where, as the name suggests, computation occurs where data is stored. One technology that is being widely studied for its applications in this field is resistive RAM (RRAM), also referred to as memristors. Yet, like all technologies in their infancies, memristors have many non-ideal properties that have hindered their use in commercial applications. 

Now, researchers from CEA-Leti in France have discovered a technique to use RRAM non-idealities to their advantage for low-power AI use cases. 

 

What Are the Strengths of RRAM? 

A resistive RAM is a device whose resistance can be set by applying a control signal in the form of an external voltage or current. The device generally consists of two metal electrodes sandwiching a resistive oxide layer.

 


RRAM working principle. Image used courtesy of Meena et al.
 

There are many reasons why RRAMs are sought after in the world of edge computing. For starters, they are a non-volatile form of memory. Among non-volatile memory technologies, they offer extremely high switching speeds—higher than NAND flash, in fact—with timescales as short as 10 ns. Further, they can be as small as a few nanometers, meaning that they can offer extremely high densities. 

Finally, they are widely used for machine learning because they draw significantly less power than NAND flash. Since they are resistance-based devices, they can perform dot-product or multiply-and-accumulate (MAC) operations using nothing more than Ohm’s law and Kirchhoff’s current law. Both of these operations are fundamental in machine learning.
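To make the idea concrete, here is a minimal Python sketch of the dot product a memristor crossbar performs: each weight is assumed to be stored as a conductance, the input arrives as read voltages, Ohm’s law handles the multiplications, and Kirchhoff’s current law sums the resulting currents on each column. All values and names here are illustrative, not taken from any specific device.

```python
import numpy as np

# Illustrative crossbar dot product: weights stored as conductances (siemens),
# inputs applied as read voltages (volts). Ohm's law gives each device current
# (I = G * V); Kirchhoff's current law sums the currents along each column.
conductances = np.array([[1e-6, 5e-6, 2e-6],   # column 1 of a tiny crossbar
                         [3e-6, 1e-6, 4e-6]])  # column 2
input_voltages = np.array([0.2, 0.5, 0.1])     # read voltages on the rows

column_currents = conductances @ input_voltages  # multiply-and-accumulate
print(column_currents)  # [2.9e-06 1.5e-06] amperes
```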

 

Non-Idealities of RRAM

Despite RRAM's useful attributes, the device does face non-idealities that have impeded its widespread adoption in industry.

One major issue is the “cycle-to-cycle conductance variability” of memristors. When appropriate voltages are applied to memristors, they form conductive paths called filaments, which are the result of metal migration and physical defects. This is the “on” state of a memristor. This state can be reversed with a different external voltage to form an “off” state.

 


Conductance variability can be treated as a Gaussian random variable. Image used courtesy of Dalgaty et al.
 

The issue is that the formation of these filaments cannot be controlled perfectly and precisely. Manufacturing variations and other random sources of error create cycle-to-cycle conductance variability. This randomness in the operation of memristors hinders accurate calculations and often requires significant resources to mitigate.
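As the figure above suggests, this variability is often modeled as a Gaussian random variable centered on the target conductance. Below is a minimal sketch of that model; the target value and spread are assumptions chosen for illustration, not figures from the research.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed model: each programming cycle lands near the target conductance,
# with a Gaussian cycle-to-cycle spread (numbers are purely illustrative).
target_conductance = 50e-6   # intended conductance in siemens
relative_spread = 0.05       # assumed 5% standard deviation

# Ten successive SET operations on the same device yield ten different values.
programmed_values = rng.normal(loc=target_conductance,
                               scale=relative_spread * target_conductance,
                               size=10)
print(programmed_values)
```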

 

CEA-Leti Focuses on In-Situ Learning

Now, researchers at the French research institute CEA-Leti have come up with a technique that exploits this randomness in their favor to achieve low-power AI processing.

In their article published in Nature Electronics, the researchers describe their technique, which consists of a “Markov Chain Monte Carlo (MCMC) sampling learning algorithm in a fabricated chip that acts as a Bayesian machine-learning model.” They say their work allows in-situ learning by applying nanosecond voltage pulses to these nanoscale memory devices, effectively creating a system that is small in size and low in power.
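To give a flavor of how an MCMC learning scheme uses randomness, here is a minimal software sketch of Metropolis-Hastings sampling for a toy Bayesian logistic-regression problem. In the CEA-Leti chip, the proposal randomness comes “for free” from the memristors’ cycle-to-cycle conductance variability; in this sketch a software Gaussian perturbation stands in for it, and the model, data, and step size are illustrative assumptions rather than details from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def log_posterior(w, x, y):
    # Log-posterior of a toy Bayesian logistic-regression model (illustrative).
    logits = x @ w
    log_likelihood = np.sum(y * logits - np.log1p(np.exp(logits)))
    log_prior = -0.5 * np.sum(w ** 2)  # standard normal prior on the weights
    return log_likelihood + log_prior

# Toy dataset standing in for an edge-learning task (not from the paper).
x = rng.normal(size=(40, 2))
y = (x[:, 0] + 0.5 * x[:, 1] + 0.2 * rng.normal(size=40) > 0).astype(float)

w = np.zeros(2)
samples = []
for _ in range(2000):
    # On the chip, this perturbation would come from the random conductance
    # obtained when a memristor is reprogrammed; here a Gaussian stands in.
    proposal = w + 0.2 * rng.normal(size=2)
    # Metropolis acceptance test: accept with probability
    # min(1, posterior(proposal) / posterior(current)).
    if np.log(rng.uniform()) < log_posterior(proposal, x, y) - log_posterior(w, x, y):
        w = proposal
    samples.append(w.copy())

print(np.mean(samples[500:], axis=0))  # posterior-mean estimate of the weights
```

The point of the sketch is only the structure of the loop: propose, evaluate, accept or reject. In the hardware approach, the costly random-number generation that a CMOS implementation would need is replaced by the devices’ intrinsic stochasticity.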

 


The researchers concluded that Bayesian ML can be an alternative modeling method to advance edge learning. Image used courtesy of CEA-Leti
 

The researchers are confident that their low-power approach is a good fit for edge computing. In fact, compared to a CMOS implementation of their algorithm, their approach consumed five orders of magnitude less energy.

 

A Stride to Simplify Edge Computing Designs

With edge computing becoming an increasingly in-demand capability in IoT devices, this news may spark further innovation for the designers tasked with such projects. CEA-Leti reports that its ability to turn the memristor's negatives into positives has made edge AI more feasible than before.