Interface-type Memristive Device Pushes Neuromorphic Computing Onward
A new interface-type memristive device may be used to build artificial synapses for the next wave of neuromorphic computing.
Researchers from the Center for Integrated Nanotechnologies (CINT) at Los Alamos National Laboratory recently published details on a neuromorphic computing device that mimics the behavior of a neural synapse. The result? A simulated network that achieved 94.72% accuracy in recognizing handwritten digits.
The device is an interface-type memristor built from a gold/niobium-doped strontium titanate Schottky junction (Au/Nb:STO). The device's analog resistance is controlled at the memristor's interface: applying voltages of varying polarity and amplitude modifies the Schottky barrier parameters, which in turn changes the device's conductance.
Image used courtesy of the Center of Integrated Nanotechnologies
Memristors are a promising technology in neuromorphic computing since they can be programmed and "remember" their state even when powered off. This mimics "synaptic plasticity," an important foundation of memory and learning in the brain: synapses strengthen or weaken based on their activity, a process controlled by the neurotransmitter receptors at the synapse.
In addition to synaptic plasticity, the researchers' prototype can also mimic other synaptic functions such as paired-pulse facilitation, short-term potentiation and depression, long-term potentiation and depression, and spike-timing-dependent plasticity. The Los Alamos team posits that their new device may sidestep the traditional challenges of the von Neumann bottleneck.
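To make one of these plasticity rules concrete, the sketch below implements a textbook spike-timing-dependent plasticity (STDP) update, where the sign and size of a synaptic weight change depend on the timing difference between pre- and post-synaptic spikes. The constants and function are illustrative assumptions for clarity, not the device model from the paper.

```python
import math

# Hypothetical STDP constants (illustrative, not from the CINT paper):
# A_PLUS/A_MINUS scale potentiation/depression, TAU is the decay window in ms.
A_PLUS, A_MINUS, TAU = 0.1, 0.12, 20.0

def stdp_delta_w(dt_ms: float) -> float:
    """Weight change for spike-time difference dt = t_post - t_pre (ms)."""
    if dt_ms > 0:
        # Pre-synaptic spike arrives before the post-synaptic spike:
        # the synapse strengthens (potentiation), decaying with |dt|.
        return A_PLUS * math.exp(-dt_ms / TAU)
    elif dt_ms < 0:
        # Post fires before pre: the synapse weakens (depression).
        return -A_MINUS * math.exp(dt_ms / TAU)
    return 0.0
```

The closer the two spikes are in time, the larger the weight change, which is how timing correlations get encoded into synaptic strength.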
Solving the von Neumann Bottleneck
The von Neumann bottleneck describes a problem in classic computer architecture where processing and memory are separate. To get information to a computer's central processing unit (CPU) or graphics processing unit (GPU), data must be read from memory and then transferred via a data bus.
The bottleneck occurs during this data transfer. Researchers have expended significant effort over the years to minimize it, using strategies such as prefetching, speculative execution, and caching. However, data rates are still constrained to some degree, which can be a challenge when large datasets such as images or video need to be transferred and processed.
This data transfer consumes significant energy; in a world where data centers are used for applications such as machine learning, energy use is also a growing concern for both costs and environmental impact.
Memristor devices such as the one devised by CINT have the potential to perform both processing and data storage in the same physical device, which not only overcomes the data-transfer bottleneck but can also reduce energy consumption.
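The idea of computing where the data lives can be sketched with the standard memristor-crossbar picture (an assumption-level illustration, not the CINT device model): each cross-point stores a conductance, and applying input voltages along the rows produces output currents along the columns via Ohm's and Kirchhoff's laws. The stored conductance matrix is the memory, and reading the currents performs a matrix-vector multiply in place.

```python
import numpy as np

# Conductances stored at the cross-points (arbitrary units, 3 rows x 2 columns).
# In a physical crossbar these would be the programmed memristor states.
G = np.array([[1.0, 0.5],
              [0.2, 0.8],
              [0.3, 0.1]])

# Input voltages applied to the three rows.
V = np.array([0.1, 0.2, 0.3])

# Each column current is the sum of (conductance x voltage) down that column,
# i.e. I = G^T @ V -- a matrix-vector product computed "in memory".
I = G.T @ V
```

No data ever moves between a separate memory and processor; the multiply happens where the weights are stored, which is the property that sidesteps the von Neumann bottleneck.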
Significance of MNIST Results
The Modified National Institute of Standards and Technology (MNIST) dataset of handwritten digits is often used as a benchmark for machine learning and image classification performance. The dataset contains a collection of 28 x 28 pixel grayscale images of handwritten digits from 0 to 9.
Performance results of the interface-type memristor device. Image used courtesy of the Center of Integrated Nanotechnologies
The CINT team used a crossbar simulator to build a three-layer neural network and trained it with backpropagation over 25 epochs using the device's long-term potentiation and long-term depression characteristics. The simulation achieved 94.72% prediction accuracy. The researchers state that this is superior to other candidate memristor architectures, such as conductive-filament-type memristors.
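The kind of three-layer network trained here can be sketched with plain backpropagation, as below. Only the 28 x 28 input size and the 25 training epochs come from the article; the layer widths, learning rate, and synthetic stand-in data are illustrative assumptions, and a real run would use the actual MNIST images and the simulated device conductances as weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# Three layers: 784 inputs (28 x 28 pixels), a hidden layer, 10 digit classes.
# The hidden width and learning rate are arbitrary choices for this sketch.
n_in, n_hidden, n_out = 28 * 28, 64, 10
W1 = rng.normal(0.0, 0.1, (n_in, n_hidden))
W2 = rng.normal(0.0, 0.1, (n_hidden, n_out))

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def forward(X):
    H = np.tanh(X @ W1)        # hidden-layer activations
    return H, softmax(H @ W2)  # output class probabilities

def cross_entropy(P, Y):
    return -np.mean(np.sum(Y * np.log(P + 1e-12), axis=1))

# Synthetic stand-in for MNIST: random "images" with random labels.
X = rng.random((256, n_in))
y = rng.integers(0, n_out, size=256)
Y = np.eye(n_out)[y]  # one-hot targets

lr = 0.1
_, P = forward(X)
initial_loss = cross_entropy(P, Y)

for epoch in range(25):               # 25 epochs, as reported in the article
    H, P = forward(X)
    dZ2 = (P - Y) / len(X)            # gradient of softmax + cross-entropy
    dW2 = H.T @ dZ2
    dH = (dZ2 @ W2.T) * (1.0 - H**2)  # backpropagate through tanh
    dW1 = X.T @ dH
    W1 -= lr * dW1                    # full-batch gradient descent step
    W2 -= lr * dW2

_, P = forward(X)
final_loss = cross_entropy(P, Y)
```

In a device-aware simulation, the weight updates computed here would additionally be quantized to the discrete conductance steps the memristor's potentiation and depression curves allow.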
Today's best MNIST benchmarks reach roughly 99.8% accuracy. However, the CINT result of 94.72% is still notable, considering that the device is in the early phases of research.