Prototype Memristor Network Uses Visual Processing Inspired by Mammalian Brains
This News Brief covers a new memristor prototype, created at the University of Michigan, capable of processing vast amounts of visual data — and arriving just in time for a major wave of visual data processing needs.
University of Michigan researchers, led by ECE Professor Wei Lu, have created a successful prototype of a new kind of memristor designed to process massive amounts of visual data.
Visual data processing is a hot topic. Cameras are becoming ubiquitous in drones, ad beacons, and self-driving vehicles, feeding massive amounts of information to processors for object detection, facial recognition, and more. The ability to process large amounts of data more quickly will mean better data gathering and, ultimately, smarter AI.
The memristor chip. Image courtesy of Michigan Engineering.
The Power of Pattern Recognition
The new memristor relies on pattern recognition, inspired by the way mammals see.
An important application of pattern recognition for humans is facial recognition, the ability to tell one person from another. Another is the ability to pick out a predator hidden in foliage by identifying a break in the pattern of the environment.
Processing all of the visual data the human brain receives through the eyes would be energy-consuming and time-intensive. Instead of processing each piece of visual data, the brain uses pattern recognition to quickly and efficiently identify shapes that correspond to pertinent objects and movements.
This rapid-fire processing is called "sparse coding". It relies on the ability to access information from previous experiences to form patterns and recognize deviations from them.
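The core idea behind sparse coding can be illustrated with a short sketch: represent an input as a combination of just a few "atoms" from a stored dictionary of patterns, rather than processing every raw value. The dictionary, signal, and matching-pursuit algorithm below are illustrative assumptions for the sketch, not details of the Michigan team's hardware.

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_iters):
    """Approximate `signal` as a sparse combination of dictionary columns."""
    residual = signal.copy()
    coeffs = np.zeros(dictionary.shape[1])
    for _ in range(n_iters):
        # Pick the dictionary atom most correlated with the residual.
        correlations = dictionary.T @ residual
        best = np.argmax(np.abs(correlations))
        coeffs[best] += correlations[best]
        residual -= correlations[best] * dictionary[:, best]
    return coeffs

rng = np.random.default_rng(0)

# 64-dimensional "image patches" and 128 learned pattern atoms (unit norm).
D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)

# Build a signal from just 3 atoms, then recover a sparse code for it.
x = D[:, [5, 40, 90]] @ np.array([1.5, -2.0, 0.8])
code = matching_pursuit(x, D, n_iters=10)

# Only a handful of coefficients are active, yet they reconstruct
# the signal far better than no code at all.
print(np.count_nonzero(code))
print(np.linalg.norm(x - D @ code))
```

The payoff is the same one the brain gets: instead of storing or transmitting 64 raw values, the system keeps a few coefficients referencing patterns it has already learned.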
An example of how an algorithm picks out faces. Image courtesy of Inseong Kim, Joon Hyung Shim, and Jinmkyu Yang via Stanford University.
The time and energy constraints of the brain are analogous to those computers face when processing visual data. Mass data collection has given rise, quite naturally, to mass data processing, and accomplishing that requires sophisticated neural networks. As Professor Lu puts it: "The tasks we ask of today's computers have grown in complexity. In this 'big data' era, computers require costly, constant and slow communications between their processor and memory to retrieve large amounts of data. This makes them large, expensive, and power-hungry."
In essence, neural networks need to utilize the same shortcuts that human brains do in order to be efficient. Lu and his team have used this inspiration to create memristors that are capable of drawing on a set of image patterns as a reference point for recognizing new images and patterns.
Using this technique, their memristor network has been able to recreate images and test patterns in a lab setting. Scaling up this ability could allow for a memristor-powered system capable of processing visual data the same way that humans do in real-world situations.
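At a very high level, recognizing a new input against a bank of stored reference patterns can be sketched as a correlation test. The pattern names, noise level, and cosine-similarity scoring below are illustrative assumptions, not the network's actual mechanism.

```python
import numpy as np

rng = np.random.default_rng(1)

# A toy bank of stored reference patterns (stand-ins for learned images).
patterns = {name: rng.standard_normal(64) for name in ("face", "tree", "car")}

def recognize(observation, references):
    """Return the name of the stored pattern best correlated with the input."""
    obs_norm = np.linalg.norm(observation)
    scores = {
        name: float(ref @ observation) / (np.linalg.norm(ref) * obs_norm)
        for name, ref in references.items()
    }
    return max(scores, key=scores.get)

# Present a noisy version of the "tree" pattern; the closest stored
# reference wins, without re-processing the raw input from scratch.
noisy_input = patterns["tree"] + 0.3 * rng.standard_normal(64)
print(recognize(noisy_input, patterns))
```

Scaling that idea from three toy vectors to real-world imagery is precisely the step the article describes the team working toward.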
Lu and the chip. Image courtesy of Michigan Engineering.
Applications and the Future of AI
Faster processing of visual data is crucial to the future of many technologies. Autonomous vehicles would be better able to react to external factors if the car could draw on experience the way human drivers can. The applications of facial recognition programs are diverse, from security to tagging Facebook users in uploaded photos.
Conserving resources by narrowing a computing system's focus through pattern recognition is also hugely important for the development of AI. Take, for example, game-playing AI systems that can "learn" from experience rather than process each of the mind-bending number of possible moves in a game of Go. This reference to past experiences is another form of pattern recognition that will help shape the future of AI.
According to the U of M team, memristors may be key in allowing for deep neural networks that are capable of learning processes they aren't programmed to execute—one of the major milestones of AI development.
Read more about the project, dubbed "Sparse Adaptive Local Learning for Sensing and Analytics", here.