Intel’s Loihi chip applies the principles found in biological brains to computer architectures. With Pohoiki Beach, researchers now have a powerful platform from which to explore specialized applications like sparse coding, graph search, and constraint-satisfaction problems.
Intel Corporation’s self-learning neuromorphic research chip, code-named Loihi. Image from Intel Corporation
At DARPA’s Electronics Resurgence Initiative (ERI) Summit, taking place July 15 through the 27th, Rich Uhlig, managing director of Intel Labs, will unveil Pohoiki Beach. He will also recommit to Intel’s promise to deliver a 100-million-neuron system later this year.
According to Uhlig, “We are impressed with the early results demonstrated as we scale Loihi to create more powerful neuromorphic systems. Pohoiki Beach will now be available to more than 60 ecosystem partners, who will use this specialized system to solve complex, compute-intensive problems.”
What Is Project Loihi?
In Intel’s 2017 article on Loihi, Dr. Michael Mayberry described the then-new Loihi test chip as a device that mimics the brain’s basic mechanics. Just as brains process information and communicate via “spikes,” so does Loihi.
This is a radically different approach from the clock-pulse-mediated processing of conventional CPUs. And like the biological brain, Loihi consumes little power when it isn’t “spiking.” These factors, according to Intel, allow Loihi to process information up to three orders of magnitude faster and with up to 10,000 times greater efficiency.
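To make the contrast concrete, here is a minimal sketch of a leaky integrate-and-fire (LIF) neuron, the textbook model behind spiking computation. This is an illustration of the general principle only, not Intel's actual Loihi neuron model or API; the function name and parameters are chosen for this example.

```python
def simulate_lif(input_current, threshold=1.0, leak=0.9, reset=0.0):
    """Toy leaky integrate-and-fire neuron.

    Integrates input over discrete time steps and emits a spike (1)
    whenever the membrane potential crosses the threshold, then resets.
    Between spikes the potential decays, so a neuron receiving no input
    stays silent -- the event-driven behavior described above.
    """
    potential = 0.0
    spikes = []
    for current in input_current:
        potential = leak * potential + current  # leaky integration
        if potential >= threshold:
            spikes.append(1)      # fire: communicate an event
            potential = reset     # reset after the spike
        else:
            spikes.append(0)      # quiet: no event, (ideally) no work
    return spikes

# A steady small input accumulates until the neuron fires periodically.
print(simulate_lif([0.3] * 10))  # → [0, 0, 0, 1, 0, 0, 0, 1, 0, 0]
```

The key property is that output is sparse: information is carried by the timing of discrete events rather than by values recomputed on every clock tick, which is where the potential power savings come from.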
The Impact of Loihi-Based Systems
Loihi-based systems represent a new type of specialized architecture with huge potential for many kinds of neural-inspired algorithmic research. For these applications, tremendous increases in speed and efficiency can be expected. The applications include:
- Autonomous vehicles
Chris Eliasmith, co-CEO of Applied Brain Research and professor at the University of Waterloo, reports that running a deep learning benchmark on the Loihi chip yields power savings of better than 100 to 1. After scaling the network up by 50 times, he says, “Loihi maintains real-time performance results and uses only 30 percent more power.”
Professor Konstantinos Michmizos of Rutgers University reported that Loihi enabled his lab to “realize a spiking neural network that imitates the brain’s underlying neural representations and behavior.” When benchmarked, the project’s SLAM (simultaneous localization and mapping) solution was found to consume a hundred times less power than more conventional, non-Loihi-based solutions.
At the Telluride Neuromorphic Cognition Engineering Workshop, also taking place this week, investigators are employing Loihi-based systems to tackle challenges at the forefront of neuromorphic engineering. These include:
- The AMPRO prosthetic leg
- Neuromorphic sensing
- Inferring tactile input to the electronic skin of an iCub robot
The Progress of Loihi: Kapoho Bay
Loihi was introduced by Intel in 2017, and in 2018 the Intel Neuromorphic Research Community (INRC) was established with the goal of furthering the development of neuromorphic algorithms, software, and applications.
The INRC now offers Kapoho Bay, a USB form-factor Loihi system, with the goal of encouraging further development based on neuromorphic technologies.
The Kapoho Bay USB. Image from Intel Corporation
The next step from Intel, expected later this year, will be the introduction of Pohoiki Springs. This new system, building on what Pohoiki Beach has already established, will deliver an even greater level of performance and efficiency.
Other Neuromorphic Efforts
Neuromorphic computing is still a very new field, but Intel is not entirely alone in its efforts. IBM’s TrueNorth chips are the basis of large, scalable neuromorphic systems; systems with 16 million neurons and four billion synapses have already been built.
SpiNNaker, meanwhile, is not a single chip ready for commercialization. As reported in November, SpiNNaker is a million-core, massively parallel supercomputer designed to simulate up to a billion neurons.