Semiconductor producer SK Hynix has announced that it is partnering with Stanford University to research and develop a semiconductor device that resembles the human brain. How will this affect computing in the future, and what can we expect from such devices?

Computers, Chess, and a False Victory

Modern computers are remarkable devices that house many billions of transistors and perform a phenomenal amount of data processing. For example, scientists have used computers to simulate an entire organism described by 1,900 observable parameters and 525 genes. Other computing landmarks include voice recognition, pattern finding, and even election prediction.

However, computers can only do what they are told; at bottom, they merely execute comparisons, conditional statements, and loops. Because of this, computers are not actually intelligent; they approach problems with brute force, algorithms, sequential processing, and pattern matching.

Humans have famously competed against computers in contests of intelligence. The classic illustration is world chess champion Garry Kasparov's defeat by IBM's Deep Blue chess computer. The computer won not because of intelligence, intuition, or skill, but because it could evaluate many thousands of moves and combinations in the blink of an eye.

So the question remains—did the computer really defeat the chess player? Did the computer have an unfair advantage from processing power? Did the computer even know it was playing chess?

 

A computer can play chess well but it does not know it's playing chess. Image courtesy of MichaelMaggs (own work) [CC BY-SA 3.0]

 

During those games, Kasparov's brain was solving many problems and executing many tasks simultaneously, and 99% of that processing had nothing to do with chess. Most of Kasparov's brain was monitoring his oxygen levels, checking his heart rate, adjusting his body temperature, watching for danger, and so on. Yet with all of this going on, he still managed to win two games against Deep Blue and, instead of calculating every possible move, he used his intelligence to play.

So if a computer can beat a person at chess and not be considered intelligent we have to ask—what is intelligence? What separates a computer from a human?

 

Not-So-Artificial Intelligence

It is not uncommon for researchers and companies to announce that they have developed some form of artificial intelligence in fields like voice recognition or game playing. One recent example is IBM's Watson computer, which is frequently described as having AI. However, this is not the case at all, especially once you see how it actually works.

During a game of Jeopardy!, Watson answered many questions more consistently and more quickly than its human competitors. But, as it turns out, the system had local access to the entire contents of Wikipedia. Imagine a human contestant with access to the same information. That would be considered cheating!

 

Shame on you, Watson, for having all the answers stored! Image courtesy of Clockready (own work) [CC BY-SA 3.0]

 

When a human approaches a problem, they do not need vast stores of information to figure out a solution. This is one of the requirements for something to be considered truly intelligent: the ability to extrapolate from incomplete information to find a solution without performing loads of comparisons.

A classic example is in chess, where a human understands that leaving the king exposed or moving the queen out too early in the game is unwise. A computer, however, will consider those moves because it cannot distinguish sensible moves from senseless ones. Instead, it will simulate some number of candidate moves, score each one by how likely it is to lead to a win, and then play the highest-scoring move.
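To make the brute-force approach concrete, here is a minimal minimax sketch (not Deep Blue's actual algorithm, which was far more sophisticated) applied to a toy take-away game: players alternately remove one or two counters, and whoever removes the last counter wins. The engine "plays" purely by scoring every line of play to the end; no understanding of the game is involved.

```python
def minimax(counters, maximizing):
    """Score a position from the first player's perspective: +1 if the
    first player can force a win from here, -1 otherwise."""
    if counters == 0:
        # The side that just moved took the last counter and won.
        return -1 if maximizing else +1
    scores = [minimax(counters - take, not maximizing)
              for take in (1, 2) if take <= counters]
    return max(scores) if maximizing else min(scores)

def best_move(counters):
    """Pick the take (1 or 2) with the best exhaustive score."""
    return max((take for take in (1, 2) if take <= counters),
               key=lambda take: minimax(counters - take, maximizing=False))
```

From 4 counters the engine takes 1 (leaving the opponent a losing pile of 3), and from 5 it takes 2, not because it "understands" the game, but because it has checked every possible continuation.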

So how can we make a computer truly intelligent? How can we create the first real AI? South Korean chipmaker SK Hynix, partnered with Stanford University, may have the solution.

 

A Computer Like A Brain!

The joint plan from SK Hynix and Stanford involves creating a semiconductor device modeled on the brain; that is, a computer with the ability to forge connections between small processing units. While little information is available about how SK Hynix intends to achieve this goal, there are a few clues that allow for some speculation.

On its website, SK Hynix mentions the use of a ferroelectric material, a material whose polarization can be altered by an external electric field. This polarization is retained even when the external field is removed, which enables the permanent storage of states (similar to flash memory).

However, the company also mentions that the material's polarization can be partially altered by adjusting the applied voltage, which could indicate the use of analog computation. This would mirror how connections between neurons are strengthened as an experience is repeated; the repetition and the resulting stronger connection make information easier to recall.

 

A computer built like a brain would revolutionize the world. Image courtesy of Computer World
 

SK Hynix also used the word “neuromorphic” to describe the intended device, which suggests the use of components such as memristors. A memristor is an electronic device whose resistance depends on the current that has previously flowed through it. Such a device could emulate the strengthening of connections between processing units which, in turn, could also be used to store information (in a similar fashion to how a brain retains information).
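A toy simulation makes the memristor idea above concrete. This sketch uses a simplified linear-drift model; the resistance and mobility values are invented for illustration only and do not describe any real device or SK Hynix's actual design.

```python
# Simplified linear-drift memristor model: an internal state w in [0, 1]
# tracks the charge that has flowed through the device, and the effective
# resistance interpolates between a low and a high value.

R_ON, R_OFF = 100.0, 16_000.0   # fully "doped" / "undoped" resistance (ohms), illustrative
K = 10_000.0                    # state-change constant (1/coulomb), illustrative

class Memristor:
    def __init__(self):
        self.w = 0.01  # internal state; higher w means lower resistance

    @property
    def resistance(self):
        return R_ON * self.w + R_OFF * (1.0 - self.w)

    def apply_current(self, current, dt):
        """Passing charge shifts the internal state: the device
        'remembers' how much current has gone through it."""
        self.w = min(max(self.w + K * current * dt, 0.0), 1.0)

m = Memristor()
r_before = m.resistance
for _ in range(100):             # repeated use strengthens the "connection"
    m.apply_current(1e-4, 1e-3)  # 0.1 mA pulses, 1 ms each
r_after = m.resistance
```

After the pulse train the resistance has dropped, and that lower resistance persists with no power applied, which is exactly the synapse-like behavior the article describes.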

Instead of storing bytes of information in processing units, the repeated use of a connection could itself represent information: a processing unit could measure how strong that connection is and act accordingly.

 

Real AI

Devices that behave like brains may enable the creation of true AI, where computers wouldn't need to run code and process data to determine their next move in a chess game, and wouldn't need an offline copy of Wikipedia to answer a question.

Instead, they could look at the chess board, unconsciously ignore silly moves, and try to determine a good move based on previous experiences of the game. When they are asked a question about something they do not have immediate information on, instead of going to Wikipedia, they could answer based on information they already possess and attempt to formulate an answer.

Another application of such AI would be human/AI interaction. Instead of building a voice-recognition system by hand-coding words and storing large amounts of recorded speech, you would simply talk to the computer over time, strengthening connections between its processing units. Through these conversations, it would eventually understand what you are saying even if your pitch changes or your speech is slurred.
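The repetition-driven strengthening described above is often modeled with a Hebbian learning rule ("units that fire together wire together"). Here is a minimal sketch; the input pattern, learning rate, and unit counts are all invented for illustration.

```python
# Hebbian-style update: each connection weight grows in proportion to
# how often its input unit and the output unit are active together.

LEARNING_RATE = 0.1

def hebbian_update(weights, inputs, output):
    """Strengthen each connection in proportion to co-activation."""
    return [w + LEARNING_RATE * x * output for w, x in zip(weights, inputs)]

# Three input "units"; the first two repeatedly fire together with the
# output unit (like hearing the same word across many conversations).
weights = [0.0, 0.0, 0.0]
pattern = [1.0, 1.0, 0.0]
for _ in range(20):        # repeated exposure
    weights = hebbian_update(weights, pattern, output=1.0)
```

After repeated exposure, the connections carrying the recurring pattern are strong while the unused connection stays at zero, so the association is stored in connection strengths rather than in stored bytes.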

It might even be able to tell what you want it to do just from the emotion in your voice. Someday, if you have a rough day and an AI asks you “How was your day?”, you could respond in a sarcastic tone, “It was fantastic!”, and the computer could understand that sarcasm is being conveyed.

 

The Future Of AI and Brain-Based Computers

With processing power approaching its limits and silicon real estate becoming ever more precious, it won’t be long before different computational methods are needed to meet growing consumer demand. Arguably the most capable processing unit currently in existence is the human brain, so a computer that could think like us could lead to incredible advancements in technology.

Of course, such development may lead to computers having a sense of self-awareness. This development would raise many questions about our day-to-day lives, as well as many moral quandaries. But among the most pressing of questions would certainly be “Do Computers Dream of Electric Sheep?”

 

Comments

1 Comment


  • sjgallagher2 2016-11-03

    I’m not sure of a few things presented in this article. For example, ‘did the computer know it was playing chess’. People take the model of our world for granted when considering “intelligence” though it is the most important and most life-like characteristic. The computer didn’t know it was playing chess, because it had no sensory connection to the outside world, no model to use to make sense of things.

    Deep Blue is a poor example because it could not develop a model of the world even if given information from many different sensors etc. But the point remains: whether or not the computer knew it was playing a game is not a measure of intelligence, it is a measure of the computer’s ability to experience things in a general sense, and then categorize them, and so on.

    Intelligence is also not the ability to fill in knowledge that doesn’t exist. The claim that Kasparov used ‘intelligence’ to play is not quite true. He used experience, built on by many, many people. People have focus, and can play game after game, trying things until they find something that works. The computer’s inability to focus, to cross off options that are trivial, and so on, hindered its ability to play with ingenuity.

    Not to say that any of that is definitive. I have no idea how Kasparov was able to beat a computer that checks every possible move - chess is a sequential game. But the article does a poor job of examining the problem of intelligence and the problems with previous attempts, and then goes on to show an interesting technology. I love the idea of implementing brain-like hardware instead of just software, but it’s simply hard to examine what such a device would do differently, or better, if you don’t know the state of artificial intelligence.

    Algorithms and hardware based around the brain have been around for decades, and they continue to improve. That is not the fundamentally brilliant property of this new technology. So don’t focus on that, focus on the parts that matter.

    • marcusob 2016-11-15

      Deep Blue is not a bad example; it was something that people labelled as having the quality of AI. The author is therefore suggesting Deep Blue was not demonstrating AI when playing chess, and in that sense he is correct. And I would say intelligence is not the most life-like quality at all; existence, consciousness, and self-awareness are all more important qualities when defining life in our higher sense, i.e. that of humans. Intelligence is a valuable tool that we have; it does not define life. I think the author did a fairly good job here, as he is helping to correct the definition of AI. And that is, that AI is not the effects of a brute-force, pattern-matching algorithm; it is the mechanism used to solve a problem, so it is the journey, not the destination.