
Neural Networks: The Missing Link of AI?

February 04, 2017 by Robin Mitchell

Advancements have been made towards artificial intelligence, with neural networks leading the way as the most promising method.


For generations, computers that can think for themselves have been the stuff of science fiction. Neural networks could give computers the ability to identify objects and solve problems, yet scientists often can't determine why a network reaches the conclusions it does. How can we make computers explain themselves?

Neural Networks in AI

In reality, a computer built like the human brain will not exist for a very long time, for three reasons:

  • Silicon's inability to rewire itself
  • The sheer number of connections in the brain far exceeds the number of transistors in modern devices (100 trillion neuron connections versus two billion transistors in top-end Intel processors)
  • The difficulty of recreating the brain in software

A computer built like a brain would revolutionize the world. Image courtesy of Computer World

While computers are currently not self-aware, there are programs and algorithms that enable computers to learn and behave in a manner similar to the human brain. Neural networks are used to teach computers to identify objects, recognize patterns, and even solve problems by feeding in stimuli, recording the output, and then feeding back whether the result was correct or not. Unlike typical programs, neural networks are not programmed with solutions and therefore have to go through many iterations of learning before they can reliably do their task.
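
To make that learning loop concrete, here is a minimal sketch in Python (an illustration written for this idea, not code from any system the article describes): a single artificial unit is never given the rule for a simple OR task, yet it converges on that rule after repeated iterations of guessing and being corrected.

    # Minimal sketch of feedback-driven learning: a single unit learning the OR rule.
    # All numbers here are illustrative; real classifiers involve far more units.

    training_data = [          # (stimulus, correct answer)
        ((0, 0), 0),
        ((0, 1), 1),
        ((1, 0), 1),
        ((1, 1), 1),
    ]

    weights = [0.0, 0.0]       # connection strengths, initially untrained
    bias = 0.0                 # acts as an adjustable threshold
    learning_rate = 0.1

    def predict(x):
        # Fire (output 1) only if the weighted sum of inputs crosses the threshold.
        total = weights[0] * x[0] + weights[1] * x[1] + bias
        return 1 if total > 0 else 0

    # Repeated iterations: feed in a stimulus, record the output, feed back the error.
    for epoch in range(20):
        for x, target in training_data:
            error = target - predict(x)            # feedback: was the guess correct?
            weights[0] += learning_rate * error * x[0]
            weights[1] += learning_rate * error * x[1]
            bias += learning_rate * error

    print(weights, bias)                           # learned, never programmed in
    print([predict(x) for x, _ in training_data])  # expected output: [0, 1, 1, 1]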

Neural networks are similar to the human brain in that they consist of simple units (analogous to neurons) connected to other units via connection paths (analogous to neuron connections). Depending on feedback and stimuli, connections become weighted and units have their threshold levels adjusted, much as connections between neurons in the brain grow stronger with repeated learning.
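
The sketch below illustrates that description of a single unit, with weights, threshold, and stimulus values made up purely for illustration: a weighted sum of inputs is compared against a threshold, and repeatedly strengthening the active connections eventually changes whether the unit fires.

    # A single artificial "unit": a weighted sum of inputs compared against a
    # threshold, loosely analogous to a neuron and its connections. The weights,
    # threshold, and stimulus below are made up purely for illustration.

    class Unit:
        def __init__(self, weights, threshold):
            self.weights = list(weights)   # strength of each incoming connection
            self.threshold = threshold     # how much input it takes to fire

        def fires(self, inputs):
            activation = sum(w * x for w, x in zip(self.weights, inputs))
            return activation >= self.threshold

    unit = Unit(weights=[0.2, 0.2, 0.2], threshold=0.5)
    stimulus = [1, 1, 0]

    print(unit.fires(stimulus))   # False: weighted input (0.4) is below the threshold

    # "Repeated learning" strengthens the connections carrying the stimulus,
    # so the same input eventually pushes the unit over its threshold.
    for _ in range(3):
        unit.weights = [w + 0.1 * x for w, x in zip(unit.weights, stimulus)]

    print(unit.weights)           # roughly [0.5, 0.5, 0.2]
    print(unit.fires(stimulus))   # True: the strengthened connections now make it fire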



Neural Networks Explained

When neural networks are created and implemented to solve problems, they typically do very well at classifying data. However, one problem with such systems is that they cannot explain their answers.

If a person is shown a variety of images, some with people's faces and others without, they can reliably pick out the images with faces (just as a neural network can). When asked why they selected specific images, a typical response would be “that image has a face because there is a person there who has a round face, eyes, ears, and hair”. A neural network, however, would see the face but be unable to explain why there is a face in the photo.

Neural networks that are used to identify objects and images can, however, be taken apart to see which neurons are firing, which helps explain the result. For example, a white square with two eyes painted on it may result in the network saying there is no face. If the system were taken apart, the neurons dealing with eye recognition would be active while the other feature-related neurons would be inactive, showing that the network detected too few features and concluded that there was no face. However, some neural networks are almost the equivalent of black boxes, such as those designed to read text and handwriting.
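
As a rough illustration of "taking the network apart", the toy model below is invented for this example (its hidden units and weights are hand-picked, unlike those of a real trained network): probing the hidden layer shows the eye detector firing for a white square with painted eyes while the other feature detectors stay quiet, so the output never reaches the face threshold.

    # Toy "face detector" whose hidden units stand in for eye, mouth, and hair
    # detectors. The architecture and weights are hand-picked for illustration;
    # a real trained network learns far messier internal features.

    def step(value, threshold):
        return 1 if value >= threshold else 0

    # Each hidden unit watches one kind of evidence in the (already preprocessed) input.
    HIDDEN_WEIGHTS = {
        "eyes":  [1.0, 0.0, 0.0],
        "mouth": [0.0, 1.0, 0.0],
        "hair":  [0.0, 0.0, 1.0],
    }
    FACE_THRESHOLD = 2   # declare "face" only if at least two feature detectors fire

    def probe(inputs, label):
        # Take the network apart: report every hidden activation, not just the verdict.
        hidden = {name: step(sum(w * x for w, x in zip(weights, inputs)), 0.5)
                  for name, weights in HIDDEN_WEIGHTS.items()}
        face = step(sum(hidden.values()), FACE_THRESHOLD)
        print(f"{label}: hidden activations {hidden} -> face={bool(face)}")

    # Input vector: [eye-like evidence, mouth-like evidence, hair-like evidence]
    probe([1.0, 1.0, 1.0], "photo of a person")
    probe([1.0, 0.0, 0.0], "white square with two painted eyes")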

Neural networks typically hide their inner workings. Image courtesy of Colin M.L. Burnett [CC BY-SA 3.0]

Neural networks and artificial intelligence as a whole are producing promising results in medical diagnosis. Doctors, however, require evidence and an explanation of why a diagnosis has been made. This is why researchers from MIT will present new methods for getting a neural network to provide not only predictions and classifications but also explanations of how it came to its conclusions. And it is not just the medical field that is uneasy about taking advice from a computer; any decision where the cost of an incorrect prediction is high would also require some form of explanation.

Conclusion

Self-driving cars may have a lower crash rate than human drivers in the future, but will people truly trust them over an actual human? And will a patient trust a diagnosis from a computer without being given any evidence? Artificial intelligence has a long way to go, but neural networks are showing great promise in making AI a reality. However, unless such systems can explain their answers and reasoning, the general public may be slow to welcome computer-controlled systems with open arms.
