
Signal Processing Using Neural Networks: Validation in Neural Network Design

January 28, 2020 by Robert Keim

This article explains why validation is particularly important when we’re processing data using a neural network.

AAC's series on neural network development continues here with a look at validation in neural networks and how NNs function in signal processing.

  1. How to Perform Classification Using a Neural Network: What Is the Perceptron?
  2. How to Use a Simple Perceptron Neural Network Example to Classify Data
  3. How to Train a Basic Perceptron Neural Network
  4. Understanding Simple Neural Network Training
  5. An Introduction to Training Theory for Neural Networks
  6. Understanding Learning Rate in Neural Networks
  7. Advanced Machine Learning with the Multilayer Perceptron
  8. The Sigmoid Activation Function: Activation in Multilayer Perceptron Neural Networks
  9. How to Train a Multilayer Perceptron Neural Network
  10. Understanding Training Formulas and Backpropagation for Multilayer Perceptrons
  11. Neural Network Architecture for a Python Implementation
  12. How to Create a Multilayer Perceptron Neural Network in Python
  13. Signal Processing Using Neural Networks: Validation in Neural Network Design
  14. Training Datasets for Neural Networks: How to Train and Validate a Python Neural Network

 

The Nature of Neural-Network Signal Processing

A neural network is fundamentally different from other signal-processing systems. The “normal” way to achieve some sort of signal-processing objective is to apply an algorithm.

In this model, a researcher creates a mathematical method for analyzing or modifying a signal in some way. There are methods for removing noise from audio, finding edges in images, calculating temperature from the resistance of a thermistor, determining the frequency content of an RF waveform, and so forth. The designer then builds upon the work of the researcher by converting that method into an algorithm that can be carried out by a processor and adapted to the needs of a given application.

 

A FIR filter is an example of a signal-processing system that we can assess and understand in a precise mathematical way. 
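As a quick illustration, here is a minimal NumPy sketch of a moving-average FIR filter. The tap values and test signal are arbitrary choices for demonstration, but the essential point holds regardless: every aspect of the system’s behavior, including its exact frequency response, follows directly from the coefficients, with no training involved.

```python
import numpy as np

# A 5-tap moving-average FIR filter: y[n] is the mean of the last five inputs.
# The coefficients alone fully determine the filter's behavior.
taps = np.ones(5) / 5.0

# A noisy sinusoid as a test signal (arbitrary example values).
t = np.arange(200)
x = np.sin(2 * np.pi * 0.02 * t) + 0.3 * np.random.default_rng(0).normal(size=t.size)

# The filter is simply a convolution of the input with the tap weights.
y = np.convolve(x, taps, mode="same")

# The frequency response follows exactly from the coefficients via the DFT;
# no experimentation is needed to characterize the system.
H = np.fft.rfft(taps, n=512)
print("DC gain:", abs(H[0]))  # unity gain at DC for a moving-average filter
```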
 

A trained neural network, on the other hand, is an empirical system.

The mathematical processes that occur in the network do not constitute a specific algorithm that is intended to classify handwritten characters, or predict the formation of tornadoes, or develop control procedures for extreme aeronautical maneuvers. Rather, the math in the neural network is a framework that enables the network to create a customized computational model based on training data.

We understand the mathematical framework that allows a neural network to learn and achieve its required functionality, but the actual signal-processing algorithm is specific to the training data, the learning rate, the initial weight values, and other factors.
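To make this concrete, here is a minimal sketch of a tiny one-hidden-layer network trained on synthetic data (all names and values are hypothetical illustrations, not the implementation from earlier articles in this series). The data and the training procedure are identical in both runs; only the random initial weights differ, yet the two runs produce two different weight sets, and therefore two different computational models.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, y, rng, hidden=4, lr=0.5, epochs=500):
    """Train a tiny one-hidden-layer network with plain gradient descent."""
    W1 = rng.normal(scale=0.5, size=(X.shape[1], hidden))  # random initial weights
    W2 = rng.normal(scale=0.5, size=(hidden, 1))
    for _ in range(epochs):
        h = sigmoid(X @ W1)             # hidden-layer activations
        p = sigmoid(h @ W2)             # output prediction
        d2 = (p - y) * p * (1 - p)      # output-layer error term
        d1 = (d2 @ W2.T) * h * (1 - h)  # backpropagated hidden-layer error
        W2 -= lr * h.T @ d2 / len(y)
        W1 -= lr * X.T @ d1 / len(y)
    return W1, W2

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 2))                               # synthetic features
y = ((X[:, 0] * X[:, 1]) > 0).astype(float).reshape(-1, 1)  # XOR-like labels

# Identical data and procedure; only the initial weights differ.
W1a, W2a = train(X, y, np.random.default_rng(1))
W1b, W2b = train(X, y, np.random.default_rng(2))
print(np.allclose(W1a, W1b))  # False: two distinct computational models
```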

 

 

A neural network, unlike a FIR filter, depends on many different factors.

 

It’s like the difference between learning a language as a child and studying a language as an adult.

A child who has never even heard the word “grammar” can repeatedly produce the correct verb form because his or her brain has naturally recognized and retained patterns contained in the enormous quantity of linguistic input data that children receive from older people with whom they interact.

Adults, however, usually don’t have access to all this input and may not assimilate patterns in the same way; consequently, we memorize and implement the linguistic “algorithms” that enable us to correctly conjugate verbs and choose tenses.

 

The Importance of Validation

Neural networks can solve extremely complex problems because, when given abundant input data, they “naturally” find mathematical patterns, much as children find linguistic patterns. But this approach to signal processing is by no means infallible.

Consider English-speaking children who say “goed” instead of “went” or “holded” instead of “held.” These are called overregularization errors. The children have picked up on the -ed pattern for the past tense but, for some reason (perhaps insufficient data, or cognitive idiosyncrasies), haven’t yet refined their linguistic model to account for verbs that are irregular in the past tense.

No one, of course, is going to castigate a four-year-old for saying “I goed to the park.” But if a prominent politician were delivering an important speech and repeatedly said “goed,” “holded,” “finded,” “knowed,” and so forth, the audience would be seriously displeased (or utterly perplexed) and the speaker’s political career might come to an abrupt end.

These overregularization errors are a good example of how a trained neural network might have unexpected gaps in its ability to achieve the desired signal-processing functionality. And though little gaps may seem unimportant, or even interesting, when we’re merely performing experiments, the example of the politician reminds us that they could be catastrophic in a real application.

 

Both undertraining and overtraining can result in unexpected and problematic behavior when the network confronts real application data. See Part 4 for more information.
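One common way to catch overtraining, sketched below with hypothetical loss values, is to record the network’s error on a held-out data set after each training epoch and watch for the point where that error starts climbing even though the training error keeps falling.

```python
def overtraining_check(val_losses, patience=5):
    """Find the epoch with the lowest validation loss and report whether
    the loss then kept rising for more than `patience` epochs afterward,
    which is the classic signature of overtraining."""
    best = min(range(len(val_losses)), key=val_losses.__getitem__)
    return best, (len(val_losses) - 1 - best) > patience

# Hypothetical loss curves: training loss falls steadily, but validation
# loss bottoms out at epoch 3 and then climbs as the network overtrains.
train_losses = [0.90, 0.60, 0.40, 0.30, 0.25, 0.22, 0.20, 0.19, 0.18, 0.17]
val_losses   = [0.90, 0.70, 0.50, 0.45, 0.47, 0.50, 0.55, 0.61, 0.68, 0.75]
print(overtraining_check(val_losses))  # (3, True): stop training near epoch 3
```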
 

And now we see why validation is a crucial aspect of neural-network development. Training isn’t enough, because a training data set is inherently limited and therefore the network’s response to this data set is also limited.

Furthermore, training results in a “black box” computational system that we cannot analyze and assess as though it were a typical formula or algorithm. Thus, we need to validate, which I would define as doing everything we reasonably can to ensure that the network will successfully process typical real-life input data and will not produce spectacular failures when presented with atypical data.
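Since we can only probe the black box from the outside, a basic validation routine might look something like the following sketch. The validate() and net() functions are hypothetical stand-ins: net() plays the role of a trained network’s forward pass, and the “atypical” inputs are deliberately far outside the range of the held-out data.

```python
import numpy as np

def validate(forward, X_typical, y_typical, X_atypical):
    """Black-box validation: measure classification accuracy on held-out
    typical data, and confirm that atypical inputs don't produce
    spectacular failures (non-finite or out-of-range outputs)."""
    preds = forward(X_typical).ravel()
    accuracy = float(np.mean((preds > 0.5) == (y_typical.ravel() > 0.5)))
    out = forward(X_atypical)
    no_failures = bool(np.all(np.isfinite(out)) and np.all((out >= 0) & (out <= 1)))
    return accuracy, no_failures

def net(X):
    """Stand-in for a trained network's forward pass (hypothetical weights)."""
    return 1.0 / (1.0 + np.exp(-X @ np.array([1.0, -1.0])))

X_val = np.array([[1.0, 0.0], [0.0, 1.0], [2.0, -1.0]])  # typical held-out data
y_val = np.array([1.0, 0.0, 1.0])
X_extreme = np.array([[30.0, -30.0], [-30.0, 30.0]])     # atypical inputs
print(validate(net, X_val, y_val, X_extreme))            # (1.0, True)
```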

 

Sorting Through the Terminology

The procedure that I identify as “validation” might also be called “verification” or simply “testing.”

In the context of software development, the first two terms have distinct meanings. Wikipedia, citing Barry Boehm, says that verification seeks to determine if the product is being built correctly, and validation seeks to determine if the correct product is being built. Since both of these issues are essential, you will see the abbreviation “V&V” for “verification and validation.”

I’m not a software engineer, so hopefully that means I’m not obligated to adopt this paradigm. I am simply using the term “validation” to refer to the testing, analysis, and observation that we perform in an attempt to ensure that the trained neural network meets system requirements.

 

Closing Thoughts: What Exactly Is Validation?

Well, that depends.

NASA, for example, published a fairly long document entitled “Verification & Validation of Neural Networks for Aerospace Systems.” If you’re more interested than I am in neural-network V&V, you might want to start with this document. If you’re a true V&V zealot, you should consider the book Methods and Procedures for the Verification and Validation of Artificial Neural Networks; it’s 293 pages long and surely exceeds my knowledge of this topic by at least three orders of magnitude.

In my world of simple neural networks developed for experimental or instructional purposes, validation primarily means running the trained network on new data and assessing its classification accuracy; we could also include fine-tuning that helps us determine whether and how overall performance can be improved.
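In code, that kind of validation can be as simple as the following sketch: hold out part of the data, train on the rest, measure classification accuracy on the held-out portion, and compare a couple of hyperparameter choices. The single-neuron “network” and all of the values here are arbitrary stand-ins for demonstration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, y, lr, epochs=300, seed=0):
    """Gradient-descent training of a single logistic neuron (a minimal
    stand-in for the network developed earlier in this series)."""
    w = np.random.default_rng(seed).normal(size=X.shape[1])
    for _ in range(epochs):
        w -= lr * X.T @ (sigmoid(X @ w) - y) / len(y)
    return w

rng = np.random.default_rng(7)
X = rng.normal(size=(300, 3))                           # synthetic features
y = (X @ np.array([2.0, -1.0, 0.5]) > 0).astype(float)  # synthetic labels

# Hold out one third of the data; the network never trains on it.
X_train, y_train = X[:200], y[:200]
X_val, y_val = X[200:], y[200:]

# Validation: classify the held-out data and measure accuracy, then
# "fine-tune" by comparing two learning rates.
for lr in (0.05, 0.5):
    w = train(X_train, y_train, lr)
    acc = np.mean((sigmoid(X_val @ w) > 0.5) == (y_val > 0.5))
    print(f"learning rate {lr}: validation accuracy {acc:.1%}")
```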

We’ll look at specific validation techniques in future articles.