Technical Article

What is it About Audio Distortion? Understanding Nonlinearity

April 10, 2022 by John Woodgate

Learn about how system nonlinearity creates distortion in audio signals that impacts the sounds we hear. We will examine sine waves, harmonics, and intermodulation distortion.

We spend a lot of time thinking and talking about distortion in audio, and even sometimes listening to it, but what is it really and why does it matter? 

There are generally two types of distortion: 

  • Frequency distortion—caused by insufficient bandwidth and non-flat frequency response between the bandwidth limits
  • Nonlinearity distortion—caused by nonlinearities in the hardware.

This article is about nonlinearity distortion because frequency distortion is rarely a problem with modern equipment. Nonlinearity distortion is often, incorrectly, called 'nonlinear distortion'. However, the distortion itself is not nonlinear; it is the hardware devices that are nonlinear.

 

Sine Waves—the Building Blocks of Audio Signals

A good place to start is by explaining that a sine wave signal (Figure 1) has just a single frequency. At audio frequencies, that sine wave is heard as a single tone.

 


Figure 1. Sine wave signal

 

Designers can build up all other waveforms from collections of many sine waves. It is possible to build up waveforms from other building-block signals besides sine waves, but those signals, the processes, and the mathematics can be far more complicated, so there is little point in going down that route for audio.
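To make this concrete, here is a minimal sketch (my own illustration in Python with NumPy, not part of the original article) that approximates a square wave by summing odd-harmonic sine waves with amplitudes of 1/n, the classic Fourier-series recipe. The sample rate and fundamental frequency are arbitrary choices:

    import numpy as np

    # Time axis: 10 ms at a 48 kHz sample rate (values chosen only for illustration).
    fs = 48_000
    t = np.arange(0, 0.01, 1 / fs)
    f0 = 500  # fundamental frequency in Hz (arbitrary)

    def square_approx(t, f0, n_terms=15):
        # Sum odd harmonics 1, 3, 5, ... with amplitude 1/n: the Fourier
        # series of a square wave. More terms give a squarer waveform.
        y = np.zeros_like(t)
        for n in range(1, 2 * n_terms, 2):
            y += np.sin(2 * np.pi * n * f0 * t) / n
        return 4 / np.pi * y

    y = square_approx(t, f0)

Plotting y against t (with matplotlib, for example) shows the waveform becoming squarer as more sine waves are added.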

Sine waves are generated by many forms of natural vibration or oscillation. Consider the balance wheel of a clock, having a horizontal axle. If we make a mark on the edge of the wheel and look at it from the side, the height of the mark above (and below) the axle traces out a sine wave in time.

 

Nonlinearity Creates Audio Distortion

Nonlinearity distortion is caused by some device in the audio signal path producing an output amplitude that is not strictly proportional to the input amplitude. This could be an audio amplifier or even a tarnished connector. This nonlinearity distorts the signal such that the output waveform is not the same as the input waveform. 

Now, whatever the input waveform is, it can be considered the sum of many sine waves, and so can the output waveform. Since the two waveforms differ, the output waveform must include sine wave components that are not present in the input waveform. These new components are the distortion.

For each sine wave component of the input signal (called the fundamental), the nonlinearity produces signals at multiples of that component's frequency, which are called harmonics. The double-frequency one is called the second harmonic or the harmonic of order 2. The three-times frequency one is the third harmonic. These new signals are harmonic distortion components.

Figure 2 shows the input and output signals from a very bad circuit that produces a 20% amplitude second harmonic and a 10% third harmonic. The addition of the second and third harmonic signals to the output distorts the signal so that it is no longer a pure sine wave. These high distortion figures are chosen to show the effects clearly. 

 


Figure 2. Example of nonlinearity distortion. The pure sinusoidal input signal is brown and the distorted output signal is black.

 

The second harmonic always causes the positive half-cycles to be a different shape from the negative half-cycles. In this example, the third harmonic affects the peaks of the signal. That is because I chose the phase angle of the third harmonic with respect to the fundamental's phase angle to do just that. A different phase angle would cause a different shape change.
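As a rough illustration of the kind of waveform shown in Figure 2, the following sketch (my own, not the author's code) adds a 20% second harmonic and a 10% third harmonic to a pure sine wave. The third harmonic is added in phase with the fundamental here, which tends to flatten the peaks; other phase choices change the shape differently:

    import numpy as np

    fs = 48_000                       # sample rate (assumed)
    f0 = 1_000                        # fundamental frequency (assumed)
    t = np.arange(0, 2 / f0, 1 / fs)  # two cycles of the fundamental

    fundamental = np.sin(2 * np.pi * f0 * t)
    second = 0.20 * np.sin(2 * np.pi * 2 * f0 * t)  # 20% second harmonic
    third  = 0.10 * np.sin(2 * np.pi * 3 * f0 * t)  # 10% third harmonic, in phase

    distorted = fundamental + second + third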

 

Harmonic Examples in Musical Instruments

The sounds produced by almost all musical instruments (and human voices) contain many harmonics. A rare exception is the ocarina, a small hand-held wind instrument that produces nearly pure sine waves, unlike the harmonica, which, as its name suggests, produces lots of harmonics in every note.

Adding a few more harmonics changes the tone or timbre of the note. Unless the additions are large, the difference is usually difficult to detect, except perhaps for listeners who can distinguish the sound of a Stradivarius from that of a Guarneri.

Figure 3 shows the 'harmonic spectrum' of a violin. A spectrum is usually a plot of power or voltage against frequency. 

 


Figure 3. The frequency spectrum of a violin displayed as harmonic multiples

 

The art of making a good violin is choosing the woods, their treatment, and their shaping to produce the most desired mix of harmonic amplitudes. 'Most desired' might be 'mellow' or 'exciting' depending upon the musical genre or simply personal preference. 

However, if the added harmonics are significant, a new effect becomes audible. The nonlinearity produces harmonics at exactly twice, three times, etc., the original frequency, whereas the 'harmonics' produced by most instruments should really be called 'partials' because they are not exact multiples of the lowest frequency present (the fundamental).

These partials are, for some instruments, as loud as, or even louder than, the fundamental. A flute, for example, produces a nearly equal fundamental and second harmonic. 

Each partial and the nearest harmonic create a new frequency component at the difference between the partial and harmonic frequencies. This component is always at a much lower frequency than either, a sort of growl that tends to make the sound rough rather than smooth.

Also produced is a component at a new frequency that is the sum of the partial and harmonic frequencies. This new frequency doesn't affect the combined sound as much, but it is a much higher frequency and may clash with other signal components that are in that higher frequency range.

Thankfully, the effect of new frequency components at the difference and sum frequencies is minimal unless the nonlinearity is very severe.

Unfortunately, that is not the whole story. The nonlinearity also causes signals at these sum and difference frequencies to appear for every combination of two components in the input signal. These new frequencies are called 'intermodulation distortion components.' 
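To see where these sum and difference frequencies come from, the sketch below (again my own illustration, with arbitrary frequencies) passes two sine waves through a weak square-law nonlinearity and inspects the output spectrum; the squared term is what generates components at f2 - f1 and f2 + f1:

    import numpy as np

    fs = 48_000                  # sample rate (assumed)
    t = np.arange(0, 1.0, 1 / fs)
    f1, f2 = 440.0, 890.0        # e.g. a fundamental and a slightly sharp partial

    x = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

    # A weak square-law nonlinearity: output is not proportional to input.
    y = x + 0.05 * x ** 2

    spectrum = np.abs(np.fft.rfft(y)) * 2 / len(y)   # approximate amplitudes
    freqs = np.fft.rfftfreq(len(y), 1 / fs)

    for f in (f2 - f1, f2 + f1):   # expected difference and sum products
        k = np.argmin(np.abs(freqs - f))
        print(f"{freqs[k]:7.1f} Hz  amplitude ~ {spectrum[k]:.4f}")

With these values the difference product appears at 450 Hz (the 'growl') and the sum product at 1330 Hz, each at about 5% of the input tone amplitudes.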

 

Intermodulation Distortion

These intermodulation components are much more serious, and the many new frequencies they introduce are usually very audible, even if the nonlinearity is fairly mild. Why, then, do we mainly talk about, and measure, harmonics rather than intermodulation components?

There are two reasons. First, it used to be easier to measure harmonics, but that isn't an issue with modern digital instruments. Second, we measure harmonics with the simplest possible input signal: a sine wave. 

To measure intermodulation, we have to put at least two signals in, and they can both be sine waves, but what frequencies shall we use, and should they have equal amplitudes (voltages) or different? 

Until the 1970s, there was a lot of confusion about this, and people made different choices, so their results were not comparable. An international agreement was then reached by most of the world audio community, represented on a technical committee of the International Electrotechnical Commission (IEC). The committee specified two sorts of intermodulation distortion: difference-frequency distortion (formerly called CCIF distortion) and modulation distortion (formerly called SMPTE distortion). 

Difference-frequency distortion (see Figure 4), as its name implies, measures the relative amplitude of the 1 kHz difference-frequency signal produced by two equal high-frequency signals, such as 19 kHz and 20 kHz. It is the more important of the two evaluations because it measures linearity at high frequencies, where the distortion-reducing effect of negative feedback tends to be smaller. In Figure 4, the input signals at 19 kHz and 20 kHz create two distortion signals:

  1. Difference signal at 1 kHz (20 - 19 = 1)
  2. Sum signal at 39 kHz (20 + 19 = 39) 

 


Figure 4. Difference-frequency distortion
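Below is a rough numerical sketch of this test (my own illustration, not part of the article or of any standard's reference code). Two equal tones at 19 kHz and 20 kHz are passed through an assumed weak square-law nonlinearity, and the 1 kHz difference product is read from an FFT; the sample rate is set high enough that the 39 kHz sum product is also representable:

    import numpy as np

    fs = 192_000                       # high sample rate so 39 kHz fits below Nyquist
    t = np.arange(0, 1.0, 1 / fs)
    f1, f2 = 19_000.0, 20_000.0        # equal-amplitude test tones

    x = 0.5 * np.sin(2 * np.pi * f1 * t) + 0.5 * np.sin(2 * np.pi * f2 * t)

    # The 'device under test': an assumed weak square-law nonlinearity.
    y = x + 0.02 * x ** 2

    spectrum = np.abs(np.fft.rfft(y)) * 2 / len(y)   # approximate amplitudes
    freqs = np.fft.rfftfreq(len(y), 1 / fs)

    def level(f):
        return spectrum[np.argmin(np.abs(freqs - f))]

    print("1 kHz difference product:", level(f2 - f1))
    print("39 kHz sum product      :", level(f2 + f1))
    # Express the difference product relative to one input tone (a simplification
    # of the standardized definition).
    print("difference-frequency distortion ~ {:.2%}".format(level(f2 - f1) / level(f1)))

With this illustrative model the 1 kHz product comes out at about 1% of an input tone; a real measurement follows the IEC definitions exactly rather than this simplified ratio.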

 

Modulation distortion uses a low-frequency signal and a much higher frequency signal at a lower voltage, usually a quarter of that of the low-frequency signal. For example, the signals might be 80 Hz and 5 kHz. As seen in Figure 5, the nonlinearity again creates two new output distortion signal components:

  1. Difference signal at 4920 Hz (5000 - 80 = 4920)
  2. Sum signal at 5080 Hz (5000 + 80 = 5080) 

 


Figure 5. Modulation distortion
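A similarly rough sketch for the modulation-distortion test, reusing the same assumed square-law model as above: an 80 Hz tone at four times the amplitude of a 5 kHz tone, with the distortion appearing as sidebands at 4920 Hz and 5080 Hz:

    import numpy as np

    fs = 48_000
    t = np.arange(0, 1.0, 1 / fs)

    # Low-frequency tone at four times the amplitude of the high-frequency tone.
    x = 0.8 * np.sin(2 * np.pi * 80 * t) + 0.2 * np.sin(2 * np.pi * 5_000 * t)

    y = x + 0.02 * x ** 2            # the same illustrative nonlinearity as before

    spectrum = np.abs(np.fft.rfft(y)) * 2 / len(y)
    freqs = np.fft.rfftfreq(len(y), 1 / fs)

    for f in (4_920, 5_080):         # sidebands either side of the 5 kHz tone
        k = np.argmin(np.abs(freqs - f))
        print(f"{freqs[k]:6.0f} Hz sideband ~ {spectrum[k]:.4f}")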

 

It's also possible to measure other intermodulation components produced from the harmonics of the input signal components. For example, if we have two input frequencies f1 and f2, there are intermodulation components at 2f1 ± f2 and 2f2 ± f1, as well as those we have already seen, f1 – f2 and f1 + f2. But these do not tell us much more about the performance of the device being measured.
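Even so, it helps to see where these products land. Here is a tiny hypothetical helper (not from the article) that simply lists the second-order and third-order intermodulation product frequencies for any two input tones:

    def imd_products(f1, f2):
        # Second- and third-order intermodulation product frequencies in Hz.
        return {
            "f2 - f1": abs(f2 - f1),
            "f1 + f2": f1 + f2,
            "2*f1 - f2": abs(2 * f1 - f2),
            "2*f1 + f2": 2 * f1 + f2,
            "2*f2 - f1": abs(2 * f2 - f1),
            "2*f2 + f1": 2 * f2 + f1,
        }

    print(imd_products(19_000, 20_000))
    # {'f2 - f1': 1000, 'f1 + f2': 39000, '2*f1 - f2': 18000,
    #  '2*f1 + f2': 58000, '2*f2 - f1': 21000, '2*f2 + f1': 59000}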

The conclusion to draw is that we should eliminate all sources of nonlinearity so as not to damage the reproduced sound. However, the devices we have to use in amplifiers, i.e., transistors (or valves/tubes in the past), are inherently nonlinear, so we have to use carefully chosen design techniques to reduce nonlinearity as much as possible. 

That often raises the question, "How much is enough?" Audiophiles often argue about this question with respect to the sensitivity of human hearing and how much distortion we can actually hear. But that is too big a topic for this article.

2 Comments
  • elye (April 14, 2022)

    Good to read this article:
    https://www.allaboutcircuits.com/technical-articles/40-year-love-affair-low-distortion-class-b-audio-amplifier-audiophiles/
    It really opened my eyes. The proof of the pudding is in the eating.

  • dermot (June 13, 2022)

    I have just read the above article and it resonated with me.  Have a look at my article on the Blomley amplifier (“My 40-year love affair with an extraordinary amplifier” by Dermot Herron) (see above comment by elye also).
    Peter Blomley designed a near-perfect class-B audio amplifier by separating the splitting of the signal into top and bottom halves and the separate amplifiers for those halves.  His amplifier has less than 0.5% harmonic distortion when no feedback is applied.  A normal class-B amplifier has more than 30% harmonic distortion without feedback.  A class-A amplifier has more than 8% harmonic distortion without feedback. 
    In other words, the non-linearity of either amplifier without feedback is very bad indeed.  A Blomley amplifier without feedback at all is a reasonably adequate amplifier and with feedback is spectacular.
    It shows in fact that the ONLY component in a “hi-fi” that matters is the amplifier because all the other modern components (turntable, needle and loudspeakers) are very linear.
    One thing I have found is that ALL linear ICs HAVE BAD INTERMODULATION DISTORTION, even the so-called hi-fi ICs.  So the preamp MUST be made exclusively with discrete transistors.  Most modern preamps have, for convenience, one or more ICs in the signal path which introduces intermodulation distortion to the signal.
