Technical Article

Understanding Contrast, Histograms, and Standard Deviation in Digital Imagery

September 28, 2020 by Robert Keim

Statistical techniques can help us to analyze the distribution of light and dark tones in an image.

Lately, I’ve written articles about image sensors (starting with this first article in the image sensor technology series) and statistics (I started with an introduction to statistical analysis in electrical engineering). It occurred to me that I would like to write an article about an interesting topic that draws upon both digital imaging and statistical analysis—namely, visual contrast.

 

What Is Contrast?

Brightness and contrast are the two most fundamental characteristics of a grayscale or color photograph. Brightness is straightforward: it describes the overall lightness or darkness of an image. Contrast, on the other hand, is somewhat more complex.

Contrast might be easier to demonstrate than to explain, so let’s begin this discussion with some images:

 

A low-contrast (left), medium-contrast (center), and high-contrast (right) version of the same photograph.

 

The image in the center is direct from the scanned negative, and the only modification I made to create the other images was increasing or decreasing contrast.

I think that this trio of photographs helps to convey the subtlety and nuance of contrast in the context of everyday imagery. I could create high- and low-contrast versions taken to the extreme, i.e., one with contrast increased to the point where almost all of the pixels are completely black or completely white, and the other with contrast so low that everything is a washed-out gray. However, contrast doesn't work that way when we're trying to produce a realistic, visually pleasing image.

A standard way to define contrast is the degree of difference between the lighter and darker tones in an image. “Tone” in this context is essentially synonymous with “brightness,” but the connotation is slightly different: brightness might be interpreted as numerical intensity (for digital data) or measured density (for film), whereas using the word “tone” also evokes the way in which human beings perceive or respond to brightness (and darkness) in an image.

I’m not quite satisfied with the “degree of difference” definition because contrast is more than just a single number that specifies the extent to which gray levels in an image are spread out or squeezed together.

Contrast can be manipulated in diverse ways. Thus, I would suggest that contrast is more fully, though perhaps more abstrusely, defined as the distribution of gray levels in an image insofar as this distribution influences the differences between lighter and darker tones.
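To make "manipulating contrast" concrete, here is a minimal sketch in Python with NumPy (my own illustration, not the exact adjustment applied to the photographs above) of one of the simplest manipulations: linearly scaling 8-bit pixel values about mid-gray, so that tones spread apart or squeeze together.

```python
import numpy as np

def adjust_contrast(pixels: np.ndarray, gain: float) -> np.ndarray:
    """Scale 8-bit pixel values about mid-gray (128).

    gain > 1 spreads tones apart (more contrast);
    0 < gain < 1 squeezes them together (less contrast).
    """
    stretched = (pixels.astype(np.float64) - 128.0) * gain + 128.0
    return np.clip(stretched, 0, 255).astype(np.uint8)

# A tiny 8-bit "image" with dark, mid, and light tones.
img = np.array([[40, 100, 128], [160, 200, 230]], dtype=np.uint8)
print(adjust_contrast(img, 1.5))  # values move away from 128
print(adjust_contrast(img, 0.5))  # values move toward 128
```

This linear scaling is only one of many possible adjustments; real contrast tools often use nonlinear curves, which is part of why contrast is more than a single number.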

 

Contrast and the Image Histogram

Image processing makes extensive use of statistical plots called histograms. A histogram lets us see at a glance 1) which values occur in an image (or in any other data set) and 2) how frequently each of those values occurs.

Histograms organize and present pixel values in a way that can be extremely informative, and they often function as an essential complement to a holistic evaluation of the image itself. A standard image histogram places pixel values along the horizontal axis, increasing from left (darkest) to right (lightest), and uses the vertical axis to indicate how many pixels have each value.
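As a concrete example, here is a minimal sketch of how we might compute and plot such a histogram for an 8-bit grayscale image using NumPy and matplotlib (the file name kitten.png is a placeholder, not an asset from this article):

```python
import numpy as np
import matplotlib.pyplot as plt

# Load a grayscale image. ("kitten.png" is a placeholder file name.)
pixels = plt.imread("kitten.png")
if pixels.dtype != np.uint8:          # imread may return floats in [0, 1]
    pixels = (pixels * 255).astype(np.uint8)

# One bin per possible 8-bit value: 0 (black) through 255 (white).
counts, bin_edges = np.histogram(pixels, bins=256, range=(0, 256))

plt.bar(bin_edges[:-1], counts, width=1.0)
plt.xlabel("Pixel value (0 = black, 255 = white)")
plt.ylabel("Number of pixels")
plt.title("Image histogram")
plt.show()
```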

A histogram gives us clear information about the contrast of an image, and we can also use changes in a histogram to better understand the effects of modifying contrast in some way. Below I have reproduced the three kitten photos shown above, with each one accompanied by its histogram.

The low-contrast (left), medium-contrast (center), and high-contrast (right) photographs, each accompanied by its histogram.

As contrast increases, pixel values shift farther toward the left (dark) or right (light) side of the histogram. In an image like this one, which emphasizes dark and light tones with relatively few midtone pixels, increasing the contrast makes the distribution more bimodal: the two peaks move farther apart, and the "valley" between them becomes more pronounced.

The effect is somewhat different in an image that emphasizes midtones. For example:

Low-, medium-, and high-contrast versions of a photograph dominated by midtones, each accompanied by its histogram.

Here, the medium-contrast image has one dominant peak near the middle of the pixel-value range. If we decrease contrast, even more pixels are concentrated into this primary peak. If we increase contrast, we create a more even distribution of pixels across the range and secondary peaks become more prominent.

 

Contrast and Standard Deviation

Standard deviation is one of the most important descriptive statistical measures, and it’s explained in detail in my article on average deviation, standard deviation, and variance. Here’s the bottom line: standard deviation conveys the tendency of the values in a data set to deviate from the average value.
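For reference, if a data set contains N values x_i with mean μ, the standard deviation is

$$\sigma = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(x_i - \mu\right)^2}$$

i.e., the square root of the mean of the squared deviations from the mean.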

The histograms shown above report standard deviation (as well as mean and median). It’s interesting, though not surprising, to note that the standard deviation increases as contrast increases: when we add more contrast to an image, we spread out the histogram such that the overall tendency of the data set is to have greater distance between the individual pixel values and their mean.

In fact, reporting the standard deviation of the pixel values in an image is one way to quantify contrast. This is called RMS (root-mean-square) contrast because calculating standard deviation is a root-mean-square procedure.
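As a brief sketch (my own illustration; definitions vary, and RMS contrast is commonly computed on intensities normalized to the range 0 to 1), this calculation is essentially a one-liner with NumPy:

```python
import numpy as np

def rms_contrast(pixels: np.ndarray) -> float:
    """RMS contrast: the standard deviation of normalized pixel intensities.

    Pixel values are mapped from [0, 255] to [0, 1] before taking
    the standard deviation, a common convention for RMS contrast.
    """
    normalized = pixels.astype(np.float64) / 255.0
    return float(np.std(normalized))

# The higher-contrast "image" yields the larger RMS contrast.
low  = np.array([[110, 120], [130, 140]], dtype=np.uint8)
high = np.array([[20, 80], [170, 240]], dtype=np.uint8)
print(rms_contrast(low))   # small spread about the mean -> low value
print(rms_contrast(high))  # large spread about the mean -> high value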

 

Conclusion

I hope that you’ve enjoyed this conceptual and statistical exploration of visual contrast. In a future article, we’ll continue this topic by discussing transformation functions and their effect on the contrast of a digital image.