Mathematical rules are based on the defining limits we place on the particular numerical quantities dealt with.
When we say that 1 + 1 = 2 or 3 + 4 = 7, we are implying the use of integer quantities: the same types of numbers we all learned to count in elementary education.
What most people assume to be self-evident rules of arithmetic—valid at all times and for all purposes—actually depend on what we define a number to be.
For instance, when calculating quantities in AC circuits, we find that the “real” number quantities which served us so well in DC circuit analysis are inadequate for the task of representing AC quantities.
We know that voltages add when connected in series, but we also know that it is possible to connect a 3-volt AC source in series with a 4-volt AC source and end up with 5 volts of total voltage (3 + 4 = 5) when the two sources are 90 degrees out of phase.
Does this mean the inviolable and self-evident rules of arithmetic have been violated?
No, it just means that the rules of “real” numbers do not apply to the kinds of quantities encountered in AC circuits, where every variable has both a magnitude and a phase.
Consequently, we must use a different kind of numerical quantity, or object, for AC circuits (complex numbers, rather than real numbers), and along with this different system of numbers comes a different set of rules telling us how they relate to one another.
An expression such as “3 + 4 = 5” is nonsense within the scope and definition of real numbers, but it fits nicely within the scope and definition of complex numbers (think of a right triangle with opposite and adjacent sides of 3 and 4, with a hypotenuse of 5).
Because complex numbers are two-dimensional, they are able to “add” with one another trigonometrically in ways that one-dimensional real numbers cannot.
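The 3 + 4 = 5 result can be sketched with a short Python illustration of phasor addition, assuming a 3-volt source at 0 degrees in series with a 4-volt source at 90 degrees:

```python
# Phasor addition of two series AC sources, represented as complex numbers.
import cmath

v1 = cmath.rect(3, 0)              # 3 V at a phase angle of 0 degrees
v2 = cmath.rect(4, cmath.pi / 2)   # 4 V at a phase angle of 90 degrees
total = v1 + v2                    # series voltages add as complex numbers

print(abs(total))  # magnitude of the series total, approximately 5 volts
```

The magnitude of the sum is the hypotenuse of the 3-4-5 right triangle described above: the two one-dimensional magnitudes do not simply add, but their two-dimensional representations do.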
Logic is much like mathematics in this respect: the so-called “Laws” of logic depend on how we define what a proposition is.
The Greek philosopher Aristotle founded a system of logic based on only two types of propositions: true and false.
His bivalent (two-mode) definition of truth led to the four foundational laws of logic: the Law of Identity (A is A); the Law of Non-contradiction (A is not non-A); the Law of the Excluded Middle (either A or non-A); and the Law of Rational Inference.
These so-called Laws function within the scope of logic where a proposition is limited to one of two possible values, but may not apply in cases where propositions can hold values other than “true” or “false.”
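A short Python check (an illustrative sketch, not part of Boole's or Aristotle's formalism) confirms that the first three laws hold whenever a proposition is limited to exactly two values:

```python
# Verify the bivalent laws over the only two possible proposition values.
for A in (True, False):
    assert A == A              # Law of Identity: A is A
    assert not (A and not A)   # Law of Non-contradiction: A is not non-A
    assert A or not A          # Law of the Excluded Middle: either A or non-A

print("bivalent laws hold for both truth values")
```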
In fact, much work has been done and continues to be done on “multivalued,” or fuzzy logic, where propositions may be true or false to a limited degree.
In such a system of logic, “Laws” such as the Law of the Excluded Middle simply do not apply, because they are founded on the assumption of bivalence.
Likewise, many premises which would violate the Law of Non-contradiction in Aristotelian logic have validity in “fuzzy” logic. Again, the defining limits of propositional values determine the Laws describing their functions and relations.
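A minimal sketch shows how these laws break down once truth becomes a matter of degree. It assumes the common min/max/complement definitions of fuzzy AND, OR, and NOT (one formulation among several, chosen here for illustration):

```python
# Fuzzy truth values range from 0.0 (fully false) to 1.0 (fully true).
def f_not(a):
    return 1.0 - a

def f_or(a, b):
    return max(a, b)

def f_and(a, b):
    return min(a, b)

a = 0.7  # a proposition that is "mostly true"

# Excluded Middle fails: A or non-A is 0.7, not fully true (1.0)
print(f_or(a, f_not(a)))

# Non-contradiction fails: A and non-A is 0.3, not fully false (0.0)
print(f_and(a, f_not(a)))
```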
The English mathematician George Boole (1815-1864) sought to give symbolic form to Aristotle’s system of logic.
Boole wrote a treatise on the subject in 1854, titled An Investigation of the Laws of Thought, on Which Are Founded the Mathematical Theories of Logic and Probabilities, which codified several rules of relationship between mathematical quantities limited to one of two possible values: true or false, 1 or 0.
His mathematical system became known as Boolean algebra.
All arithmetic operations performed with Boolean quantities have but one of two possible outcomes: either 1 or 0.
There is no such thing as “2” or “-1” or “1/2” in the Boolean world. It is a world in which all other possibilities are invalid by fiat.
As one might guess, this is not the kind of math you want to use when balancing a checkbook or calculating current through a resistor.
However, Claude Shannon of MIT fame recognized how Boolean algebra could be applied to on-and-off circuits, where all signals are characterized as either “high” (1) or “low” (0).
His 1938 thesis, titled A Symbolic Analysis of Relay and Switching Circuits, put Boole’s theoretical work to use in a way Boole could never have imagined, giving us a powerful mathematical tool for designing and analyzing digital circuits.
In this chapter, you will find a lot of similarities between Boolean algebra and “normal” algebra, the kind of algebra involving so-called real numbers.
Just bear in mind that the system of numbers defining Boolean algebra is severely limited in terms of scope, and that there can only be one of two possible values for any Boolean variable: 1 or 0.
Consequently, the “Laws” of Boolean algebra often differ from the “Laws” of real-number algebra, making possible such statements as 1 + 1 = 1, which would normally be considered absurd.
Once you comprehend the premise of all quantities in Boolean algebra being limited to the two possibilities of 1 and 0, and the general philosophical principle of Laws depending on quantitative definitions, the “nonsense” of Boolean algebra disappears.
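The statement 1 + 1 = 1 can be sketched in a few lines of Python, treating Boolean “addition” as the logical OR operation (its standard interpretation in Boolean algebra):

```python
# Boolean addition is logical OR: the result is 1 if any input is 1.
def bool_add(a, b):
    return a | b

print(bool_add(0, 0))  # 0
print(bool_add(1, 0))  # 1
print(bool_add(1, 1))  # 1, not 2: there is no "2" in the Boolean world
```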
It should be clearly understood that Boolean numbers are not the same as binary numbers.
Whereas Boolean numbers represent an entirely different system of mathematics from real numbers, binary is nothing more than an alternative notation for real numbers.
The two are often confused because both Boolean math and binary notation use the same two ciphers: 1 and 0.
The difference is that Boolean quantities are restricted to a single bit (either 1 or 0), whereas binary numbers may be composed of many bits adding up in place-weighted form to a value of any finite size.
The binary number 10011₂ (“nineteen”) has no more place in the Boolean world than the decimal number 2₁₀ (“two”) or the octal number 32₈ (“twenty-six”).
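The distinction can be sketched in Python: binary notation is just place-weighted real-number notation, while a Boolean quantity is a single bit that never “carries”:

```python
# Binary: five place-weighted bits representing the real number nineteen.
print(int("10011", 2))  # 19

# Boolean: OR-ing two 1 bits yields 1, never a carry into a second digit.
print(1 | 1)            # 1
```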