Logical Equivalences

Chapter 7 · 18 mins

Learning outcomes:

  1. Tautologies, contradictions and contingencies
  2. What equivalences are
  3. Common logical equivalences

Introduction

In the previous chapter, we saw three variations we could make to an implication: the converse, the inverse and the contrapositive, and we asked which of them was similar to the original implication.

We saw that only the contrapositive was similar to the original implication, in that they both had the same truth value in each valuation.

Two propositions having the same truth value in all valuations represents a useful idea in mathematical logic. Let's explore it.

Tautologies and contradictions

In the chapter on valuations and interpretations, we considered a few compound propositions that gave some interesting outcomes. One of them was true in all valuations, and one was false in all valuations.

Such propositions hold particular importance in the study of mathematical reasoning.

A proposition that is true in all interpretations is called a tautology.

Along the same lines:

A proposition that is false in all interpretations is called a contradiction.

A proposition that is neither a contradiction, nor a tautology, is a contingency.

Most of the propositions we've seen so far have been contingent. Let's consider a few propositions and see which category they fall into.

  1. ::p \vee \bold T::

    ::p:: | ::\bold T:: | ::p \vee \bold T::
    ::\bold T:: | ::\bold T:: | ::\bold T::
    ::\bold F:: | ::\bold T:: | ::\bold T::

    All rows in the last column contain a ::\bold T:: value, hence the given proposition is a tautology.

  2. ::p \wedge \bold F::

    ::p:: | ::\bold F:: | ::p \wedge \bold F::
    ::\bold T:: | ::\bold F:: | ::\bold F::
    ::\bold F:: | ::\bold F:: | ::\bold F::

    The last column contains all ::\bold F:: values, hence the given proposition is a contradiction.

  3. ::p \vee \neg p::

    ::p:: | ::\neg p:: | ::p \vee \neg p::
    ::\bold T:: | ::\bold F:: | ::\bold T::
    ::\bold F:: | ::\bold T:: | ::\bold T::

    The last column contains all ::\bold T:: values, hence the given proposition is a tautology.

  4. ::p \wedge \neg p::

    ::p:: | ::\neg p:: | ::p \wedge \neg p::
    ::\bold T:: | ::\bold F:: | ::\bold F::
    ::\bold F:: | ::\bold T:: | ::\bold F::

    The last column contains all ::\bold F:: values, hence the given proposition is a contradiction.

  5. ::p \vee q::

    ::p:: | ::q:: | ::p \vee q::
    ::\bold T:: | ::\bold T:: | ::\bold T::
    ::\bold T:: | ::\bold F:: | ::\bold T::
    ::\bold F:: | ::\bold T:: | ::\bold T::
    ::\bold F:: | ::\bold F:: | ::\bold F::

    The last column contains both ::\bold T:: and ::\bold F:: values, hence the given proposition is a contingency.

  6. ::p \wedge q \to \neg q::

    Based on the general rules of operator precedence (that is, which operator is evaluated before the others), this proposition is the same as ::(p \wedge q) \to \neg q::.

    ::p:: | ::q:: | ::p \wedge q:: | ::\neg q:: | ::p \wedge q \to \neg q::
    ::\bold T:: | ::\bold T:: | ::\bold T:: | ::\bold F:: | ::\bold F::
    ::\bold T:: | ::\bold F:: | ::\bold F:: | ::\bold T:: | ::\bold T::
    ::\bold F:: | ::\bold T:: | ::\bold F:: | ::\bold F:: | ::\bold T::
    ::\bold F:: | ::\bold F:: | ::\bold F:: | ::\bold T:: | ::\bold T::

    The last column contains both ::\bold T:: and ::\bold F:: values, hence the given proposition is a contingency.

  7. ::((p \to q) \wedge (q \to r)) \to (p \to r)::

    ::p:: | ::q:: | ::r:: | ::p \to q:: | ::q \to r:: | ::(p \to q) \wedge (q \to r):: | ::p \to r:: | ::((p \to q) \wedge (q \to r)) \to (p \to r)::
    ::\bold T:: | ::\bold T:: | ::\bold T:: | ::\bold T:: | ::\bold T:: | ::\bold T:: | ::\bold T:: | ::\bold T::
    ::\bold T:: | ::\bold T:: | ::\bold F:: | ::\bold T:: | ::\bold F:: | ::\bold F:: | ::\bold F:: | ::\bold T::
    ::\bold T:: | ::\bold F:: | ::\bold T:: | ::\bold F:: | ::\bold T:: | ::\bold F:: | ::\bold T:: | ::\bold T::
    ::\bold T:: | ::\bold F:: | ::\bold F:: | ::\bold F:: | ::\bold T:: | ::\bold F:: | ::\bold F:: | ::\bold T::
    ::\bold F:: | ::\bold T:: | ::\bold T:: | ::\bold T:: | ::\bold T:: | ::\bold T:: | ::\bold T:: | ::\bold T::
    ::\bold F:: | ::\bold T:: | ::\bold F:: | ::\bold T:: | ::\bold F:: | ::\bold F:: | ::\bold T:: | ::\bold T::
    ::\bold F:: | ::\bold F:: | ::\bold T:: | ::\bold T:: | ::\bold T:: | ::\bold T:: | ::\bold T:: | ::\bold T::
    ::\bold F:: | ::\bold F:: | ::\bold F:: | ::\bold T:: | ::\bold T:: | ::\bold T:: | ::\bold T:: | ::\bold T::

    The last column contains only ::\bold T:: values, hence the given proposition is a tautology.
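If you want to check such classifications mechanically, here is a minimal Python sketch. It is our own illustration, not part of the chapter's material: the helper names `classify` and `implies` are hypothetical, and `implies` encodes the conditional using an equivalence we'll meet at the end of this chapter.

```python
from itertools import product

def classify(proposition, num_vars):
    """Label a boolean function as a tautology, contradiction or contingency."""
    # Evaluate the proposition under every valuation of its variables.
    results = [proposition(*valuation)
               for valuation in product([True, False], repeat=num_vars)]
    if all(results):
        return "tautology"
    if not any(results):
        return "contradiction"
    return "contingency"

# The conditional a -> b, encoded as (not a) or b (see the end of this chapter).
implies = lambda a, b: (not a) or b

print(classify(lambda p: p or (not p), 1))  # tautology   (example 3)
print(classify(lambda p, q: p or q, 2))     # contingency (example 5)
print(classify(lambda p, q, r: implies(implies(p, q) and implies(q, r),
                                       implies(p, r)), 3))  # tautology (example 7)
```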

Now that we know what a tautology, a contradiction and a contingency are, we can move on to explore equivalences.

Equivalences

Since it is very common in logic for multiple propositions to have the same truth values under every valuation, we have a specific notation for this relationship.

When two propositions have the same truth value under every valuation, we say that they are equivalent to one another.

If two propositions ::p:: and ::q:: are equivalent to one another, we write ::\pmb{p \equiv q}::.

::p \equiv q:: is read as '::p:: is equivalent to ::q::'.

Note that ::p \equiv q:: holds exactly when ::p \leftrightarrow q:: is a tautology. That is, under every valuation, if ::p:: is true, then ::q:: is true, and if ::p:: is false, then ::q:: is false as well.

Similarly, if ::p:: is not equivalent to ::q::, we write ::p \not\equiv q::.

The first proposition that we saw above was a tautology, and by definition, the proposition ::\bold T:: is also a tautology (it's always true). Hence, the two are equivalent to one another. This can be shown as follows:

::p \vee \bold T \equiv \bold T::

Recall that an implication has the same truth value as its contrapositive under every valuation. This means that:

::p \to q \equiv \neg q \to \neg p::

We even verified this back in the chapter on implications.

This and many other equivalences can be confirmed by drawing out truth tables and matching the columns for the propositions on either side of the equivalence symbol. It's only when the propositions have a lot of variables that working with truth tables becomes inefficient, since a table for ::n:: variables needs ::2^n:: rows.

But for this course and most of your propositional logic tasks, the equivalences we deal with won't have enough variables to make computation via truth tables infeasible.
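In fact, the truth-table check itself is mechanical enough to hand to a computer. Below is a small Python sketch (our own helper, not a library function) that declares two propositions equivalent exactly when they agree under every valuation; it confirms the contrapositive equivalence above and rejects the converse.

```python
from itertools import product

def equivalent(lhs, rhs, num_vars):
    """True when lhs and rhs have the same truth value under every valuation."""
    return all(lhs(*v) == rhs(*v)
               for v in product([True, False], repeat=num_vars))

implies = lambda a, b: (not a) or b

# An implication is equivalent to its contrapositive...
print(equivalent(lambda p, q: implies(p, q),
                 lambda p, q: implies(not q, not p), 2))  # True
# ...but not to its converse.
print(equivalent(lambda p, q: implies(p, q),
                 lambda p, q: implies(q, p), 2))          # False
```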

Common equivalences

There are some very common equivalences in propositional logic that we can often use to simplify complicated expressions.

First, let's take a look at those equivalences, and then see how to use them to simplify complex formulae.

Laws of negation

The laws of negation specify what happens if we take the disjunction and conjunction of a proposition ::p:: with its negation. Hence the name 'negation laws'.

::\begin{aligned} p \vee \neg p &\equiv \bold T \\ p \wedge \neg p &\equiv \bold F \\ \end{aligned}::

The first equivalence is also known as the law of excluded middle. It's one of the three classical laws of thought, and it simply asserts that any given proposition is either true or false.

The second equivalence is also one of the laws of thought, formally known as the law of contradiction. It says that a given proposition can't be both true and false at the same time.
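Both laws are a one-line check with the `classify` sketch from earlier in this chapter:

```python
print(classify(lambda p: p or (not p), 1))   # tautology     (excluded middle)
print(classify(lambda p: p and (not p), 1))  # contradiction (law of contradiction)
```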

Laws of idempotence

Next up, we have the laws of idempotence. If you look up the meaning of the word 'idempotence', you'll find it describes an operation that gives the same result no matter how many times it is applied.

Hence, these laws show what happens when we take the disjunction and conjunction of a proposition with itself.

::\begin{aligned} p \vee p &\equiv p \\ p \wedge p &\equiv p \\ \end{aligned}::

Laws of identity

The laws of identity are another pair of interesting consequences of the definitions.

::\begin{aligned} p \vee \bold F &\equiv p \\ p \wedge \bold T &\equiv p \\ \end{aligned}::

They are referred to as the 'laws of identity' since each operation gives back the constituent proposition ::p:: unchanged: ::\bold F:: acts as an identity for disjunction, and ::\bold T:: as an identity for conjunction.

Laws of domination

The laws of domination are quite similar to the laws of identity.

::\begin{aligned} p \vee \bold T &\equiv \bold T \\ p \wedge \bold F &\equiv \bold F \\ \end{aligned}::

The values ::\bold T:: and ::\bold F:: dominate in each equivalence; hence the name 'laws of domination'.

The propositions on the right-hand side follow directly from the definition of the respective operation on the left-hand side. That is, if a disjunction contains a true proposition, then it is true; and if a conjunction contains a false proposition, then it is false.

Law of double negation

When we negate a proposition, we get its negation. When we negate it again, we get back the original proposition. This simple and intuitive idea is represented by the law of double negation.

::\begin{aligned} \neg (\neg p) \equiv p \end{aligned}::

Laws of commutativity

Let's say we have a proposition ::p \vee q::. Can we switch the places of the atomic propositions here and still have an equivalent proposition?

Well yes! This also applies to conjunctions. The laws of commutativity summarise this idea.

::\begin{aligned} p \vee q &\equiv q \vee p \\ p \wedge q &\equiv q \wedge p \\ \end{aligned}::

You might think this is not a useful property, but there are many concepts in mathematics that don't obey this law.

For example, consider the division ::a / b::. This is not always the same as ::b/a::, since division is not commutative in nature. We can't switch numbers in a division and expect the result to be the same!

Compare this with multiplication. ::a \times b:: is the same as ::b \times a::, since multiplication obeys the laws of commutativity.

Laws of associativity

Moving on, we have another pretty intuitive set of laws known as laws of associativity.

::\begin{aligned} p \vee (q \vee r) &\equiv (p \vee q) \vee r \\ p \wedge (q \wedge r) &\equiv (p \wedge q) \wedge r \\ \end{aligned}::

These laws simply state that, in a disjunction or conjunction of three propositions, it doesn't matter which propositions we group together and compute first.

We could associate the first two, or equivalently the last two — it's essentially the same.

Laws of distributivity

Next up, let's consider the laws of distributivity of disjunction over conjunction and of conjunction over disjunction.

::\begin{aligned} p \vee (q \wedge r) &\equiv (p \vee q) \wedge (p \vee r) \\ p \wedge (q \vee r) &\equiv (p \wedge q) \vee (p \wedge r) \\ \end{aligned}::

Can you guess which one of these two is the law of distributivity of disjunction over conjunction? Well, it's the first one. How do we know this?

Very simple — what is being distributed in the first equivalence? It's the disjunction operator. Hence, the first equivalence showcases the distributivity of disjunction over conjunction.

This idea is similar to the idea of distributivity of multiplication over addition, shown as follows:

::\begin{aligned} a \times (b + c) = (a \times b) + (a \times c) \end{aligned}::
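Since the distributive laws involve three variables, their truth tables have eight rows each, which is exactly the kind of check the `equivalent` sketch from earlier automates:

```python
# Distributivity of disjunction over conjunction...
print(equivalent(lambda p, q, r: p or (q and r),
                 lambda p, q, r: (p or q) and (p or r), 3))   # True
# ...and of conjunction over disjunction.
print(equivalent(lambda p, q, r: p and (q or r),
                 lambda p, q, r: (p and q) or (p and r), 3))  # True
```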

Laws of absorption

The laws of absorption are also useful:

::\begin{aligned} p \vee (p \wedge q) &\equiv p \\ p \wedge (p \vee q) &\equiv p \\ \end{aligned}::

Given any one law of absorption, the other can be derived very easily by applying the law of distributivity (together with idempotence) to the left-hand side.
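For instance, here is the second law derived from the first; the steps use distributivity, idempotence and then the first law of absorption:

::\begin{aligned} p \wedge (p \vee q) &\equiv (p \wedge p) \vee (p \wedge q) \\ &\equiv p \vee (p \wedge q) \\ &\equiv p \\ \end{aligned}::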

De Morgan's laws

The second-to-last, and perhaps the most useful, set of laws in the area of logic is De Morgan's laws.

They specify what would happen if we were to take the negation of a disjunction or the negation of a conjunction.

::\begin{aligned} \neg (p \vee q) &\equiv \neg p \wedge \neg q \\ \neg (p \wedge q) &\equiv \neg p \vee \neg q \\ \end{aligned}::
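Both laws can be confirmed with the `equivalent` sketch from earlier:

```python
print(equivalent(lambda p, q: not (p or q),
                 lambda p, q: (not p) and (not q), 2))  # True
print(equivalent(lambda p, q: not (p and q),
                 lambda p, q: (not p) or (not q), 2))   # True
```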

In the next chapter, we'll unravel the idea behind De Morgan's laws and see how closely they relate to our intuition in natural language.

Conditional-disjunction equivalence

Being able to simplify implications is necessary when dealing with complex expressions that contain many of them. In that regard, we can use the conditional-disjunction equivalence to replace a conditional with a disjunction.

::\begin{aligned} p \to q &\equiv \neg p \vee q \end{aligned}::
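To verify this without assuming the very equivalence being tested, the sketch below (our own illustration) defines the conditional row by row from its truth table and compares it against the disjunction ::\neg p \vee q::.

```python
from itertools import product

# The conditional p -> q, written out row by row from its truth table.
IMPLIES = {(True, True): True, (True, False): False,
           (False, True): True, (False, False): True}

print(all(IMPLIES[(p, q)] == ((not p) or q)
          for p, q in product([True, False], repeat=2)))  # True
```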