Aquinas’ Third Way argument II – Another counterexample

0. Introduction

In the previous post, I looked at Aquinas’ third way argument, as presented by apologist Tom Peeler. He proposed a causal principle, similar to what Aquinas proposed. Aquinas said:

“that which does not exist only begins to exist by something already existing”.

Peeler said:

“existence precedes causal influence”.

But basically, they are arguing for the same principle, namely:

Causal Principle) For something to begin to exist, it must be caused to exist by some pre-existing object.

From now on, let’s just call that ‘the causal principle’. Peeler was using this principle to support the first premise of his argument, which was:

“If there was ever nothing, there would be nothing now”.

The idea is that if Peeler’s principle were true, then the first premise is true as well. In the previous post, I argued that even if we accept all this, the argument does not show that an eternal being exists. Rather, it is compatible with an infinite sequence of contingent things.

In this post, I want to make a slightly different point. Up to now, we have conceded that the causal principle entails that there are no earlier empty times. However, I want to insist that this is only true if time is discrete. If time is continuous, then the causal principle does not entail that there are no earlier empty times. I will prove this by constructing a model in which time is continuous, in which there are earlier times which are empty and later times which are non-empty, and yet in which there is no violation of the causal principle.

1. The causal principle

I take the antecedent of this conditional premise, i.e. “there was ever nothing”, to mean ‘there is some time at which no objects exist’, which seems like the most straightforward way of taking it. Therefore, if the causal principle is to support the premise, the causal principle must be saying that if an object begins to exist, then it must not be preceded by a time at which no objects exist.

Strictly speaking, what the principle rules out is empty times immediately preceding non-empty times. Take the following model, where we have an empty time and a non-empty time, but at which they are not immediately next to one another on the timeline. Say that t1 is empty, and t3 is non empty:

[Figure: a timeline with an empty time t1 and a non-empty time t3, with an unspecified time t2 in between]

In order to use the causal principle to rule this sort of model out, we need to fill in what is the case at t2. So let’s do that. Either t2 is empty, or it is not. Take the first option. If t2 is empty, then t3 is immediately preceded by an empty time, and we have a violation of Peeler’s principle. Fair enough. What about the other option? Well, if t2 is non-empty, then t3 is not a case that violates Peeler’s principle, because it is not immediately preceded by an empty time. However, if some object exists at t2, then t2 itself is a non-empty time immediately preceded by an empty time, because t1 is empty. Therefore, this second route leads to a violation of Peeler’s principle as well.

The point is that if all we are told is that there is some empty time earlier than some non-empty time, without being told that the empty time immediately precedes the non-empty time, we can always follow the steps above to rule it out. We get to a violation of the causal principle by at least one iteration of the sort of reasoning in the previous paragraph.
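The step-by-step reasoning above can be sketched in a few lines of code (my own illustration, not anything from the original argument): on a discrete timeline, if some empty time precedes some non-empty time, scanning the times in between must turn up a non-empty time immediately preceded by an empty one.

```python
# Hypothetical sketch: times are integers, and `empty(t)` says whether time t
# is empty. Given an empty time lo earlier than a non-empty time hi, some time
# in between must be a non-empty time immediately preceded by an empty time.

def find_violation(empty, lo, hi):
    """Return a time t with empty(t - 1) and not empty(t)."""
    assert empty(lo) and not empty(hi) and lo < hi
    for t in range(lo + 1, hi + 1):
        if not empty(t) and empty(t - 1):
            return t

# The t1/t2/t3 case from above: t1 empty, t3 non-empty, t2 unspecified.
for t2_is_empty in (True, False):
    empty = lambda t, e=t2_is_empty: {1: True, 2: e, 3: False}[t]
    print(find_violation(empty, 1, 3))  # prints 3, then 2
```

Whichever way t2 is filled in, the scan finds a violation (at t3 if t2 is empty, at t2 if it is not), mirroring the dilemma in the paragraph above.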

However, this whole way of reasoning presupposes that time is discrete rather than continuous. If it is continuous, then we get a very different verdict. That is what I want to explain here. If time is continuous, we actually get an even more obvious counterexample than model 2.

2. Discrete vs continuous

Time is either discrete, or it is continuous. The difference is like that between the natural numbers (the whole numbers: 1, 2, 3, etc.) and the real numbers (which include fractions, decimal expansions, etc.). Here is the condition which is true of the continuous number line, and false of the discrete number line:

Continuity) For any two numbers, x and y, there is a third number, z, which is in between them.

So if we pick the numbers 1 and 2, there is a number in between them, such as 1.5. And, if we pick 1 and 1.5, then there is a number in between them, such as 1.25, etc, etc. We can always keep doing this process for the real numbers. For the natural numbers on the other hand, we cannot. On the natural numbers, there just is no number between 1 and 2.

A consequence of this is that there is no such thing as the ‘immediate successor’ of any number on the real line. If you ask ‘which number is the successor of 1 on the real number line?’, there is no answer. It isn’t 1.01, or anything like that, because there is always going to be a number between 1 and 1.01, like 1.005. That’s just because there is always going to be a number between any two numbers on the real number line. So there is no such thing as an ‘immediate successor’ on the real number line.
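The ‘no immediate successor’ point can be demonstrated with a tiny sketch (mine, purely for illustration): any candidate successor y of x is undercut by the midpoint of x and y.

```python
# Hypothetical illustration: for any candidate "immediate successor" y of x,
# the midpoint (x + y) / 2 lies strictly between them, so y was not immediate.

from fractions import Fraction  # exact rational arithmetic, no rounding

def between(x, y):
    """Return a number strictly between x and y (assumes x < y)."""
    return (x + y) / 2

x, y = Fraction(1), Fraction(101, 100)  # 1 and a candidate successor, 1.01
for _ in range(5):
    y = between(x, y)                   # 1.005, 1.0025, 1.00125, ...
    assert x < y                        # always still strictly above 1
```

However many times we undercut, the candidate never reaches 1, so nothing deserves the title of 1’s immediate successor.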

Exactly the same point carries over from the numerical case to the temporal case. If time is continuous, then no time has an immediately prior time or an immediately subsequent time. For any two times, there is a third time in between them.

This already means that there cannot be a violation of Peeler’s principle if time is continuous. After all, a violation requires a non-empty time immediately preceded by an empty time, and that configuration can never obtain on a continuous model, just because no time is immediately preceded by any other time, whether empty or non-empty. However, even though the principle cannot be violated, it doesn’t immediately follow that it can be satisfied. It turns out, rather surprisingly, that it can be.

3. Dedekind Cuts

In order to spell out the situation properly, I need to introduce one concept: that of a Dedekind cut. Named after the nineteenth-century mathematician Richard Dedekind, cuts were originally introduced as a way of getting from the rational numbers (which can be expressed as fractions) to the real numbers (some of which cannot be expressed as fractions). They are defined as follows:

A partition of the real numbers into two nonempty subsets, A and B, such that all members of A are less than those of B and such that A has no greatest member. (http://mathworld.wolfram.com/DedekindCut.html)

We can also run the cut the other way round, of course. On this version, all members of B are still greater than all those of A, but it is A that has a greatest member, and B that has no least member. This is how we will use it from now on.

4. Model 5

Let’s build a model of continuous time that uses such a cut. Let’s say that there is a time, t1, which is the last empty time, and that every time earlier than t1 is also empty. The rest of the timeline is made up of times strictly later than t1, and they are all non-empty:

[Figure: a continuous timeline in which every time up to and including t1 is empty, and every time after t1 is non-empty]

The precise numbers on here are just illustrative. All it is supposed to be showing is that every time up to and including t1 is empty, and that every time after t1 is non-empty. There is no first non-empty time, just because there is no time immediately after t1 at all. But there is a last empty time, which is just t1.

This model has various striking properties. Obviously, because it is a continuous model, there can be no violation of Peeler’s principle (because a violation requires time to be discrete). However, it is not just that it avoids violating the principle in this technical sense. It also seems to possess a property that actively satisfies Peeler’s causal principle. What I mean is that on this model, every non-empty time is preceded (if not immediately) by non-empty times. Imagine we were at t1.01 and decided to travel down the number line towards t1. As we travel, like Achilles in Zeno’s paradox, we find ourselves halfway between t1.01 and t1, i.e. at t1.005. If we keep going, we find ourselves halfway between t1.005 and t1, i.e. at t1.0025, and so on. We can clearly keep this process going forever. No matter how close we get to t1, there will always be more, earlier, non-empty times.
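Model 5 can be written down as a toy program (my own encoding, using rational numbers for simplicity): emptiness is just the lower half of the cut at t1, and for any non-empty time we can always produce a strictly earlier non-empty time.

```python
# Hypothetical encoding of model 5: times are rationals; a time is empty
# exactly when it is less than or equal to t1 = 1 (the lower set of the cut).

from fractions import Fraction

def empty(t):
    return t <= 1                     # t1 = 1 is the last empty time

def earlier_nonempty(t):
    """For a non-empty time t, return a strictly earlier non-empty time."""
    assert not empty(t)
    return (t + Fraction(1)) / 2      # halfway back toward t1, still above it

t = Fraction(101, 100)                # "now": t1.01
for _ in range(10):
    t = earlier_nonempty(t)           # t1.005, t1.0025, ... never reaches t1
assert not empty(t)                   # every non-empty time has earlier ones
```

The loop can be iterated as long as we like: there is a last empty time, but no first non-empty time, exactly as the figure describes.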

So the consequences can be expressed as follows. Imagine that it is currently t1.01. Therefore, it is the case that some object exists. It is also the case that at some time in the past (such as t1) no objects existed. Whatever exists now could have been brought into existence by previously existing objects, and each of them could have been brought into existence by previously existing objects, and so on forever. So, it seems like this model satisfies Peeler’s version of the causal principle, that existence precedes causal influence, and Aquinas’ version of the principle, that “that which does not exist only begins to exist by something already existing”. Both of these are clearly satisfied in this model, because whatever exists has something existing earlier than it. However, it does so even though there are past times at which nothing exists.

5. Conclusion

The significance of this is as follows. If we assume that time is discrete, then the causal principle entails that no non-empty time has any empty times earlier than it. So if t1 is non-empty, then there is no earlier time t0 such that t0 is empty. So if time is discrete, then the causal principle entails premise 1 of the argument (i.e. it entails that “If there were ever nothing, there would be nothing now”).

But, things are different if time is continuous. In that case, we can have it that the causal principle is true along with there being earlier empty times. The example of how this works is model 5 above. Something exists now, at t1.01, and there are times earlier than this which are non-empty. Every time at which something exists has times earlier than it during which some existing thing could have used its causal powers to bring the subsequent thing into existence. There is never any mystery about where the causal influence could come from; it always comes from some previously existing object. However, there are also empty times on this model, i.e. all moments earlier than or equal to t1. This means that the antecedent of the conditional premise is true (“if there ever was nothing”), but the consequent is false (“there would be nothing now”). So even though the causal principle looks true, the first premise is false. So if time is continuous, then the causal principle (even if granted for the sake of the argument) does not entail the first premise, and so does not support it being true.


Aquinas’ Third Way Argument

0. Introduction

I recently listened to a podcast in which the host, David Smalley, interviewed a Christian apologist, Tom Peeler. The conversation was prefaced by Peeler claiming that he could prove that God exists without the use of the Bible.

The first argument offered by Peeler was essentially Aquinas’ ‘Third Way’ argument, but done in a way that made it particularly easy to spell out the problem with it. In fact, Peeler gave two arguments – or, rather, I have split what he said into two arguments to make it easier to explain what is going on. Once I have explained how the first argument fails, it will be obvious how the second one fails as well. The failures of Peeler’s argument also help us to see what is wrong with Aquinas’ original argument.

1. Peeler’s first argument

Peeler’s first argument went like this (at about the 23 minute mark):

  1. If there were ever nothing, there would still be nothing
  2. There is something
  3. Therefore, there was never nothing

As Peeler pointed out, the argument is basically a version of modus tollens, and so is certainly valid. But is it sound? I will argue that even if we grant that the premises are true, the argument doesn’t establish what Peeler thinks it does.

Here is the sort of consideration that is motivating premise 1. In the interview, Peeler was keen to stress that his idea required merely the fact that things exist and the principle that “existence precedes causal influence”. There is an intuitive way of spelling out what this principle means. Take some everyday object, such as your phone. This object exists now, but at some point in the past it did not exist. It was created, or made. There is some story, presumably involving people working in a factory somewhere, which is the ‘causal origin’ of your phone. The important part about this story for our purposes is that the phone was created via the causal powers of objects (people and machines) that pre-existed the phone. Those pre-existing objects exerted their causal influence which brought the phone into existence; or, more mundanely, they made the phone. The idea is that for everything that comes into existence, like the phone, there must be some pre-existing objects that exert causal influence to create it. As Aquinas puts it: “that which does not exist only begins to exist by something already existing”.

One way to understand what this principle is saying is to consider what it rules out. What it rules out is a time at which no objects exist at all, followed immediately by a time at which some object exists.

Imagine that at time t0, no objects exist at all. Call that an ‘empty time’. Then, at t1 some object (let’s call it ‘a‘) exists; thus, t1 is a ‘non-empty time’. This situation violates Peeler’s causal principle. This is because a has been brought into existence (it has been created), but the required causal influence has no pre-existing objects to wield it. We can picture the situation as follows:

[Figure: an empty time t0 immediately followed by a non-empty time t1, at which a exists]

At the empty time, t0, there is nothing (no object) which can produce the causal influence required to bring a into existence at t1. Thus, the causal influence seems utterly mysterious. This is what Peeler means by ‘nothing can come from nothing.’ So we can understand Peeler’s causal principle in terms of what it rules out – it rules out things coming into existence at times that are immediately preceded by empty times, or in other words it rules out non-empty times immediately following from empty times. Let’s grant this principle for the sake of the argument to see where it goes.
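What the principle rules out can be captured in a one-line predicate (a sketch of my own, not Peeler’s formulation): a violation is any empty time immediately followed by a non-empty one on a discrete timeline.

```python
# Hypothetical sketch: a discrete timeline is a list of sets of objects, one
# set per consecutive time. The principle forbids an empty set immediately
# followed by a non-empty set.

def violates_principle(timeline):
    return any(
        prev == set() and cur != set()
        for prev, cur in zip(timeline, timeline[1:])
    )

print(violates_principle([set(), {"a"}]))       # True: a appears from nothing
print(violates_principle([{"g"}, {"g", "a"}]))  # False: no time is empty
```

The first case is exactly the t0/t1 situation in the figure; the second previews the kind of model that satisfies the principle.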

If we do accept all this, then it follows from the existence of objects, such as your phone, that there can never have been a time at which no objects existed (i.e. that there are no empty times in the past). That’s because of the following sort of reasoning. If this time has an object, such as your phone, existing at it, then this time must not be immediately preceded by a time at which no objects existed. So the phone existing now means that the immediately preceding time has objects existing at it. But the very same reasoning indicates that this prior time must itself be preceded by a time at which objects existed, and so on for all times.

We can put it like this: if this time is non-empty, then so is the previous one. And if that time is non-empty, then so is the previous one, etc, etc. Thus, there can never be an empty time in the past if this time is non-empty.

This seems to be the most charitable way of putting Peeler’s argument.

2. Modelling the argument

For all we have granted so far, at least three distinct options are still available. What I mean is that the argument makes certain requirements of how the world is, for its premises and conclusion to be true. Specifically, it requires that a non-empty time not be immediately preceded by an empty time. But there are various ways we can think about how the world is which do not violate this principle. A model is a way that the world is (idealised in the relevant way). If the model represents a way that the world could be on which the premises and conclusion of an argument are true, then we say that the model ‘satisfies’ the argument. I can see at least three distinct models which satisfy Peeler’s argument.

2.1 Model 1

Firstly, it could be (as Peeler intended) that there is a sequence of non-necessary objects being caused by previous non-necessary objects, which goes back to an object which has existed for an infinite amount of time – an eternal (or necessary) object. Think of the times before t1 as the infinite sequence: {… t-2, t-1, t0, t1}. God, g, exists at all times (past and future), and at t0 he exerted his causal influence to make a come to exist at t1 alongside him:

[Figure: model 1. The eternal object g exists at every time in {… t-2, t-1, t0, t1}, and a comes to exist at t1 alongside it]

On this model, there are no times in which an object comes into existence which are immediately preceded by an empty time, so this model clearly does not violate Peeler’s principle. Part of the reason for this is that there are no empty times on this model at all, just because God exists at each time. Anyway, the fact that this model doesn’t violate Peeler’s causal principle means that there is at least one way to model the world which is compatible with Peeler’s argument. The world could be like this, for all the truth of the premises and conclusion of Peeler’s argument requires.

But, this is not the only option.

2.2 Model 2

Here is another. In this model, each object exists for only one time, and is preceded by an object which itself exists for only one time, in a sequence that is infinitely long. Each fleeting object is caused to exist by the previous object, and causes the next object to exist. On this model there are no empty times, so it is not a violation of Peeler’s principle. Even though it does not violate the principle, at no point is there an object that exists at all times. All that exists are contingent objects, each of which only exists at one time.

Think of the times before t1 as the infinite sequence { … t-2, t-1, t0, t1}, and that at each time, tn, there is a corresponding object, bn:

[Figure: model 2. Each time tn has its own fleeting object bn, in an infinite sequence]

Thus, each time has an object (i.e. there are no empty times) and each thing that begins to exist has a prior cause coming from an object. No object that begins to exist immediately follows from an empty time. Therefore, this model satisfies Peeler’s argument as well.
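Model 2 can be checked mechanically over a finite window of its infinite past (my own encoding, purely illustrative): every time is occupied, yet no single object occupies more than one time.

```python
# Hypothetical encoding of model 2: at each time tn there is exactly one
# fleeting object bn, which exists at that time and no other.

window = range(-5, 2)                     # a finite glimpse of {…, t-1, t0, t1}
model2 = {n: {f"b{n}"} for n in window}   # the set of objects at each time

no_empty_times = all(model2[n] for n in window)
all_objects = set().union(*model2.values())
no_eternal_object = all(
    any(obj not in model2[n] for n in window) for obj in all_objects
)

assert no_empty_times      # the forbidden empty-then-non-empty case never arises
assert no_eternal_object   # yet nothing exists at every time
```

Both checks pass together, which is the whole point of the model: the causal principle is satisfied without any one object existing at all times.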

2.3 Model 3

There is a third possibility as well. It is essentially the same as the second option, but with a merely finite set of past times. So, on this option, there is a finitely long set of non-empty times, say there are four times: {t-2, t-1, t0, t1}. Each time has an object that exists at that time, just like in model 2. The only real difference is that the past is finite:

[Figure: model 3. Four times {t-2, t-1, t0, t1}, each with its own object, with t-2 as the first time]

In this case, t-2 is the first time, and b-2 is the first object.

However, there might be a problem with this third option. After all, object b-2 exists without a prior cause. It isn’t caused to exist by anything that preceded it (because there are no times preceding t-2 on this model). Doesn’t this make it a violation of the causal principle used in the argument?

Not really. All that Peeler’s causal principle forbids is for an object to begin to exist at a time immediately following an empty time. But because there are no empty times on this model, that condition isn’t being violated. Object b-2 doesn’t follow an empty time. It isn’t preceded by a time in which nothing existed. It just isn’t preceded by anything.

Now, I imagine that there is going to be some objection to this type of model. Object b-2 exists, but it was not caused to exist. Everything which comes into existence does so because it is caused to exist. But object b-2 exists yet is not caused to exist by anything.

We may reply that object b-2 is not something which ‘came into existence’, since part of what it is for an object x to ‘come into existence’ is that there be a time before x exists at which it does not exist. Seeing as there is no time before t-2, there is also no time prior to t-2 at which b-2 does not exist. So b-2 simply ‘exists’ at the first time in the model, rather than ‘coming into existence’ at it. Remember how Aquinas put it: “that which does not exist only begins to exist by something already existing”. There is no prior time at which b-2 is “that which does not exist”. It simply is, at the first time.

No doubt, this reply will seem to be missing the importance of the objection here. It looks like a technicality that b-2 does not qualify as something which ‘comes into existence’. The important thing, Peeler might argue, is that b-2 is a contingent thing that exists with no cause for it. That is what is so objectionable about it.

If that is supposed to be ruled out, it cannot be merely on the basis of Peeler’s causal principle, but must be ruled out on the basis of a different principle. After all, Peeler’s principle merely rules out objects existing at times that are immediately preceded by empty times. That condition is clearly not violated in model 3. The additional condition would seem to be that for every non-necessary object (such as b-2), there must be a causal influence coming from an earlier time. This principle would rule out the first object being contingent, but it is strictly more than what Peeler stated he required for his argument to go through.

However, let us grant such an additional principle, just for the sake of the argument. If we do so, then we rule out models like model 3. However, even if we are kind enough to make this concession, this does nothing to rule out model 2. In that model, each object is caused to exist by an object that precedes it in time, and there are no empty times. Yet, there is no one being which exists at all earlier times (such as in model 1).

The existence of such an eternal being is one way to satisfy the argument, but not the only way. Because model 2 (which has no eternal being in it) also satisfies the argument, the argument does not establish the existence of such an eternal being.

So, even if we grant the premises of the first argument, it doesn’t establish that there is something which is an eternal necessary object. It is quite compatible with a sequence of merely contingent objects.

3. Peeler’s second argument

From the conclusion of the first argument, Peeler tried to make the jump to there being a necessary object, and seemed to make the following move:

  1. There was never nothing
  2. Therefore, there is something that has always been.

The fact that these extra escape routes are not blocked off by the first argument should give you some reason to expect the inference in the second argument to be invalid. And it is. It trades on a simple scope distinction, and is an instance of the ‘modal fallacy’.

There being no empty times in the past only indicates that each time in the past had some object or other existing at it. It doesn’t mean that there is some object in particular that existed at each of the past times (such as God). So long as the times are non-empty, each time could be occupied by an object that exists only for that time (as in our second and third models), for all the argument has shown.

The inference in the second argument is like saying that because each room in a hotel has someone checked in to it, that means that there is some particular individual person who is checked in to all of the rooms. Obviously, the hotel can be full because each room has a unique individual guest staying in it, and doesn’t require that the same guest is checked in to every room.
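The hotel point is just a quantifier-scope distinction, and can be made concrete in a few lines (my own sketch): ‘every room has some guest’ does not entail ‘some guest is in every room’.

```python
# Hypothetical full hotel: each room has its own, distinct guest.
full_hotel = {101: "alice", 102: "bob", 103: "carol"}
guests = set(full_hotel.values())

# "For every room, there is a guest checked in to it":
every_room_occupied = all(guest is not None for guest in full_hotel.values())

# "There is some one guest checked in to every room":
some_guest_in_every_room = any(
    all(full_hotel[room] == guest for room in full_hotel) for guest in guests
)

assert every_room_occupied             # the first reading is true
assert not some_guest_in_every_room    # the second reading is false
```

Swapping the order of the two quantifiers changes the claim entirely, which is exactly the slip the second argument makes with times and objects.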

When put in such stark terms, the modal fallacy is quite evident. However, it is the sort of fallacy that is routinely made in informal settings, and in the history of philosophy before the advent of formal logic. Without making such a fallacious move, there is no way to get from the conclusion of Peeler’s first argument to the conclusion of the second argument.

4. Aquinas and the Third Way

In particular, medieval logicians often struggled with scope distinctions, as their reasoning was carried out in scholastic Latin rather than in symbolic logic. That they managed to make any progress at all is testament to how brilliant many of them were. Aquinas is in this category, in my view; brilliant, but prone to making modal fallacies from time to time. I think we can see the same sort of fallacy if we look at the original argument that is motivating Peeler’s argument.

Here is how Aquinas states the Third Way argument:

“We find in nature things that are possible to be and not to be, since they are found to be generated, and to corrupt, and consequently, they are possible to be and not to be. But it is impossible for these always to exist, for that which is possible not to be at some time is not. Therefore, if everything is possible not to be, then at one time there could have been nothing in existence. Now if this were true, even now there would be nothing in existence, because that which does not exist only begins to exist by something already existing. Therefore, if at one time nothing was in existence, it would have been impossible for anything to have begun to exist; and thus even now nothing would be in existence — which is absurd.” (Aquinas, Summa Theologiae, emphasis added)

This argument explicitly rests on an Aristotelian notion of possibility. The philosopher Jaakko Hintikka explains Aristotle’s view:

“In passage after passage, [Aristotle] explicitly equates possibility with sometime truth, and necessity with omnitemporal truth” (The Once and Future Sea Fight, p. 465, emphasis added)

This is quite different from the contemporary view of necessity as truth in all possible worlds. On the contemporary view, there could be a contingent thing that exists at all times in some world. Therefore, being eternal and being necessary are distinct on the modern view, but they are precisely the same thing on the Aristotelian view. We will come back to this in a moment. For the time being, just keep in mind that Aquinas, and by extension Peeler, are presupposing a very specific idea of what it means to be necessary or non-necessary.
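The contrast can be encoded in a toy way (my own illustration, not anything from the post): on the Aristotelian reading, ‘necessary’ just means ‘exists at every time’, so an eternal object automatically counts as necessary, and a fleeting one as contingent.

```python
# Hypothetical encoding of the Aristotelian view: possibility is sometime
# truth, necessity is omnitemporal truth.

times = range(4)
exists_at = {"g": set(times), "b": {2}}   # g exists always, b only at t2

def necessary(obj):
    """Aristotelian necessity: the object exists at every time."""
    return all(t in exists_at[obj] for t in times)

def possible_not_to_be(obj):
    """Aristotelian contingency: the object fails to exist at some time."""
    return any(t not in exists_at[obj] for t in times)

assert necessary("g")             # eternal = necessary, on this view
assert possible_not_to_be("b")    # fleeting = contingent, on this view
```

On the modern possible-worlds view, by contrast, g could exist at every time and still be contingent, which is why the two notions come apart.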

We can see quite explicitly that Aquinas is using the Aristotelian notion of necessity when he says “…that which is possible not to be at some time is not”. This only makes sense on the Aristotelian view, and would be rejected on the modern view. But let’s just follow the argument as it is on its own terms for now.

The very next sentence is: “Therefore, if everything is possible not to be, then at one time there could have been nothing in existence.” What Aquinas is doing is imagining what would be the case if all the objects that existed were non-necessary objects. If that were the case, then no object would exist at every time, i.e. each object would not exist at some time or other. That is the antecedent condition Aquinas is exploring (i.e. that “everything is possible not to be”).

What the consequent condition is supposed to be is less clear. As he states it, it is “at one time there could have been nothing in existence”. We can read this in two ways. On the one hand he is saying that if everything were non-necessary, then there is in fact an earlier time that is empty. On the other hand, he is saying that if everything were non-necessary, there could have been an earlier time that is empty.

Let’s think about the first option first. It seems quite clear that it doesn’t follow from the assumption that everything is non-necessary that there is some time or other at which nothing exists. Model 2 is an example of a model in which each object is non-necessary, but in which there are no empty times. If Aquinas is thinking that “if everything is possible not to be, then at one time there could have been nothing in existence” means that each object being non-necessary implies that there is an empty time, then he is making a modal fallacy. This time, the fallacy is the other way round from Peeler’s example: just because each guest is such that they have not checked in to every room of the hotel, that does not mean there is a room with no guest checked in to it. Think of the hotel in which each room has a unique guest in it. Exactly the same thing applies here too: just because every object is such that it fails to exist at some time, that does not mean that there is a time at which no object exists. Just think about model 2, in which each time has its own unique object.

Thus, if we read Aquinas this first way, then he is committing a modal fallacy.

So let’s try reading him the other way. On this reading he is saying that the assumption that everything is non-necessary is compatible with there being an empty time. One way of reading the compatibility claim is that there is some model on which the antecedent condition (that every object is non-necessary) and the consequent condition (that there is an empty time) are both true. And if that is the claim, then it is quite right. Here is such a model (call it model 4):

[Figure: model 4. Objects a and b each fail to exist at some time, and t2 is empty]

On this model, there are two objects, a and b, and they are both non-necessary (i.e. they both fail to exist at some time). Also, as it happens, there is an empty time, t2; both a and b fail to exist at t2. So on this model, the antecedent condition (all non-necessary objects) and the consequent condition (some empty times) are both satisfied.
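Model 4 can be written down as a small table and checked (my own filling-in of the details; the post only specifies that both a and b fail to exist at t2): both objects come out non-necessary, and t2 comes out empty.

```python
# Hypothetical encoding of model 4: four times; suppose a exists at t0 and t1,
# b exists only at t3, and nothing exists at t2.

times = [0, 1, 2, 3]
exists = {"a": {0, 1}, "b": {3}}   # exists[obj] = times at which obj exists

# Every object fails to exist at some time (i.e. is non-necessary):
all_non_necessary = all(set(times) - ts for ts in exists.values())

# The empty times are those at which no object exists:
empty_times = [t for t in times if all(t not in ts for ts in exists.values())]

assert all_non_necessary
assert empty_times == [2]          # antecedent and consequent both satisfied
```

So this single model witnesses the compatibility claim; the question in what follows is whether compatibility is enough.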

However, while this claim is true, it is incredibly weak. The difference is between being ‘compatible with’ and ‘following from’. As an example of the difference: it is compatible with my being a man that my name is Alex, but it doesn’t follow from my being a man that my name is Alex. If we want the consequent to follow from the antecedent condition, we need it to be the case that every model which satisfies the antecedent condition also satisfies the consequent condition, not just that some model does. But it is clearly not the case that every model fits the bill, again because of model 2. It satisfies the condition that every object is non-necessary, but it doesn’t satisfy the condition that there are some empty times.

So what it comes down to is that the claim that there are only non-necessary objects is compatible with the claim that there are empty times, but it is equally compatible with the claim that there are no empty times. Being compatible with both means that it is simply logically independent of either. So nothing logically follows from the claim that there are only non-necessary objects about whether there are any empty times in the past or not.

So on the first way of reading Aquinas here, the claim is false (because of model 2). On the second way of reading him, the claim is true, but it is logically independent of the consequent claim. On either way of reading him, this crucial inference in the argument doesn’t work.

And with that goes the whole argument. It is supposed to establish that there is an eternal object, but even if you grant all of the assumptions, it is compatible with there not being an eternal object.

5. Conclusion

Peeler set out an argument whose first premise was that if there were ever nothing, there would be nothing now. The truth of the premises and the conclusion is satisfied by, or compatible with, model 2, and so does not require that an eternal object (like God) exists. The second argument was that if it is always the case that something exists, then there is something which always exists. That is a simple modal fallacy. Lastly, we looked at Aquinas’ original argument, which either commits a similar modal fallacy, or simply assumes premises which do not entail the conclusion.

 

Inspiring Philosophy and the Laws of Logic: Part 1

0. Introduction

There is a YouTube channel, called Inspiring Philosophy (henceforth IP), which is about philosophical apologetics. It has about 45k subscribers, and the videos have high visual production values. One video in particular caught my attention, as it was about the laws of logic.

Despite the relatively large audience and good production values, IP makes some pretty baffling mistakes, and a lot of them are very easy to spell out. I will try to explain the main ones here.

1. Confusions

IP’s lack of understanding about the issues involved contributes to a confusion about what is being claimed by his imagined ‘opponents’, and what he is trying to say in reply to them. This fundamental confusion is at the heart of the entire video.

In the very opening section, IP asks two general questions:

“Can we trust the laws of logic? Is logic safe from criticism, or is it just another man made construct built on sand?”

These questions are actually quite vague. What does it mean to ‘trust’ the laws of logic? Does it just mean ‘are the laws of logic true?’

More importantly, what exactly does he mean by ‘the laws of logic’? He never specifies what he takes the ‘laws of logic’ to actually be. Commonly in discussions like this, they are taken to be the law of excluded middle, the law of non-contradiction, and the law of identity. They are part of what is known as ‘classical logic’, which we can think of as a group of logical systems which all share a number of principles, including those laws. We must assume that this is what IP means. Let’s refer to these three laws as the ‘classical laws of logic’.

The historical development of logic shows that, in one sense, classical logic is not safe from criticism. Just like mathematics, logic has evolved over time, and it has gone through various changes (see this, and this). In particular, there are logical systems which do not include the classical laws of logic; there are systems of logic which have contradictions in, or which have exceptions to excluded middle, or where identity is treated very differently. So, suppose that IP is asking: ‘are there logical systems which do not include those particular logical laws?’ The answer is: ‘yes, there are non-classical logics’.

Surely, though, IP thinks he is asking a more interesting question than this. He wants to ask whether some other non-classical logical system should be regarded as the right one. This is a much more interesting question, and much more difficult to answer. I assume that IP wants to say that the classical laws of logic are the right ones, and all the other non-classical alternatives are not right. That would be a coherent position for him to take: he is defending classical logic against rival non-classical logics.

However, this is not what IP actually articulates throughout the video. The video starts off with a claim which seems to be the target that IP wants to argue against. He says:

“Many argue that the laws of logic are not true”.

Here we see the fundamental confusion right at the heart of the video. There are two distinct issues that IP never distinguishes: a kind of local challenge to classical logic, and a global challenge to all logic:

Local) “Many argue that classical logic is not the right logic.”

Global) “Many argue that there is no right logic at all.”

While the first option is clearly something many people do argue, it is not quite clear whether the second option even makes sense. Are there really such people who argue that there is no such thing as logic? Who are these people? IP doesn’t ever say.

One of the main problems in what follows is that IP switches back and forth between the local and global challenge, as if he is unaware of the distinction.

2. The argument

In the first half of the video, IP offers what he calls a “simple argument” to use as a foil to respond to. He does not say where he got this argument from, but I suspect that he got it from here.

The argument goes like this:

  1. Assume that the laws of logic are true
  2. All propositions are either true or false
  3. The proposition “This proposition is false” is neither true nor false
  4. There exists at least one proposition that is neither true nor false
  5. It is not the case that all propositions are either true or false
  6. It both is and is not the case that all propositions are either true or false
  7. Therefore, the laws of logic are not true

We need to ignore the fact that the first premise is an imperative (a command to assume something), and so does not express a proposition. We also need to ignore that the argument is not formally valid; strictly speaking, the conclusion does not formally follow from the premises. You have to assume that by ‘the laws of logic’ we mean to include the law of bivalence. If you want an argument to be formally valid, you cannot keep these sorts of assumptions implicit.

Basically, what is going on with this argument is a challenge to classical logic, or really any logic which has the semantic principle of ‘bivalence’. So it is an example of a local challenge. The principle of bivalence is expressed in premise 2, and it says that each proposition has exactly one of the following two truth values: ‘true’ or ‘false’. This principle is the target of the argument.

The liar’s paradox is notoriously difficult to give a satisfying account of within the constraints of classical logic. Therefore, some people say that the only way to account for it is to give up some aspect of classical logic. Thus, considerations of the liar’s paradox give some people a reason to argue that classical logic needs to be rejected. In this case, the idea implicit in premise 3 is that the liar’s paradox requires bivalence to be false. They say that the Liar Proposition, i.e. “This proposition is false”, is itself neither true nor false. If they are right about this, then classical logic must be wrong. This is because classical logic says that all propositions are either true or false, but there is a proposition which is neither (i.e. the Liar Proposition).

To defend classical logic against this charge, we would expect IP to argue that the liar’s paradox is not solved by treating the Liar Proposition as neither true nor false, but that it can be solved without giving up any of the assumptions of classical logic. This would undermine the reason given here for thinking that bivalence had an exception.

However, at this point IP starts to show just what a poor grasp he has of what this argument is supposed to be showing, and what he needs to do to defend classical logic against it.

He says that “there are several problems with this argument”, but he criticises premise 2. Now, this is odd, because premise 2 is just an expression of bivalence, which is part of classical logic. If he is defending classical logic, then he should be defending premise 2; yet, he is about to offer a reason to doubt it.

IP says that the problem with premise 2 is that not all propositions are either true or false; some are neither true nor false. His example is the following:

“Easter is the best holiday”.

His reasons for thinking that “Easter is the best holiday” is neither true nor false are strange. He says that that proposition “Cannot be proven true or false” and that it is “just an expression of opinion”. “So,” he continues, “you can have propositions that are neither true nor false. Nothing in either logic or language denies this”.

Now, just hold on a minute. Let’s grant IP’s claim that the proposition “Easter is the best holiday” merely expresses an opinion. This is ambiguous between two different things.  On one hand, saying that it merely expresses an opinion might mean that it is just shorthand for:

“My opinion is that Easter is the best holiday”

If that is what IP means, then surely “Easter is the best holiday” can be true. After all, I have opinions, and sometimes they are true. In particular, the proposition “My opinion is that Easter is the best holiday” is true just so long as I really do prefer Easter to all other holidays. It would be false if I happened to prefer Halloween to Easter, etc. What is supposed to be the problem here? If such propositions are expressions of opinion in this sense, that doesn’t mean that they are not true or false.

On the other hand, “Easter is the best holiday” might not be shorthand for “My opinion is that Easter is the best holiday”. It might be taken to be something like: “Yey! Easter!” If that is what IP means, then it doesn’t have a truth-value, but then it isn’t really a proposition at all.

So, it seems like either “Easter is the best holiday” is a proposition with a truth-value, or it lacks a truth-value precisely because it isn’t a proposition. Either way round, it doesn’t seem to be any reason to doubt bivalence.

He also says that it cannot be proven. But if “Easter is the best holiday” is just taken as a proposition, then it can be proven in the same way as any other proposition:

  1. If p, then “Easter is the best holiday”.
  2. p
  3. Therefore, “Easter is the best holiday”.

Why IP thinks we cannot enter “Easter is the best holiday” into a proof like this is a mystery.

IP concludes that the argument doesn’t work, on the basis that propositions like “Easter is the best holiday” are neither true nor false. As we have just seen, his reasons for thinking that this sort of proposition is neither true nor false are pretty unconvincing. But let’s just grant them for the sake of the argument.

He doesn’t seem to realise that if “Easter is the best holiday” is neither true nor false, then he is effectively conceding exactly the thing that the argument was supposed to be showing, i.e. that there are exceptions to classical logic. If his own example were genuinely an example of a proposition that lacked a truth value, this would be enough to undermine classical logic. So, he isn’t showing something about the argument that is wrong; he is just giving another (albeit more flawed) instance of a counterexample to classical logic.

3. Gödel

At around 2:20, IP moves on to talk about Kurt Gödel:

“The argument itself is based on Gödel’s theorems, which many think shows logic doesn’t work”.

I think what IP has in mind is that there is another type of challenge to classical logic, this time coming from Gödel’s incompleteness theorems. He gives a statement about what the incompleteness theorems show, but it crucially mistakes (and overstates) their true significance. This leaves IP drawing all the wrong consequences.

IP says that Gödel’s incompleteness theorems show that:

“No consistent system of axioms whose theorems can be listed by an ‘effective procedure’ is capable of proving all truth”

This statement stands out a bit in the video, and it sounds like IP has got it from somewhere, but he never gives any citations for this quote, so we have to guess. My first guess was Wikipedia, and I was right. What is revealing about the quote is what he leaves off. Here is how it appears on Wikipedia:

[screenshot of the relevant sentence from the Wikipedia article]

The quote in full (with the bit he missed off in italics) is:

“No consistent system of axioms whose theorems can be listed by an ‘effective procedure’ is capable of proving all truths about the arithmetic of the natural numbers“.

There is a very big difference between showing that no consistent system of axioms can prove all truth, and showing that no consistent system of axioms can prove all truths about the arithmetic of the natural numbers. I don’t know whether he thought the bit he left off was unimportant, or whether he left it off on purpose to jazz up his point, but either way leaving it off completely changes the significance of Gödel’s incompleteness theorems.

The thing is that (when we look at it properly) Gödel’s incompleteness theorems do not pose a direct local challenge to classical logic. What they show is compatible with non-contradiction, excluded middle and the law of identity all being true (along with all the other principles of classical logic).

What the theorems show is that any consistent system of logic that is powerful enough to express the propositions of arithmetic cannot prove all of the true ones.

So, the result applies to a certain type of logic, called ‘mathematical logic’. This logic is built up out of first-order logic, which is itself a very basic type of classical logic (one that respects all the principles IP presumably wants to defend). If you add the right axioms to this logic, then it becomes capable of expressing things like 1+1=2, etc. Once it is able to do that, we call it mathematical logic. Gödel’s incompleteness theorems apply specifically to mathematical logic.
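The step from a basic logic to one that can express arithmetic can be illustrated with a toy sketch (my own, purely illustrative, and not anything from the video or the post): Peano-style numerals as iterated successors of zero, with addition defined by the usual recursive axioms.

```python
# Toy Peano-style arithmetic (an illustrative sketch):
# numerals are iterated successors of zero, and addition is defined by
# the two recursive axioms  m + 0 = m  and  m + S(n) = S(m + n).
ZERO = "0"

def succ(n):
    return ("S", n)

ONE = succ(ZERO)
TWO = succ(ONE)

def add(m, n):
    if n == ZERO:              # axiom: m + 0 = m
        return m
    return succ(add(m, n[1]))  # axiom: m + S(k) = S(m + k)

# With these axioms in place, 1 + 1 = 2 becomes a derivable fact of the system.
```

Once a system can express and derive facts like this, it is rich enough for Gödel's incompleteness theorems to apply to it.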

And because this mathematical logic itself respects the classical principles (it is a type of classical logic), this means that Gödel is just telling us something about the limits of a certain type of classical logic (classical logic that is capable of expressing arithmetic). It is pointing out a limitation in mathematical logic. That is not in itself straightforwardly a reason to think that classical logic is not the correct logic, or that the ‘laws of logic’ are not true.

Except… it might be.

The strange thing about Gödel’s proof is that it shows that arithmetic, and any more complex bit of mathematics, cannot be modelled in classical logic without having ‘blind spots’, where there is something which is true but not provable in that logic. Yet, we might just think that we obviously can prove everything in arithmetic; we might just find the limits of proof in mathematical logic to be an unacceptable consequence. Well, if you did think this, then you could use this as a reason to think that there must be contradictions.

This is because the actual theorems can be thought of as ‘either-or’ statements. They can be thought of as saying ‘either mathematical logic is consistent but has blind-spots, or it has no blind-spots but it has some contradictions in it’ – Gödel is telling us that mathematical logic is either incomplete or inconsistent – either there is something that is true but not provable, or the law of non-contradiction is false.

If you thought that the price (of denying non-contradiction) was worth it so that you didn’t have any of these weird blind-spots in your proof-theory, then you might be willing to accept the inconsistent option. Most people find contradictions more troubling than blind-spots though, and so don’t go that route. But, that is probably the most direct sort of attack you could make from Gödel against classical logic.

If you were feeling charitable, you might think that this is the sort of challenge that IP had in mind. But he dropped off the bit of the quote from Wikipedia which specifically says that Gödel’s theorems are about mathematical logic, not all logic (or even all of classical logic). I find it hard to believe that he didn’t read the end of the sentence he quoted, so either he didn’t understand that the bit he left off is crucial to understand the theorems, or he is deliberately overstating their importance. Either way, it is not great.

Now, if you know a little bit about Gödel, then you might know that in addition to his incompleteness theorems, he is also well known for his completeness theorem. This showed that basic (classical) first-order logic is actually complete, meaning that it definitely doesn’t have any of those weird blind-spots that the extended mathematical logic has. Without the extra axioms added to first-order logic, every logically valid formula is provable in it.

And this is where we see why leaving off that bit from the Wikipedia quote was so telling. The way IP tells it, the significance of Gödel’s incompleteness theorems is that logic ‘cannot prove all truths’, which sounds like a very profound, almost mystical insight into what people can know and what they can’t. But, in reality, Gödel’s incompleteness theorems only show that some types of logic cannot prove all of their own truths. Admittedly, it is a very important class of logical systems, as it is the ones that model mathematical logic, but it is not as widespread as IP makes out. And Gödel’s completeness theorem actually proves that there are other types of logic for which this is not the case. There are also many other famous completeness theorems in logic (such as Kripke’s celebrated completeness theorem for the modal logic S5, which wouldn’t be possible if IP was right about what Gödel’s incompleteness theorems said!).

IP summarises what he thinks Gödel showed us as follows:

“All Gödel did was show that we are limited in having a total proof of something, but even without Gödel that is intuitively obvious. Many things will only be 99% probably true. But absolute certainty will always be beyond our reach”.

In reality, the significance of Gödel’s incompleteness theorems is not at all intuitive. Almost nobody expected mathematical logic to be limited in the way he showed it was. IP seems to think that Gödel just used maths to show that we can never really know anything for certain. This is demonstrably a bad interpretation of Gödel, and IP clearly has no idea what Gödel really showed us.

On the other hand, I agree that there is no particularly compelling reason to give up classical logic due to Gödel’s incompleteness theorems. I don’t find the idea of accepting contradictions just to get around incompleteness of arithmetic to be persuasive. It’s just a pity that IP wasn’t able to explain what Gödel said, how that was relevant to classical logic, and how it doesn’t mean we should reject classical logic. It’s more a case of a stopped clock accidentally showing the right time.

4. G Spencer-Brown

In the next main bit (around 3:10), IP brings up a different philosopher (or mathematician, depending on how you look at it), G Spencer-Brown, and the section he takes up is from Spencer-Brown’s book, Laws of Form. Now, this is a very strange book on logic, and not within the mainstream work on logic that philosophers usually debate. That is not to say that it is not of any value, but just to be aware that it is already a weird reference. The bit of that book that IP seems to have read is merely the preface, so it is quite easy to check for yourself (just pages ix – xii).

Anyway, IP is going back to the 3rd premise of the argument, which is the idea that the Liar Proposition is neither true nor false. He seems to be saying that Spencer-Brown advocates a solution to the problem which avoids having to postulate that the proposition is neither true nor false. This is presumably done in order to rescue the ‘laws of logic’ from the attack, and to defend classical logic.

So, the thing about the liar proposition, i.e. “This proposition is false”, is that if you assume it has a truth-value (true or false), then it sort of switches that truth-value on you. To see that, assume it is true. That would mean that what it says is the case. But what it says is that it is false. So if it is true, then it is false. The same thing happens if we assume it is false. So, we might say that any input value gets transformed into its opposite output value; true goes to false, false goes to true.
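The 'switching' behaviour amounts to the fact that no classical truth value is a fixed point of negation. A minimal sketch (mine, for illustration):

```python
# The liar proposition says of itself that it is false, so a stable classical
# truth value for it would have to equal its own negation.
def is_stable(v: bool) -> bool:
    return v == (not v)

# Neither classical value survives: True flips to False and vice versa,
# so the list of stable values comes out empty.
stable_values = [v for v in (True, False) if is_stable(v)]
```

This is exactly why some people conclude that the Liar Proposition must be neither true nor false: within the two classical values, there is nowhere stable for it to land.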

And this feature, or something similar to it, is also seen in the following mathematical example that Spencer-Brown brings up in the preface to Laws of Form. So consider the following equation:

X = -1/X

If you try to solve the equation by assuming that X = 1 (i.e. if we substitute 1 for X), then we get:

1 = -1/1

However, -1 divided by 1 equals -1 (because any number divided by 1 equals itself), so: -1/1 = -1. But that means that:

1 = -1/1 = -1

The ‘input’ of 1 gets turned into the ‘output’ of -1. If we try to solve the equation by assuming that X = -1, then we get the converse result (because any number divided by itself equals 1):

-1 = -1/-1 = 1

So the assumption of X = 1 results in an output of -1, and the assumption of X = -1 results in an output of 1. This is a bit like what is going on with the liar proposition if we think of 1 being like ‘true’, and -1 being like ‘false’. In both cases, the input value gets switched to the alternative value.

IP says that the ‘solution’ to this problem is to use an ‘imaginary number‘ i, which is √-1. What he means is that if we assume that X = i, then we get the following solution to the equation:

i = -1/i

Because i is the square root of -1, we have i × i = -1, and so -1/i = i. So:

i = -1/i = i

Unlike when we assumed X was 1 or -1, where the output got switched, if we assume the input is i, then the output doesn’t get switched. Ok, got it.
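The arithmetic can be checked directly with Python's built-in complex numbers (where `1j` is i); this is just a check of the numbers in the text, not anything IP or Spencer-Brown provide:

```python
# The map x -> -1/x flips 1 and -1 into each other, but leaves i fixed,
# so i is a genuine solution of X = -1/X.
def f(x):
    return -1 / x

flipped = (f(1), f(-1))  # 1 maps to -1, and -1 maps to 1: each input switches
fixed = f(1j)            # i maps back to i: the input comes through unchanged
```

The 'flipping' inputs behave like the liar proposition under classical truth values, while i behaves like a value that the paradoxical transformation leaves alone.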

The first thing to note here is that this sort of consideration is what motivated mathematicians to consider changing how they thought about mathematics. And not without some resistance. Descartes apparently used the term ‘imaginary’ as a derogatory term. Nevertheless, mathematicians were convinced that introducing imaginary numbers into their understanding of mathematics, despite being unintuitive to some extent, was warranted due to the utility that doing so brought about. What Spencer-Brown is pointing to is a reason for re-conceiving traditional mathematics.

How does this relate to the liar proposition? Unfortunately for IP, it doesn’t relate in the way he wants it to. He says almost nothing about how it is supposed to relate to the liar’s paradox, and what little he does say is not helpful. What he says is:

“The only problem is that we cannot epistemically understand the mathematical usage of i. And thus Gödel was proven right and not the absolute skeptic who doubts logic is true”.

Now, IP is obviously wandering off down the wrong path here. Clearly, IP finds imaginary numbers hard to think about, but it is not clear what that has to do with anything. His comment about Gödel betrays his poor grasp of his work as well. Because Spencer-Brown explained how to use i in an equation, that proves that Gödel was right? Hardly.

What is actually going on here, what IP seems unable to get, is that Spencer-Brown is not advocating for classical logic. In fact, he is quite out-there as a thinker, and proposing something quite radical. Let’s look at what Spencer-Brown says about the mathematical example that IP brought up, and how it relates to the liar paradox:

“Of course, as everybody knows, the [mathematical] paradox in this case is resolved by introducing a fourth class of number, called imaginary, so that we can say the roots of the equation above are ±i, where i is a new kind of entity that consists of a square root of minus one.” (Spencer-Brown, Laws of Form, page xi, emphasis added by me)

Spencer-Brown is saying that the solution to the mathematical puzzle requires the addition of a “new kind of entity” to mathematics. A new kind of number. He then goes on in the next paragraph to explain how this mathematical lesson applies to logic:

“What we do in Chapter 11 is extend the concept to Boolean algebras, which means that a valid argument may contain not just three classes of statement, but four: true, false, meaningless and imaginary.” (ibid)

So Spencer-Brown is playing around with a type of logic which has four truth-values, not two like classical logic has. This makes it a very exotic type of non-classical logic! IP doesn’t mention this passage, which clearly shows Spencer-Brown freely speculating on a type of logic which is very different from classical logic.
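To see what a four-valued system even looks like, here is a toy negation table (my own loose sketch, emphatically not Spencer-Brown's actual Chapter 11 construction). By analogy with i = -1/i, the two extra values are treated as fixed points of negation:

```python
# Toy four-valued logic: 'true' and 'false', plus 'meaningless' and 'imaginary'.
# Negation still swaps true and false, but the two new values are their own
# negations, mirroring the way i is a fixed point of x -> -1/x.
T, F, M, I = "true", "false", "meaningless", "imaginary"

NEG = {T: F, F: T, M: M, I: I}

# The liar demands a value equal to its own negation. In two-valued logic
# there is no such value; in this toy system there are two candidates.
liar_candidates = [v for v in (T, F, M, I) if NEG[v] == v]
```

Whatever the merits of such a system, the point stands that it is a departure from classical logic, not a defence of it.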

So, what we have here is an example of someone saying that the right way to solve the liar’s paradox is to modify classical logic in some fundamental way. IP seems to think that this example makes the point he wants to make, but if anything it points in the opposite direction completely. Far from showing that the laws of classical logic cannot be questioned, it is an example of someone questioning the laws of classical logic.

5. Conclusion

So far we have seen that IP has no real idea what the skeptical challenge to logic really consists in. He knows that sometimes people talk about reasons to doubt things like non-contradiction or the law of excluded middle, and he seems to take this to be a very radical attack on logic itself. However, we saw that he presented an argument that attempted to attack the claim that the laws of logic are true, and he hopelessly misunderstood it. It was showing that if the Liar Proposition is neither true nor false, then classical logic isn’t correct. In response, he proposed that “Easter is the best holiday” was neither true nor false, which is itself very poorly argued for, but even if it were correct would be another reason to reject classical logic. He then utterly failed to grasp Gödel, and may have deliberately misstated the theorems’ significance. Lastly, he looked at a passage from Spencer-Brown, but failed to see that if it was correct, it would be a reason to prefer a four-valued logic over the classical two-valued logic.

There is still another half of his video to go, and I will try to get round to debunking the claims made in that half as well when I get a chance.

The Fine-Tuning Argument and the Base Rate Fallacy.

0. Introduction

The Fine-Tuning Argument is used by many apologists, such as William Lane Craig. It is a common part of the contemporary apologetical repertoire. However, I argue that it provides no reason to think that the universe was designed. One does not need to look in much detail at the actual physics, and almost the whole set-up can be conceded to the apologist. The objection is a version of the base-rate fallacy. From relatively simple considerations, it is clear that relevant variables are being left out of the equation, with the result that the overall probability is impossible to assess.

The Fine Tuning Argument starts with an observation about the values of various parameters in physics, such as the speed of light, the Planck constant and the mass of the electron, etc. The idea is that they are all delicately balanced, such that if one were to be changed by even a very small amount, this would radically alter the properties of the universe. Here is how Craig explains the point, in relation to the gravitational constant:

“If the gravitational constant had been out of tune by just one of these infinitesimally small increments, the universe would either have expanded and thinned out so rapidly that no stars could form and life couldn’t exist, or it would have collapsed back on itself with the same result: no stars, no planets, no life.” (Quote taken from here)

This phenomenon of ‘fine-tuning’ requires explanation, and Craig thinks that there are three possible types of explanation: necessity, chance or design.

Craig rules out necessity by saying:

“Is a life-prohibiting universe impossible? Far from it! It’s not only possible; it’s far more likely than a life-permitting universe. The constants and quantities are not determined by the laws of nature. There’s no reason or evidence to suggest that fine-tuning is necessary.” (ibid)

Chance is ruled out by the following:

“The probabilities involved are so ridiculously remote as to put the fine-tuning well beyond the reach of chance.” (ibid)

The only option that seems to be left on the table is design.

So the structure of the argument is as follows (where f = ‘There is fine-tuning’, n = ‘Fine-tuning is explained by necessity’, c = ‘Fine-tuning is explained by chance’, and d = ‘Fine tuning is explained by design’):

  1. f
  2. f → (n ∨ c ∨ d)
  3. ~n
  4. ~c
  5. Therefore, d.

1. Tuning

It seems from what we currently know about physics that there are about 20 parameters which are finely tuned in our universe (if the number is not exactly 20, this doesn’t matter – for what follows I will assume that it is 20). For the sake of clarity, let’s just consider one of these, and assume that its possible values form a range similar to a section of the real number line. This would make it somewhat like radio-wave frequencies. Then the ‘fine-tuning’ result that Craig is referring to has a nice analogy: our universe is a ‘radio station’ which broadcasts on only an extremely narrow range. This range is so narrow that if the dial were moved only a tiny amount, the coherent music being broadcast would become nothing but white noise. That our universe is finely balanced like this is the result we have gained from physics.

It is important to realise that this fine-tuning is logically compatible with there being other radio stations which one could ‘tune into’. Imagine I tune my radio into a frequency which is broadcasting some music, and that it is finely-tuned, so that if I were to nudge the dial even a tiny amount it would become white noise; from that it does not follow that there aren’t other radio stations I could tune into.

It is plausible (although I don’t know enough physics to know) that if one varied only one of the 20 or so parameters, such as gravity, to any extent (not just a small amount), but kept all the others fixed, then the result would be nothing other than white noise. Maybe, if you hold all 19 other values fixed, every other possible value for gravity results in noise. However, it doesn’t follow from this fact (if it is a fact at all) that there is no combination of all the values which results in a coherent structure. It might be that changing both gravity and the speed of light, and keeping all the others fixed, somehow results in a different, but equally coherent, universe.

In mathematics, a Lissajous figure is a graph of a system of parametric equations. These can be displayed on oscilloscopes, and lead to various rather beautiful patterns. Without going into any of the details (which are irrelevant), the point is that by varying the ratio of the two values (X and Y), one produces different patterns. Some combinations of values produce ordered geometrical structures, like lines or circles, while others produce what looks like a messy scribble. There are ‘pockets’ of order, which are divided by boundaries of ‘chaos’. This could be what the various combinations of values for the 20 physical parameters are like.
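The contrast between ordered and disordered parameter combinations can be sketched numerically (my own illustration; the mathematical details are, as said above, irrelevant to the point):

```python
import math

# A Lissajous curve: x = sin(a*t), y = sin(b*t + delta). When the frequency
# ratio a:b is a ratio of small integers, the curve closes into an ordered
# geometrical figure; other ratios give messy, 'scribbled' patterns instead.
def lissajous(t, a=3, b=2, delta=math.pi / 2):
    return (math.sin(a * t), math.sin(b * t + delta))

# With integer frequencies the whole pattern repeats every 2*pi, which is
# what makes the figure a closed, ordered shape:
start = lissajous(0.0)
after_period = lissajous(2 * math.pi)
```

The analogy is only this: as you vary the ratio, ordered figures and chaotic scribbles alternate, so order at one setting tells you nothing about whether order exists elsewhere in the parameter space.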

Fine-tuning says that immediately on either side of the precise values that these parameters have in our universe, there is ‘white noise’. But it does not say that there are no other combinations of values that give rise to pockets of order just as complex as ours. It doesn’t say anything about that.

2. The problem of fine-tuning 

It might be replied that there could be a method for determining whether there are other pockets of order out there or if it is just white noise everywhere apart from these values, i.e. whether there are other radio stations than the one we are listening to or not. And maybe there is such a method in principle. However, it seems very unlikely that we have anything approaching it at the moment. And here the fineness of the fine-tuning turns back against the advocate of the fine-tuning argument. Here’s why it seems unlikely we will be able to establish this any time soon.

We are given numbers for how unlikely our set of values would be, if arrived at by chance, that are almost impossible to imagine. Craig suggests that if the gravitational constant were altered by one part in 10 to the 60th power (that’s a 1 followed by 60 zeros), then the universe as we know it would not exist. That’s a very big number. If each of the 20 parameters were this finely tuned, then each one would multiply this number by a similar factor again. The mind recoils at how unlikely that is. This is part of the point of the argument, and why it seems like fine-tuning requires an explanation.

However, this is also a measure of how difficult it would be to find an alternative pocket of order in the sea of white noise. Imagine turning the dial of your radio trying to find a finely-tuned radio station, where if you turned the dial one part in 10 to the 60th power too far you would miss it. The chances are that you would roll right past it without realising it was there. This is Craig’s whole point: it would be very easy to scan through the frequencies and miss it. But if you wanted to make the case that there could be no other coherent combination of values for the parameters, you would have to be sure you had not accidentally scrolled past one of these pockets of coherence when you did whatever you did to rule them out. The scale of how fine the fine-tuning is makes the prospect of ruling out other pockets of coherence in the sea of noise almost impossible. It would be like trying to find a needle in 10 to the 60th power haystacks. Maybe there is a method for doing that, but it seems like an incredibly hard thing to do. The bigger the numbers the apologist uses for the magnitude of the fine-tuning, the harder it becomes to rule out there being other possible coherent combinations of values out there somewhere.
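A back-of-envelope calculation (my numbers, purely illustrative) makes the point concrete: even an absurdly dense scan of the parameter range would almost certainly step straight over a pocket that narrow.

```python
# Suppose a hidden pocket of coherence occupies a window 1 part in 10**60
# of the parameter range, and suppose we sample the range very densely.
window_fraction = 10.0 ** -60  # width of the pocket, as a fraction of the range
samples = 10 ** 12             # a generous trillion-point scan

# Expected number of sample points that land inside the pocket:
expected_hits = samples * window_fraction
# This comes out around 10**-48: the scan is effectively guaranteed
# to miss the pocket entirely.
```

So the very magnitudes the apologist cites to rule out chance are also magnitudes that make any search for (and hence any ruling out of) other pockets hopeless.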

Thus, it seems like the prospects of discovering a fine-tuned pocket of coherence in the sea of white noise are extremely slim. But this just means that it seems almost impossible to be able to rule out the possibility that there is such an additional pocket of coherence hidden away somewhere.

Think about it from the other side. If things had gone differently, and the values of the parameters had been set differently, then there might be some weird type of alien trying to figure out if there were other pockets of coherence in the range of possible values for the parameters, and they would be extremely unlikely to find ours, precisely because ours (as Craig is so keen to express) is so delicately balanced. Thus the fine-tuning comes back to haunt the apologist here.

We have a pretty good understanding of what the values for the parameters are for our universe, although this is obviously the sort of thing that could (and probably will) change as our understanding deepens. But I do not think that we have a good understanding of what sort of universe would result throughout all the possible variations of values to the parameters. It is one thing to be able to say that immediately on either side of the values that our universe has there is white noise, and quite another to be able to say that there is no other pocket of coherence in the white noise anywhere.

The fine-tuning result is like a situation in which you vote for party X, and your immediate neighbours on either side vote for party Y. You might be the only person in the whole country who votes for party X, but it doesn’t follow that this is the case just because you know that your neighbours didn’t.

If the above string of reasoning is correct, then for all the fine tuning result shows, there may be pockets of coherence all over the range of possible values for the parameters. There are loads of possible coherent Lissajous figures between the ‘scribbles’, and this might be how coherent universes are distributed against the white noise. There could be trillions of different combinations of values for the parameters which result in a sort of coherent universe, for all we know. And the magnitude of the numbers which the apologist wants to use to stress how unlikely it is that this very combination would come about by chance, is also a measure of how difficult it would be to find one if it were there.

3. The meaning of ‘life’

It seems that if the above reasoning is right, then other pockets of coherence are at least epistemically possible (i.e. possible for all we know). Let’s assume, just for simplicity, that there are at least some such alternative ways the parameters could be set which result in universes as stable and coherent as ours. Let’s also suppose that these are all as finely tuned as our universe is. For all we know, this is actually the case. But if it is the case, then it suggests a distinction between a universe that is finely tuned, and one that is finely tuned for life. We might think that those other possible universes would be finely tuned, but not finely tuned for life, because we could not exist in those universes. We are made of matter, which could not exist in those circumstances. It might be that something else which is somehow a bit like matter exists in those universes, but it would not be matter as we know it. Those places are entirely inhospitable to us.


But this doesn’t mean that they are not finely-tuned for life. It just means that they are not finely-tuned for us. The question we should really be addressing is whether anything living could exist in those universes.

Whether this is possible, of course, depends on precisely what we mean by ‘life’. This is obviously a contentious issue, but it seems to me that there are two very broad ways we could approach the issue, which are relevant for this discussion. Let’s call one ‘wide’ and one ‘narrow’.

Here is an example of a wide definition of ‘life’. For the sake of argument, let’s say that living things all have the following properties:

  • The capacity for growth
  • The capacity for reproduction
  • Some sort of functional interaction with their environment, possibly intentional

No doubt, there will be debate over the conditions that could be added to, or removed from, this very partial and over-simplified list, and the details do not matter here. However, just note one thing about this list: none of these properties require the parameters listed in the usual presentations of the fine-tuning argument to take any particular value. So long as an entity can grow, reproduce and interact with its environment, then it is living, regardless of whether it is made of atoms or some alien substance, such as schmatoms. Thus, on such a ‘wide’ definition of ‘life’, there is no a priori reason why ‘life’ could not exist in other universes, even if we couldn’t.

On the other hand, we might define ‘life’ in terms of something which is native to our universe, such as carbon molecules, or DNA. If, for example, the gravitational constant were even slightly different to how it is, then DNA could not exist. Thus, if life has to be made of DNA, then life could not exist in any pocket of coherence in the sea of white noise apart from ours.

So there are two ways of answering the question of whether an alternative set of values to the parameters which resulted in a coherent universe could support life – a wide and a narrow way. On the wide view the answer seems to be ‘yes’, and on the narrow view the answer is definitely ‘no’.

It seems to me that there is very little significance to the narrow answer. On that view, the universe is fine-tuned for life, but only because ‘life’ is defined in terms of something which is itself tied to the physical fine-tuning of the universe. The meaning of ‘life’ piggy-backs on the fine-tuning of the physical variables. And this makes it kind of uninteresting. The same reasoning means that the universe is fine-tuned for gold as well as life, because the meaning of ‘gold’ is also tied to specific things which exist only because of the values of the variables, i.e. atoms and nuclei, etc. Thus, if we want to say ‘fine-tuned for life’ and have that mean something other than just ‘fine-tuned’, then we should opt for the wide view, not the narrow one.

But then if we go for the wide view, we are faced with another completely unknown variable. Just as we have no idea how many other potential pockets of coherence there may be in the sea of white noise, we also have no idea how many of them could give rise to something which answers to a very wide definition of ‘life’. It might be that there are trillions of hidden pockets of coherence, and that they are all capable of giving rise to life. We just have no information about that whatsoever.


5. Back to the argument

What the preceding considerations show is that the usual arguments taken to rule out the ‘chance’ explanation are missing something very important to the equation. I completely concede that our universe is extremely finely-tuned, to the extent that Craig explains. This means that if the values of the parameters were changed even a tiny amount, then we could not exist. However, because we don’t have any idea whether other combinations of values to those parameters would result in coherent universes, which may contain ‘life’, we have no way of saying that the chances of a universe happening with life in it are small if the values of these parameters were determined randomly. It might be that in 50% of the combinations there is sufficient coherence for life to be possible. It might be 90% for all we know. Even if it were only 1%, that is not very unlikely. Things way less likely happen all the time. But the real point is that without knowing these extra details, the actual probability is simply impossible to assess. Merely considering how delicately balanced our universe is does not give us the full picture. Without the extra distributions (such as how many possible arrangements give rise to coherent universes, and how many of those give rise to life) we are completely in the dark about the overall picture.

This makes the argument an instance of the base-rate fallacy. The example on Wikipedia is the following:

“A group of police officers have breathalyzers displaying false drunkenness in 5% of the cases in which the driver is sober. However, the breathalyzers never fail to detect a truly drunk person. One in a thousand drivers is driving drunk. Suppose the police officers then stop a driver at random, and force the driver to take a breathalyzer test. It indicates that the driver is drunk. We assume you don’t know anything else about him or her. How high is the probability he or she really is drunk?”

Because the ‘base-rate’ of drunk drivers is far lower than the margin for error in the test, this means that if you are tested and found to be drunk, it is a lot more likely that you are in the group of ‘false-positives’ than not. There is only one drunk person in every 1000 tested, and (because of the 5% margin for error) there are 49.95 false positives on average. So the chances that you are a false positive are far greater than the chances that you are the one actually drunk person. It’s actually 1 in 50.95, which is roughly a probability of 0.02. Thus, without the information of the base-rate, we could be fooled into thinking that there was a 0.95 chance that we had been tested correctly, whereas it is actually 0.02.
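
The arithmetic here can be checked with a short calculation via Bayes’ theorem (the function name is my own illustrative choice; the numbers are the ones from the quoted example):

```python
# Bayes' theorem applied to the breathalyzer example:
#   P(drunk | positive) = P(positive | drunk) * P(drunk) / P(positive)
def p_drunk_given_positive(base_rate, false_positive_rate, true_positive_rate=1.0):
    """Posterior probability that a driver who tests positive is actually drunk."""
    p_positive = (base_rate * true_positive_rate
                  + (1 - base_rate) * false_positive_rate)
    return base_rate * true_positive_rate / p_positive

# The quoted numbers: a 1-in-1000 base rate and a 5% false-positive rate.
print(round(p_drunk_given_positive(1 / 1000, 0.05), 4))  # 0.0196, i.e. about 1 in 51

# Varying only the base rate swings the answer dramatically:
for base_rate in (1 / 1000, 0.01, 0.5):
    print(base_rate, round(p_drunk_given_positive(base_rate, 0.05), 3))
```

The loop makes the wider point: with exactly the same test accuracy, the posterior probability swings from about 0.02 to about 0.95 depending purely on the base rate.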

With the fine-tuning argument we have a somewhat similar situation. We know that our universe is very delicately balanced, and we know that we could not exist if things were even slightly different. But because we effectively lack the base-rate of how many other possible combinations of values give rise to different types of life, we have no idea how unlikely it is that some such situation suitable for life could have arisen, as it were, by chance. As the above example shows, this rate can massively swing the end result.

6. Conclusion

The fine-tuning of the universe is a fact. This does not show that the universe is fine-tuned for life though. It also does not show that the universe must have been designed. It is impossible to know what the chances are that this universe happened ‘by chance’, because we do not have any idea about the relevant base-rate of coherent and (widely defined) life-supporting universes there could be. Thus, we have no idea if we can rule out the chance hypothesis, because we have no idea what the chances are without the information about the base rate.

Logic and God’s Character

0. Introduction

Vern Poytress is professor of New Testament interpretation at Westminster Theological Seminary. He has a handy website, which he runs with John Frame, on which he has made a lot of his published work available for free. In particular, there is a copy of his book Logic: A God Centred Approach to the Foundation of Western Thought. In this post, I want to focus on a particular small section of the book, which is Chapter 7 (p. 62 – 68). The chapter is entitled ‘Logic Revealing God’, and in it Poytress addresses the question of whether logic is dependent on God, or if God is dependent on logic. As he says, “We seem to be on the horns of a dilemma” (p. 63).

I will go through the chapter quite closely, and it might be worth reading as it is not long (although I will provide plenty of quotes from the original). It is an instructive chapter because it highlights many of the key themes and ideas that we see presuppositionalists using in their positive arguments. It is also written by a professor at a theological seminary with a very impressive resume, including a PhD in mathematics from Harvard, and a ThD in New Testament Studies from Stellenbosch, South Africa. Therefore, the presentation of the argument should be pretty strong. And I do think that the book is quite readable, and is packed full of great learning material for anyone wanting to study logic.

However, I think that the sections of the book which deal with the theological and metaphysical underpinnings of his view of logic, such as the one I will explore here, leave a lot to be desired. Hopefully, what I will say will be clear, and my criticisms will be justified.

1. The Dilemma

The dilemma that Poytress refers to is not spelled out explicitly, but it seems easily recoverable from what he does say. The opening line in the chapter is: “Is logic independent of God?” To start us off, it is quite natural to see logic as independent from the existence of human beings, as Poytress explains:

“Logic is independent of any particular human being and of humanity as a whole. If all human beings were to die, and Felix the cat were to survive, it would still be the case that Felix is a carnivore. The logic leading to this conclusion would still be valid … This hypothetical situation shows that logic is independent of humanity.” (p. 63)

The example that Poytress gives is slightly confusing, as the truth of the statement “Felix is a carnivore” does not seem to be merely a matter of logic, at least not a paradigmatic one. However, it is clear that the idea of independence that is in play involves the following sort of relation:

Independence) X is independent of Y iff X would still exist even if Y did not exist.

The logical relation he highlights (involving the cat) would hold even if people did not exist, and is thus independent from the existence of people. It follows that X is dependent on Y if and only if the independence condition above fails.

The cat example seems to be mixing up a few different things at the same time. The classification of Felix as a carnivore does not depend on the existence of humans, in that whether people exist or not will not change whether a cat eats meat or not. Yet this fact does not seem to be a purely logical fact, and so the independence that it establishes is not really of logic from the existence of human beings.

It seems to me that an example which makes the point he expresses with “logic is independent of any particular human being and of humanity as a whole” would be the following inference:

  1. All men are mortal
  2. Socrates is a man
  3. Therefore, Socrates is mortal.

The conclusion follows from the premises, and it does so regardless of whether Socrates exists or not. As it happens, Socrates does not exist (any longer), but this does not make the inference any less valid than when he did exist. Even if Socrates turns out to have been entirely a fictional character who never existed at all, the inference is still valid.

And indeed, the conclusion follows from the premises, regardless of whether anyone exists or not; even if everyone were to die in a nuclear war tomorrow, the above inference would remain valid. Even if there had never been any people at all, the inference would remain valid. At least, that is the thought.
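
The schematic character of validity can be illustrated in a proof assistant. The following Lean 4 sketch (the names `Man`, `Mortal` and `s` are my own illustrative choices) states the inference for an arbitrary domain and arbitrary predicates; the proof goes through without naming Socrates or committing to the existence of any actual human being:

```lean
-- The Socrates inference, stated schematically: α is any domain,
-- Man and Mortal are any predicates over it, and s is any individual.
example {α : Type} (Man Mortal : α → Prop) (s : α)
    (h1 : ∀ x, Man x → Mortal x)  -- All men are mortal
    (h2 : Man s) :                -- s is a man
    Mortal s :=                   -- therefore, s is mortal
  h1 s h2
```

Nothing in the proof consults the world to check whether anything exists; only the form of the premises and the conclusion matters.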

Part of the reason for this thought is that we do not need to refer to the existence of any particular thing when coming to determine whether an inference is valid. We consult what it is that actually determines the validity of the inference, and in doing so we do not have to check to see if any particular thing exists. And what it is that the validity of the inference depends on is something like one of the following candidate considerations:

  • An inference is valid if and only if it possesses the correct logical form.
  • An inference is valid if and only if it is truth-preserving.

Exactly how we cash this out is contentious of course, but I take it that something along these lines is going to be correct. In Aristotelian logic, for example, the forms Barbara and Celarent are simply given as valid (they are among the so-called ‘perfect forms’), and so is any form which is transformable into one of the perfect forms via the conversion rules. Different logical systems have different conceptions of what the ‘correct logical form’ is, but one thing that seems obvious is that the existence or not of any particular person, or of humanity in general, is irrelevant to the question of whether a given inference is valid or not. It is a different type of consideration that is relevant.

But if this (or something like this) is what the validity of the inference depends on, then whether it is valid or not isn’t just independent from the existence of human beings, but is independent from the existence of any existing thing – including God.

Here is how Poytress explains this idea:

“Through the ages, philosophers are the ones who have done most of the reflection on logic. And philosophers have mostly thought that logic is just “there.” According to their thinking, it is an impersonal something. Their thinking then says that, if a personal God exists, or if multiple gods exist, as the Greek and Roman polytheists believed, these personal beings are subject to the laws of logic, as is everything else in the world. Logic is a kind of cold, Spockian ideal.” (p. 62)

As I have explained, it is not just that philosophers have postulated logic as being just there, without any motivation. There are reasons, like the independence considerations I outlined, for thinking that any given inference is valid or invalid independently from the existence of any particular thing. It follows from these considerations that logic is not itself dependent on any particular thing, and is just “there” (as Poytress puts it).

2. Conflict

As a Christian, such a conclusion brings Poytress into conflict with his core theological doctrines. As he explains:

“This view has the effect of making logic an absolute above God, to which God himself is subjected. This view in fact is radically antagonistic to the biblical idea that God is absolute and that everything else is radically subject to him: ‘The Lord has established his throne in the heavens, and his kingdom rules over all’ (Ps. 103:19).” (p. 62)

Thus, logic seems like it is independent of God, because it seems independent of the existence of anything, yet the doctrine of God being absolute (in Poytress’ sense) requires that everything is dependent on God. I take it that this is the dilemma that he faces:

  • On the one hand, logic is independent from the existence of God (as it seems independent from the existence of any entity whatsoever) but that compromises God’s absoluteness (God seems to be subordinate in some sense to logic).
  • On the other hand, logic is dependent on God, which restores the absoluteness of God, but then we are owed some kind of story about how it is that the validity of an argument depends on the existence of God.

This dilemma can be put as follows:

Is God dependent on logic, or is logic dependent on God?

Poytress takes the second horn, and part of his endeavour in the chapter is to bring out how it is that we see God in logic, how logic ‘reveals God’, as a way of bolstering the claim that logic depends on God.

As a first pass, he says:

“The Bible provides resources for moving beyond this apparent dilemma.” (p. 63)

He provides three examples, which are:

  1. “God is dependable and faithful in his character”
  2. “the Bible teaches the distinction between Creator and creature”
  3. “we as human beings are made in the image of God”

Let’s go through each of these and see what he has to say about each of them.

3. “God is dependable and faithful in his character”

With regards to 1, Poytress points to Exodus 34:6, which mentions that God is faithful, and he then explains:

“The constancy of God’s character provides an absolute basis for us to trust in his faithfulness to us. And this faithfulness includes logical consistency rather than illogicality. God “cannot deny himself” (2 Tim. 2:13). He always acts in accordance with who he is.” (p. 63)

It is not clear to me how this engages with our question, which was whether logic depends on God or God depends on logic. Poytress is identifying the faithfulness, logical consistency and inability to deny himself as three special properties that God has, but to me the possession of these properties is irrelevant to the question at hand. I will try to explain my worry with a thought experiment:

Imagine I were to build a robot. And let’s say that I build the robot in such a way that it could not knowingly lie. This would mean that I program it in such a way that it cannot provide any output which contradicts any of the stored data it has in its memory banks (or something like that). If so, then my robot would be analogous in some sense to this description of God. It is, in effect, programmed to be honest. Given that a robot cannot do anything which it is not programmed to do, I would be able to trust in its ‘faithfulness’, in that I could know for sure that any output it generates is consistent with its data banks. Arguably, a robot like this is also logically consistent by definition (assuming the programming is consistent), and because it cannot lie, it cannot deny itself in the relevant sense either. Thus, my robot is perfectly faithful, logically consistent and cannot deny itself. Yet this would not establish that the validity of any given inference was dependent on the existence of the robot. And if not, then it is not clear why these properties being possessed by God would be relevant to establishing anything like the horn of the dilemma that Poytress is going for either.

Perhaps you have some niggling objection here. The robot case isn’t really analogous to God, you might be saying. And that is quite true. For instance, no matter how advanced, my robot wouldn’t be all-knowing. And no matter how reliable its programming is, its programming might become corrupted. Either of these indicates the possibility of some kind of error. Because of the possibility of error like this, I shouldn’t trust what it tells me with 100% certainty, and this makes the two cases unalike.

However, seeing as this is just a thought experiment, imagine that (somehow) I were to make a robot which did know everything, and couldn’t have its programming corrupted. Would this mean that logic now became dependent on the existence of the robot? Would the validity of an inference now depend on the existence of this robot? I see no reason for thinking that making these imaginary improvements to my robot could possibly have this effect.

As far as I can understand, an entity’s reliability, faithfulness, or inability to self-deny, etc., can never be relevant for making its existence something upon which the validity of an inference depends. If Poytress has some reason for thinking that the possession of these properties by God makes him the thing on whose existence the validity of an argument depends, he spends no time explaining it here.

There are a few options at this point.

  1. By possessing these qualities, my robot becomes a thing that the validity of an inference is dependent on.
  2. The possession of these properties by my robot does not qualify it for being the thing that validity depends on, but they are what qualifies God for this role.
  3. The possession of these properties is not what qualifies anything for this role.

The first option seems prima facie implausible, and at the very least we have been given no reason to think that it is true. The second one leaves unanswered why it is that these qualities make God suitable for the role and not the robot, and implies that there are actually additional criteria for playing the role in question which make the difference (i.e. there must be something about God other than the possession of these qualities which distinguishes him from the robot). The third option is that these qualities are not relevant. Unless there is an additional option I cannot see, it seems like Poytress has to go with option 2, and owes us an explanation of the additional criteria.

4. “The Bible teaches the distinction between Creator and creature”

So much for the first point. Let’s move on to the second one, which is about the creator/creature distinction. Poytress says the following:

“God alone is Creator and Sovereign and Absolute. We are not. Everything God created is distinct from him. It is all subject to him. Therefore, logic is not a second absolute, over God or beside him. There is only one Absolute, God himself. Logic is in fact an aspect of his character, because it expresses the consistency of God and the faithfulness of God. Consistency and faithfulness belong to the character of God. We can say that they are attributes of God. God is who he is (Ex. 3:14), and what he is includes his consistency and faithfulness. There is nothing more ultimate than God. So God is the source for logic. The character of God includes his logicality.” (p. 63)

This quote can be split into two sections. The first consists of the first six sentences (ending with “There is only one Absolute, God himself”). The first section really just affirms the doctrine of God being absolute. God alone is absolute; we are not absolute; being absolute, everything is dependent on God, including logic. This much is no help in resolving the apparent dilemma we were facing earlier. It is just restating one of the two things we are trying to reconcile, i.e. the absoluteness of God. The question is how to fit this idea, of God being absolute, with the intuitive idea that the validity of an inference seems to have nothing to do with the existence of any particular thing. Simply repeating that God is absolute (in contrast to humans) does not shed any light on this issue.

The second part of the quote wanders back into the issue brought up in the previous point, by talking about the faithful character of God, and thus still seems irrelevant. Even if “[c]onsistency and faithfulness belong to the character of God”, how is the validity of an inference dependent on his existence? We are none the wiser.

Poytress does say that God’s ‘logicality’ is included in his character. And it might be thought that this is relevant somehow. After all, we are talking about logic, and ‘logicality’ is the property of being logical. Surely that is the link.

Well, I think it would be a mistake to think that. In some sense, my robot was already logical. Its ‘brain’ is just a computer, which processes inputs and produces outputs according to some set of rules (its programming). This is a logical process; computer programming is just applied logic. It seems we are in precisely the same position we were in before. We are still left with no reason to think that if this thing did not exist, an otherwise valid inference would be invalid. Why does being logical mean that logic depends on you? The answer, it seems, is that it doesn’t.

5. “We as human beings are made in the image of God”

On to point three. Here, Poytress is pointing to the fact that we are made in God’s image:

“God has plans and purposes (Isa. 46:10–11). So do we, on our human level (James 4:13; Prov. 16:1). God has thoughts infinitely above ours (Isa. 55:8–9), but we may also have access to his thoughts when he reveals them: “How precious to me are your thoughts, O God!” (Ps. 139:17). We are privileged to think God’s thoughts after him. Our experience of thinking, reasoning, and forming arguments imitates God and reflects the mind of God. Our logic reflects God’s logic. Logic, then, is an aspect of God’s mind. Logic is universal among all human beings in all cultures, because there is only one God, and we are all made in the image of God.” (p. 64)

The idea seems to be as follows. God makes plans, and so do we, although we only make plans on a ‘human level’. God has thoughts, and so do we, although his thoughts are ‘infinitely above ours’. So in this way, we are similar to God, without being the same as God. We are creatures, whereas he is the creator, and our likeness is only imperfect (or ‘analogical’).

The relevant section is when he explains that “our logic reflects God’s logic”, which is because it is us ‘thinking God’s thoughts after him’, in a process which “reflects the mind of God”. Just like with the planning and thinking examples, our grasp of logic is only analogical, which means that we have an imperfect, creaturely understanding in comparison with God’s perfect understanding. Nevertheless, we imitate God’s thought processes.

The problem with this view is that it invites a Euthyphro-style dilemma immediately. God thinks in a particular way (a logical way) and we are to think in the same sort of way (to imitate and reflect his way of thinking). But why does God think in this particular way? More precisely, does God think in this way because it is a logical way of thinking, or is it a logical way of thinking merely in virtue of being the way that God thinks? This is just another way of asking the same question we started with, namely: is God dependent on logic or is logic dependent on God? All we have done here is to rephrase it in terms of God’s thinking; is logic dependent on God’s thinking, or is God’s thinking dependent on logic? And there is no reason to think that rephrasing it in this manner will itself constitute any sort of solution to the initial problem.

What Poytress is actually giving us is a reason for why we (should) think in a logical way. We should think in a logical way because that’s the way that God thinks. And, whatever the merits of this point are, this plainly isn’t relevant to the initial question about the relation between logic and God. The best that can be said about this idea is that it is an answer to a different question altogether.

6. Sidebar – Logical Euthyphro

But it is also rather hopeless as a solution, when we try to run the argument to its logical conclusion. Remember, the first horn was that God thinks in this particular way because it is (independently from him thinking it) a logical way of thinking. Presumably, Poytress would find this just as “radically antagonistic to the biblical idea that God is absolute” as the initial claim that God depends on logic. It really just is the same claim. It just says that logic is independent of God. So, he has to opt for the second horn, which is that this way of thinking is logical merely in virtue of being the way that God thinks.

However, there is a problem with this; it makes God’s decision to think in this way (rather than some other way) inexplicable. To sharpen up the discussion, let’s use some examples. We know that there are lots of different logical systems, including classical logic, extensions of classical logic and non-classical logics, etc. Just to take two examples, there is classical logic and intuitionistic logic. They have different fundamental principles, e.g. intuitionistic logic doesn’t have excluded middle as a general law and classical logic does. God thinks in one of these ways and not the other (presumably). Let’s say he thinks classically, and not intuitionistically. If we were to ask why he thinks in this classical way, as opposed to the intuitionistic way, the one thing we cannot say as an answer is that thinking classically is (independently of God thinking like that) the logical way to think. If we tried to say this, then we would in fact be asserting the first horn of the dilemma, which is “radically antagonistic to the biblical idea that God is absolute”.
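
The contrast between the two systems can be made concrete in Lean 4, whose core logic is intuitionistic: excluded middle is not provable constructively, and becomes available only when the classical axioms are invoked (this is a sketch of the logical distinction itself, not of anything Poytress says):

```lean
-- `p ∨ ¬p` is not a theorem of Lean's constructive core logic;
-- it is supplied by the classical axioms via `Classical.em`.
example (p : Prop) : p ∨ ¬p := Classical.em p
```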

But what else could possibly be the answer to this question? God thinks classically rather than intuitionistically because … ? It might be that God has a preference for classical logic rather than intuitionistic logic, but this preference itself cannot be based on the idea that classical logic is (independently of God thinking like that) the logical way to think, or we are right back to the initial horn again. So even if he has a preference for classical logic, it can only be based on some other type of consideration, and not that it is itself the logical way to think. But there is nothing else which could be relevant. He may prefer it because he finds it simpler than intuitionistic logic, or because he likes the sound of the word ‘classical’, or because he flipped a coin and it landed heads-up rather than tails-up. But whatever the reason, it can only be something which is irrelevant. His reason can only be arbitrary (which just means that it is a decision made without a relevant reason). The one thing which could be relevant is ruled out as being the first horn of the dilemma. And that is what is so pressing about this sort of Euthyphro dilemma.

So let’s say we take this horn. It means that if God thinks classically (rather than intuitionistically), and if we were to imitate the way that God thinks (as Poythress urges), then this might seem to provide some kind of explanation for why we think classically rather than intuitionistically. However, because there is no (non-arbitrary) reason why God thinks classically rather than intuitionistically, there is correspondingly no real reason why we do either.

Imagine you find me performing a series of actions, walking to and fro in my house, picking things up and putting them down again seemingly at random. If you ask me why I’m doing this, I might say that I have a reason for doing so. Maybe I say to you that these actions performed together will culminate in an effect which I desire. So, maybe I am building something, but I am in the early stages of doing so, just setting out my tools and clearing a space. To you it looks like a random set of actions, but it has a purpose. I have reasons for doing each of the things that I am doing. Maybe once I have explained my purpose, then the series of actions stops looking so random to you.

Now imagine that you come across me performing a series of actions which again seem random to you. You ask me why I am doing these things, and this time I point to the TV, where you see a figure who is performing the very same sorts of actions. I say that I am acting out this person’s actions after him, and reflecting his actions. ‘Well, why is he doing these particular actions?’, you ask. ‘Oh, no reason’, I reply.

I think that in this second situation, we would have to conclude that I am doing something which is different in type from the first example. There my actions had a reason behind them and were not arbitrary, whereas now, I am just mirroring the random actions of the figure on the TV. Really, my actions are just as random as his; there is no reason why I am doing one thing rather than another, because there is no reason why the figure on the TV is doing one thing rather than another. This is what happens if we follow through on the idea that a) we think logically because we are thinking God’s thoughts after him, and b) logic is not independent of God. Poythress is committed to b), as the other option would be “radically antagonistic” to his idea of God, and he is also urging that we accept a) in the passage we just looked at. Thus, if we go where Poythress urges, we become like the person imitating the random actions of the figure on the TV.

But, surely, this is where God’s characteristics come into play? God is consistent, and faithful, and cannot deny himself. Surely this is relevant. He couldn’t think in an irrational way, because this would mean being inconsistent. In this way, his consistency grounds the type of logic he opts for.

This may seem like a promising rebuttal. However (no surprise), I don’t think it is. Intuitionism is consistent, and many people have found it to be rational. Michael Dummett, for example, argued strongly for intuitionism. It is not the case that someone who prefers intuitionism to classical logic is committed to any contradictions as a result (intuitionistic logic is not inconsistent). They are not necessarily going to deny themselves, or be irrational, or be ‘illogical’ (partly because they would advocate for intuitionism being the correct logic!). None of the considerations that Poythress presents gives us any reason to think that God would prefer classical logic over intuitionism on the basis of the character traits he has identified.

It might even be the case that God likes paraconsistent, or even dialetheic logic. If the principle of explosion really were invalid, then God would be dishonest to say that it was valid. If there really were a true contradiction somewhere (and who knows, maybe God has a morally sufficient reason to create one), then God would deny his own act of creation to say that there was not one. Thus, his honesty, truthfulness and consistency could be made to fit with there being contradictions. His characteristics could be retrofitted to be compatible with pretty much any outlandish logical or metaphysical proposal. And this is because they really just float free from, and are orthogonal to, the issues involved in the debates about non-classical logic.

6. Wrapping up

This post is already quite a lot longer than I had anticipated when I started, so I will finish up by briefly going through the final parts of the chapter we are looking at. Those are called ‘Attributes of God’, ‘Divine Attributes of Law’ and ‘The Power of Logic’. In them, Poythress makes the point that logic and God seem to share various properties:

Atemporality:

“If an argument is indeed valid, its validity holds for all times and all places. That is, its validity is omnipresent (in all places) and eternal (for all times). Logical validity has these two attributes that are classically attributed to God.” (p. 65)

Immutability:

“If a law for the validity of a syllogism holds for all times, we presuppose that it is the same law through all times … If a syllogism really does display valid reasoning, does it continue to be valid over time? The law—the law governing reasoning—does not change with time. It is immutable. Validity is unchangeable. Immutability is an attribute of God.” (p. 66)

Immaterial yet effective:

“Logic is essentially immaterial and invisible but is known through its effects. Likewise, God is essentially immaterial and invisible but he is known through his acts in the world.” (ibid)

True/truthful:

“If we are talking about the real laws, rather than possibly flawed human formulations, the laws of logic are also absolutely, infallibly true. Truthfulness is also an attribute of God.” (ibid)

These properties do initially seem to draw a close similarity between logic and God; they appear to share a lot of attributes. And initially, this might seem to be a reason to think that the resemblance is significant. However, consider that the same case could be made for the rules of chess:

There is nothing in the laws of chess which refers to any times or places. If it is true that, according to the rules of chess, a pawn can move two spaces on its first move, then this is true if you play chess in Bulgaria, or in China, or on the moon. Its truth is independent of location, which means that that rule, if true anywhere, is equally true everywhere else. But also, if we went back in a time machine to prehistoric times, and if we had taken a chess set with us, we would not have to consult the local tribe to see if they had a different set of rules for chess. It would still be true that a pawn can move two spaces on its first move, regardless of what year we are playing in. And this means that the rule’s applicability is independent of time. If it is true at one time, it is true at all times. The rules of chess, it seems, are omnipresent and atemporal as well.

Chess is also immutable. You might be thinking that chess used to be played differently. In the past, people had different rules for chess, so it isn’t immutable – chess has a history. Quite true, chess does have a history. But so does logic (trust me, I am editing a book about it). People have changed how they have thought about logical laws. For instance, the idea of existential import is present in Aristotelian logic, but not in classical logic. If we can sidestep this issue with logic, by saying that the historical development of logic is not relevant for undermining the claim that logic is immutable, then we can also do the same for chess.

The rules of chess are immaterial. We cannot touch them or measure them, etc. Yet they govern how actual pieces of material get moved about on actual chess boards. So the rules of chess are immaterial yet effective.

The rules of chess are true. It is true that a pawn can move two spaces on its first move. That is a truth.

So the rules of chess are omnipresent, atemporal, immutable, immaterial yet effective and true. Therefore, God thinks ‘chessly’? God’s nature reflects the rules of chess?

We could run the same sorts of considerations for any different (consistent) logical system, like Łukasiewicz’s three-valued logic. It also has all the same sorts of properties. But can God think classically and also with three truth-values at the same time? Only an inconsistent God could do that. So if God thinks classically, rather than non-classically, there must be something about non-classical logics which means that their possession of the properties that Poythress identifies is not indicative of anything significant. Again, if there is something which makes this difference, we are not told what it is.
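The conflict can be made concrete. In Łukasiewicz’s system (a sketch of mine, not anything from Poythress), excluded middle fails to be a law, so the classical and three-valued verdicts genuinely disagree:

```python
# Lukasiewicz's three-valued logic: truth values 0, 1/2, 1,
# with NOT x = 1 - x and OR as max.
# (My sketch, not an example from the book under discussion.)

values = [0.0, 0.5, 1.0]

def l3_not(x):
    return 1.0 - x

def l3_or(x, y):
    return max(x, y)

for p in values:
    # Classically, p OR NOT p always takes the value 1;
    # in L3 it takes the middle value whenever p does.
    print(p, l3_or(p, l3_not(p)))
```

So whatever verdict God’s thinking delivers on p ∨ ¬p, at most one of the two systems can reflect it.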

7. Conclusion

I have no doubt that Poythress is a very smart guy. I don’t have it in me to get a PhD in mathematics from Harvard. And he clearly understands logic very well. It is puzzling, then, that his discussions of the area I have focussed on in this post are so weak. There is really nothing he has said which helps make the case that logic is dependent on God, rather than being independent of God. I can only conclude that this part of his book was not thought through very well. The only other possibility is that he is so determined to fit together certain doctrines that he is unable to see that his arguments are weak in this area. I may look further at other aspects of the same book in later posts, but from what I have read of it so far, I don’t imagine my verdict will change in any particularly significant way.

Does the scientific method rest on the fallacy of affirming the consequent?

0. Introduction

There have been some rather strange suggestions from certain apologists recently about the nature of the scientific method, such as here and here. Prime among the criticisms is the claim that the scientific method rests on a fallacy called ‘affirming the consequent’. However, this is a strange claim for various reasons. Firstly, the criticism doesn’t engage with how philosophers of science actually talk about the scientific method. From around 1960, with the work of Thomas Kuhn, attempts at summing up the scientific method in a simple inferential procedure have been largely abandoned. It is now widely accepted in the philosophy of science that there is no one simple pattern of reasoning that completely captures the scientific method – a difficulty related to the ‘demarcation problem’. So there is no simple logical model of inference which completely covers everything in the scientific method. But this means that there is no simple model of fallacious inference which completely characterises the scientific method either. In short, the scientific method is too complex to be reduced to a simple informal fallacy.

However, if we pretend that this Kuhnian sea-change had not taken place, then we would most naturally associate the scientific method with the notion of inductive inference, with evidence being given in support of hypotheses (or theories). But induction is not guilty of the fallacy of affirming the consequent, as I shall show here.

After this wander through induction, I will try to explain what the motivations are for the apologetical critique and how that misses the mark by failing to appreciate that scientific advances are often made through falsification rather than verification.

  1. Affirming the consequent

The fallacy of affirming the consequent is any argument of the following form:

  1. If p, then q
  2. q
  3. Therefore, p

The inference from the premises to the conclusion is invalid, because it could be that the premises are true and the conclusion is false. For example, if p is false and q is true, then the premises are true and the conclusion is false. If you want a proof of this, let me know and I will provide it in the comments.

The reason it is a fallacy to use affirming the consequent is just that the argument is deductively invalid. The lesson is this: if you have a true conditional, then you cannot derive the truth-value of the antecedent from the truth of the consequent.
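The countermodel just described can also be found mechanically. Here is a small brute-force check in Python (my addition, not something from the post): it searches all four truth-value assignments for one where both premises are true and the conclusion is false.

```python
from itertools import product

def implies(p, q):
    # Material conditional: 'if p then q' is false only when p is true and q is false
    return (not p) or q

# Affirming the consequent: premises {if p then q, q}, conclusion p.
# The form is invalid iff some assignment makes both premises true
# while the conclusion is false.
countermodels = [(p, q) for p, q in product([True, False], repeat=2)
                 if implies(p, q) and q and not p]

print(countermodels)  # [(False, True)]: p false and q true refutes the form
```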

2. Science affirming the consequent

The idea that the scientific method commits the fallacy above can be explained very easily. We might think that theories make predictions. This could be thought of as a conditional, where the theory is the antecedent and the prediction is the consequent; if the theory is true, then something else should be true as well. So, take a scientific hypothesis (such as ‘evolution is true’, or whatever), and a prediction that the theory makes (‘there will be bones of ancient creatures buried in the ground’, etc). Here we have the conditional:

If evolution is true, then there will be bones of ancient creatures in the ground.

Now we make a measurement, let’s say by digging in the ground to see if there are any bones there, and let’s say we find some bones. So the consequent of our conditional is true. The claim by the apologists is that when a scientist uses this measurement as support for the hypothesis, they are committing the fallacy of affirming the consequent, as follows:

  1. If evolution is true, there will be bones of ancient creatures in the ground
  2. There are bones of ancient creatures in the ground.
  3. Therefore, evolution is true.

This is the sort of reasoning that is being alleged to be constitutive of the scientific method, and, as it is stated here, it is an example of affirming the consequent.

The problem with this line of thinking is not that it isn’t fallacious (it is clearly fallacious), but that it is not what goes on in science.

3. Induction

In 1620, Francis Bacon published a work of philosophy called the ‘Novum Organum’ (or ‘new tool’), in which he proposed a different type of methodology for science from the classical Aristotelian model that came before (Aristotle’s logical works had been collected together under the title ‘Organon’). One way of characterising the Aristotelian method is that one does science by applying deductive syllogistic logic to ‘first principles’ (which are synthetic truths about the world). An example of this sort of first principle in Aristotelian physics might be that all things seek their natural place. It is of the nature of earth to seek to be down, and air to seek to be up, etc. This is, supposedly, why rocks fall to the ground, and why bubbles rise to the surface of water.

Part of Bacon’s dissatisfaction with this idea is that it provides no good way of discovering what the first principles themselves are; it just tells us what to do once we have them. Aristotle’s own ideas about how one discovers first principles are not entirely clear, but it seems that he thinks it is some kind of rational reflection on the nature of things which gets us this knowledge. Regardless, Bacon’s new method was intended to improve on just that, and is explicitly designed as a method for finding out what the features of the world actually are, of discovering these synthetic truths about the world. His precise version of it is a bit idiosyncratic, but essentially he advocated the method of induction.

Without going into the details of Bacon’s method, the idea is that he was making careful observations about the phenomenon he wanted to investigate, say the nature of heat, trying to find something that was common to all the examples of heat. After enough investigation the observation of a common element begins to be reasonably considered as not just a coincidence but as constitutive of the phenomenon under question. (He famously carried out just such an investigation into the nature of heat and concluded that it was ‘the expansive motion of parts’, which is actually pretty close to the modern understanding of it.)

In other words, starting with a limited number of observations of a trend, we move to the tentative conclusion that the trend is in fact indicative of a law. So the general pattern of reasoning would be that we move like this:

  1. All observed a’s are G
  2. Therefore, all a’s are G

The qualification of ‘all observed’ in premise 1 does most of the work in this argument. Obviously, just observing one a to be G would not count as much support for the conclusion. Technically, it would be ‘all observed’ a‘s, but it wouldn’t provide much reason to think that the conclusion is true. In order for the inductive inference to have any force, one must try to seek out a’s and carefully test them appropriately to see if they are always G’s. One must do an investigation.

So if we make a careful and concerted effort to investigate all the a’s we can, and each a we come across happens to be G, then as the cases increase, we will become increasingly confident that the next a will be G (because we are becoming increasingly confident that all a’s are G). This is inductive inference.

With an inductive argument of this form, it has to be remembered that the conclusion does not follow from the premises with deductive certainty. Rather than establish the conclusion as a matter of logical consequence from the truth of the premises, an inductive argument makes a weaker claim; namely that the truth of the premises supports the truth of the conclusion; the truth of the premises provides a justification for thinking that the conclusion is true, but not a logically watertight one. Even the best inductive argument will always be one in which the truth of the premises is logically compatible with the falsity of the conclusion. The best one can hope for is that an inductive argument provides very strong support for its conclusion.
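One simple way to model this growing-but-never-certain confidence formally (my gloss; nothing in the post commits its author to it) is Laplace’s rule of succession: with a uniform prior, after n observed a’s have all been G, the probability that the next a is G is (n + 1)/(n + 2).

```python
# Laplace's rule of succession (my gloss, not something the post
# relies on): with a uniform prior, after n a's have all been
# observed to be G, the probability that the next a is G is
# (n + 1) / (n + 2).

def prob_next_is_G(n):
    return (n + 1) / (n + 2)

for n in [1, 10, 100, 1000]:
    print(n, prob_next_is_G(n))
# Support grows with the number of observed cases, but never
# reaches 1: induction never achieves deductive certainty.
```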

4. Induction affirming the consequent?

It is this inductive type of argument which the apologetical critique above is trying to address, it seems to me. They are saying that this type of scientific argument is really of the following form:

  1. If all a’s are G, then all observed a’s will be G               (If p, then q)
  2. All observed a’s are G                                                         (q)
  3. Therefore, all a’s are G.                                                      (Therefore, p)

Notice that the second premise and the conclusion (2 and 3) are precisely the inductive argument from above; we have just added an additional premise (1), the conditional premise, onto the inductive argument. This fundamentally changes the form of the argument. Now the argument has the form of the deductively invalid argument ‘affirming the consequent’.

There are three problems with this as a critique of scientific inferences. Firstly, we have added a premise to an already deductively invalid argument, and shown that the result is deductively invalid, which is kind of obvious. Secondly, it characterises scientific inferences as a type of deductive inference, when there is good reason for thinking that they are not (at least if scientific inferences are supposed to discover synthetic truths about the world). Lastly, the addition of the first premise seems patently irrational, and obviously a perversion of normal inductive arguments. Let’s expand on each of these three problems:

Firstly

All the apologetical critique has demonstrated is that one can make a fallacious deductive argument by adding premises to an inductive argument. However, inductive arguments are already deductively invalid. There is a fallacy called the inductive fallacy. It consists of taking an inductive inference to be deductively valid. So if you thought that all observed swans being white logically entailed that all swans are white, then you have committed the inductive fallacy, because you would have mistaken the relation between the premise and the conclusion to be one of deductive validity, when it is merely that of inferential support. All observed swans being white does provide some reason to think that they are all white, but the fallacy is in thinking that it alone is sufficient to establish with certainty that they are all white.

The addition of the first premise does nothing to undermine an inductive inference. It doesn’t make it more fallacious than it was in the first place. In a sense, this analysis commits the essence of the inductive fallacy, in that it says that scientific inferences are deductive when they are not; the claim that scientific inferences are guilty of affirming the consequent is itself an instance of the inductive fallacy.

Secondly

We could, if we wanted to, add premises to an inductive argument to make it deductively valid, as follows:

  1. If all observed a’s are G, then all a’s are G        (If p, then q)
  2. All observed a’s are G                                             (p)
  3. Therefore, all a’s are G.                                          (q)

Now the addition of the first premise has made the argument deductively valid, as it is just an instance of modus ponens.

The apologists were reconstructing scientific inferences as fallacious deductive arguments. Yet, even if we patched up the argument, as above, with a deductively valid version of the inference, we still face a problem. This is that we now have a deductive argument, just like with Aristotle’s methodology. The very same reasons would remain for rejecting it, namely that as a methodology it provides no new synthetic truths; it only tells you what follows from purported first principles, not what the first principles are. We would be back to Aristotle’s dubious idea of introspecting to discover them. Thus, it isn’t desirable in principle to reconstruct an inductive argument as a deductive argument – even if the result is deductively valid. This means that the claim that scientific reasoning is a failed attempt at being deductively valid is implausible; even if scientific reasoning succeeded in being deductively valid, that would be no help. The lesson is that inductive inferences are a different type of inference, not to be judged by whether they are deductively valid or not.

Thirdly

Our original inductive argument went from the premise about what had been observed to what had not been observed. The whole point of inductive arguments is to expand our knowledge of the world, and so this movement from the observed to the unobserved is crucial. It is essentially of the form:

The observed a’s are G ⇒ All a’s are G

However, the first premise of the affirming the consequent reconstruction gets this direction of travel the wrong way round. They have it as:

All a’s are G ⇒ the observed a’s are G

If we keep clearly in mind that the objective of the scientific inference is to expand our knowledge, the idea of starting with the set (all a’s) and moving to the subset (the observed a’s) is weird. How could it expand our knowledge to do so? It is an inward move. Yet this conditional has been added to an inductive inference by our apologetical friends precisely as a way of forming the ‘affirming the consequent’ fallacy. Given that it gets the direction of travel exactly backwards, why on Earth would anyone ever accept this as a legitimate characterisation of a scientific pattern of reasoning?

This last concern highlights the cynicism inherent in the affirming the consequent critique. It isn’t a way of honestly critiquing a problem in science, but just an instance of gerrymandering an inductive inference, i.e. the change has been made just for the purposes of making the inference look bad, rather than as a way of highlighting a genuine issue. There is no independent reason for adding it on.

5. Or is it?

It might be claimed that I am pushing this objection too far. After all, there is reason to add the conditional premise on to the inductive inference. This is because theories make predictions. If a theory is true, then the world will have certain properties. And we do find examples of experiments being done in which the positive test result is used as a way of confirming the theory. And if this is right, then it looks like a conditional, and we are saying that the antecedent is true because the consequent is. So are we not back at the original motivation for the affirming the consequent critique?

Well, no. We are not. Here’s why. Let’s take an example – the textbook example. In Einstein’s general relativity, one of the many differences with classical Newtonian physics is that gravity curves spacetime. That means that there would be observable differences between the two theories. One such situation is when the light from a star which should be hidden behind the sun is bent round in such a way as to be visible from Earth:

[Image: light from a star positioned behind the sun is bent around it, making the star visible from Earth]

We already knew enough about the positions of the stars to be able to predict where a given star would be on the Newtonian picture, and the details of the Einsteinian theory provided ways to calculate where the star would be on that model. So, the Einsteinian theory said the star would be in position X, and the Newtonian model said it would be in position Y.

These experiments were actually done, and the result was that the stars were measured to be where Einstein’s theory predicted, and not where Newton’s theory predicted.

Is this an example of the affirming the consequent fallacy? It might look like it. After all, it may well look like we were making this sort of argument:

  1. If general relativity is correct, then the star will be at X  (if p, then q)
  2. The star is at X                                                                            (q)
  3. Therefore, general relativity is correct.                                (p)

However, the real development was not that general relativity was confirmed when these measurements were made, but that Newtonian physics was falsified. Corresponding to the above argument we have a different one:

  1. If Newtonian physics is correct, then the star will be at Y (If p, then q)
  2. The star is not at Y                                                                       (~q)
  3. Therefore, Newtonian physics is not correct.                        (~p)

The first argument is a logically invalid deductive argument; it is affirming the consequent. But the second argument is just modus tollens (If p, then q; ~q; therefore, ~p), and that is deductively valid.
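The difference between the two argument forms can be verified by brute force over the four truth-value assignments (again a sketch of mine, not part of the original exchange): modus tollens has no countermodel, while affirming the consequent does.

```python
from itertools import product

def implies(p, q):
    # Material conditional
    return (not p) or q

assignments = list(product([True, False], repeat=2))

# Modus tollens: premises {if p then q, not-q}, conclusion not-p
mt_countermodels = [(p, q) for p, q in assignments
                    if implies(p, q) and (not q) and not (not p)]

# Affirming the consequent: premises {if p then q, q}, conclusion p
ac_countermodels = [(p, q) for p, q in assignments
                    if implies(p, q) and q and not p]

print(mt_countermodels)  # []: modus tollens is deductively valid
print(ac_countermodels)  # [(False, True)]: affirming the consequent is not
```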

What we learned with the measurement of light bending round the sun was not that general relativity was true as such, but that Newtonian physics, and any theory relevantly similar to it, was false. General relativity may still be false, for all the experiment showed us, but it showed us that whatever theory it is that does correctly describe the physics of our universe is going to be more along the lines of general relativity than Newtonian physics. We learned something about the world, even if we did not confirm with complete certainty that relativity was true. And this is what scientific progress is like.

6. Conclusion

It would be affirming the consequent if someone thought that the positive measurement deductively entailed that general relativity was true. If any scientist has gone that far, then they are mistaken. It doesn’t mean that the scientific method itself is mistaken however.

Induction, God and begging the question

0. Introduction

I recently listened to a discussion in which an apologist advanced a particular argument about the problem of induction, as part of a dialectic aimed at pinning a sceptic down on the topic. The claim being advanced was that inductive inferences are instances of the informal fallacy of ‘begging the question’, and thus irrational. This was being said in an attempt to get the sceptic to back down from the claim that induction is justified.

However, the apologist’s claim was a mistake; it was a mistake to call inductive inferences instances of begging the question. Unwrapping the error is instructive in seeing how the argument ends up when repaired. I argue that the apologetic technique used here is unsuccessful when taken to its logical conclusion.

  1. Induction

Broadly speaking, the problem of induction is how to provide a general justification for inferences of the type:

All observed a’s are F.

Therefore, all a’s are F.

This sort of inference is not deductively valid; there are cases where the conclusion is false even though the premises are true. So, why do we think these are good arguments to use if they are deductively invalid? How do we justify using inductive inferences?

Usually, when we justify a claim, we either present some kind of deductive argument, or we provide some kind of evidential material. These are each provided because they raise the probability of the claim being true. So if I say that lead pipes are dangerous, I could either provide an argument (along the lines of ‘Ingesting lead is dangerous, lead pipes cause people to ingest lead, therefore lead pipes are dangerous’), or I could appeal to some evidence (such as the number of people who die of lead poisoning in houses with lead pipes), etc.

Given this framework, when we are attempting to justify the general use of inductive inferences, we can either provide a deductive justification (i.e. an argument) or an inductive justification (i.e. some evidence).

A deductive justification would be an argument which showed that inductive inference was in some sense reliable. But with any given inductive inference, the premises are always logically compatible with the negation of their conclusion. With any given inference, there is no a priori deductive argument which could ever show that the inference leads from true premises to true conclusion. You cannot tell just by thinking about it a priori that bread will nourish you or that water will drown you, etc. No inductive inference can be known a priori to be truth preserving. Thus, there can be no hope of a deductive justification for induction.

Let’s abandon trying to find a deductive justification. All that is left is an inductive justification. But any inductive inference in support of induction in general is bound to end up begging the question. Let’s go through the steps.

Imagine you are asked why you think that inductive inferences are often rational things to make. You might want to reply that they are justified because they have worked in the past; after all, you might say, inductive inferences got humankind to the moon and back. The idea is that induction’s success is some evidential support for induction.

However, this is not so, and we should not be impressed by induction’s track record. In fact, it is a red herring, for suppose (even though it is an overly generous simplification) that every past instance of any inductive inference made by anyone ever went from true premises to a true conclusion, i.e. that induction had a perfectly truth-preserving track record. Even if the track record of induction was perfect like this, we would still not be able to appeal to this as a justification for our next inductive inference without begging the question. If we did, then we would be making an inductive inference from the set of all past inductions (which we suppose for the sake of argument to be perfectly truth-preserving) to the next future induction (and the claim that it is also truth-preserving). However, moving from the set of past inductive inferences to the next one is just the sort of thing we are trying to justify in the first place, i.e. an inductive inference. It is just a generalisation from a set of observed cases to unobserved cases. To assume that we can make this move is to assume that induction is justified already.

So if someone offers the (even perfect) past success of induction as justification for inductive inferences in general, then this person is assuming that it is justified to use induction when they make their argument. Yet, the justification of this sort of move is what the argument is supposed to be establishing. Thus, the person arguing in this way is assuming the truth of their conclusion in their argument, and this is to beg the question.

Thus, even in the most generous circumstances imaginable, where induction has a perfect track record, there can be no non-question begging inductive justification for future inductive inferences.

2. Does induction beg the question?

We have seen above that when trying to provide a justification for induction, there can be no deductive justification, and no non-question begging inductive justification. Does this mean that inductive inferences themselves beg the question? The answer to that question is quite clearly: no.

There is, however, an informal fallacy associated with induction, and it is called (not surprisingly) the fallacy of induction. The fallacy lies in treating inductive arguments like deductive arguments. The irrationality being criticised by the fallacy of induction is the irrationality of supposing that because ‘All observed a‘s are F’ is true, it follows that ‘All a‘s are F’ is true. Making that move is a fallacy.

Begging the question is when an argument is such that the truth of the conclusion is assumed in the premises. Inductive inferences do not assume the truth of the conclusion in the premises. For example, when you decide to get into a commercial plane and fly off on holiday somewhere, you are making an inductive inference. This is the inference from all the safe flights that have happened in the past, to the fact that this flight will be safe. The premise is that most flights in the past have been safe. Because (as an inductive argument) the premise is logically compatible with the falsity of its conclusion, the premise clearly does not assume that the next flight will be safe, and so the argument does not beg the question.

In fact, this shows that no argument can be both a) an inductive argument and b) guilty of the fallacy of begging the question. So, technically, the apologist’s claim that inductive inferences beg the question is provably false.

Of course, if we tried to justify induction in general by pointing to the past success of induction, that would be begging the question. But to justify the claim that the next flight will be safe by pointing out the previous record of safe flights is not begging the question, it is just an inductive inference.

So the apologist who made the claim that induction begs the question is just wrong about that. He was getting confused by the fact that justifying induction inductively is begging the question. But when we keep the two things clear, it is obvious that inductive inferences themselves do not, and indeed cannot, beg the question.

3. But what if it did?

Induction does not beg the question. That much is pretty clear. But what would follow if induction were guilty of some such fallacy? Well, if each inductive inference were an instance of, say, circular reasoning (like begging the question), then people would be acting irrationally when they make inductions, like deciding it is safe to fly on a plane. Yet it seems that people are not irrational when they make decisions like this. Sure, there are irrational inductive inferences, such as inferring from the fact that the last randomly selected card was red that the next card will be red. But not all inductive inferences are like this; the plane example is not. So the person who wants to claim that inductive inferences are circular has to say something which explains this distinction between paradigmatically rational inferences (like flying) and less rational (or irrational) ones. Saying that they are all circular would leave no room to distinguish between the good and bad inductive inferences.

So the apologist owes us an account of how inferences that are supposedly circular can nonetheless seem perfectly rational. In response, they could make the radical move of rejecting inductive inferences altogether. This would mean doubling down on the claim that induction is circular; ‘Yes, it is circular’, they will say, ‘throw the whole lot out!’.

Yet they are unlikely to make this move. Everyone makes inductive inferences all the time. Every time you take a breath of air, or a drink of water, you are inferring what will result from your previous experiences of those activities. You are inductively inferring that water will quench your thirst because it has done so in the past. So if the apologist wants to reject induction altogether, he must not rely on it like this himself, on pain of hypocrisy.

More likely than outright rejection, they will try to maintain that although induction is in some sense irrational, some inductions can still be made rationally. After all, there is a big difference between inferring that the next plane will land safely, or that the next glass of water will nourish, and inferring that the next card will be red. The former are well supported by the evidence, whereas the latter is not. This is what allows us to distinguish between rational and irrational inductive inferences. Not all inductive inferences are on a par; some have lots of good evidence backing them up, and some have none.

So, if the apologist wants to maintain that all inductive inferences are guilty of begging the question, then (assuming they don’t deny the rationality of all induction) they still owe us an account of what makes the difference between a rational inductive inference and an irrational one. And the account would have to be something along the evidential lines I have just sketched above. How else does one figure out which inductive inferences are rational and which are not, if not by appeal to the evidence? If some new fruit were discovered, you would not want to be the first person to try it, for fear of it being poisonous. But if you see 100 people eat the fruit without dying, you would begin to feel confident that it wasn’t poisonous. This is perfectly rational. Thus, even if the apologist’s claim were correct, so long as they do not reject induction altogether, they end up in the same situation as the atheists: having to distinguish between good and bad inductive inferences based on the available evidence in support of them.
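To make the evidential point concrete, here is a minimal sketch (my own illustration, not something proposed in the original discussion) of one standard way of grading such inferences by their track record: Laplace’s rule of succession, which estimates the probability that the next case conforms, given the observed cases so far.

```python
# Illustrative only: Laplace's rule of succession, one classic way of
# letting the evidence grade an inductive inference.

def rule_of_succession(successes: int, trials: int) -> float:
    """Estimated probability that the next trial succeeds,
    given `successes` out of `trials` observed so far."""
    return (successes + 1) / (trials + 2)

# A newly discovered fruit that no one has tried: no evidence either way.
print(rule_of_succession(0, 0))      # 0.5

# 100 people have eaten the fruit and none died: high confidence.
print(rule_of_succession(100, 100))  # ~0.99
```

On this (hypothetical) measure, the well-evidenced inference (the safe fruit, the safe flight) scores far higher than an inference from a single red card, which is exactly the kind of distinction the evidential account requires.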

Even if the charge of irrationality stood (which it does not), it would play no role in distinguishing good inductive inferences from bad ones. This drains the point being made of most of its real force.

The claim that induction is irrational was not true, but in a sense it would make no material difference even if it were; we would still need to distinguish the better inductions from the worse ones.

4. Justifying induction with God

Some theists suggest that they have an answer to this problem which is not available to an atheist. The idea is that through his revelation to us, God has communicated that he will maintain the uniformity of nature. Given this metaphysical guarantee of uniformity, inductive inferences can be deductively justified. When we reason from the set of all observed a‘s being F to all a‘s being F, we are projecting a uniformity from the observed into the unobserved. Yet we were unable to justify making this projection. The theist’s answer is that God guarantees the projection.

We may initially suspect foul play here. After all, how do we know that God will keep his word? It does not seem to be a logical truth that because God has promised to do X, he will do X. It is logically possible for anyone to promise something and not do it. Thus, it seems like we have just another inductive inference. We are saying that because God has always kept his promises up until now, he will continue to do so in the future. The best we can get out of this is an inductive justification for induction, which is just as question begging as the atheist version of appealing to the past success of induction. I think this objection is decisive. However, let’s suspend it for the time being. Even if we could somehow get around it, perhaps by saying that it is a necessary truth that God will not break his promise, I say that even then we have an insurmountable problem.

5. Why that doesn’t help

The problem now is that while God may plausibly have promised to maintain the uniformity of nature, he has not revealed to us precisely which inductive inferences are the right ones, i.e. the ones which track the uniformity he maintains, as opposed to those which do not. God’s maintaining the uniformity of nature does not guarantee that inductive inferences are suddenly truth-preserving. Even if he does maintain it, that did not stop the turkey from unsuccessfully inferring, on Christmas Eve, that he would get fed the next day, and it did not stop those people who boarded the plane which ended up crashing. Even if God has maintained the uniformity of nature, and even if he has revealed that he has done so in such a way that we can be certain about it, we are still totally in the dark about which inductive inferences we can successfully make.

So let’s suppose we live in a world where God maintains the uniformity of nature, and that he has told us that he does so. When faced with a prospective inductive inference, and trying to decide whether making it is more rational (like the plane ride) or irrational (like the card colour), what could we appeal to in order to help us make the distinction? We cannot appeal to God’s word, as nowhere in the bible is there a comprehensive list of potential inductive inferences which would be guaranteed to be successful if made (which would be tantamount to a full description of the laws of nature). Priests were not able to consult the bible to determine which inductive inferences to make when the plague was sweeping through medieval Europe. They remained unaware of which of their actions were risky (and would lead to death) and which were safe (and would lead to survival). The only way to make the distinction between good inductive inferences and less good ones is by looking at the evidence for them out there in the world. Knowing that God has guaranteed some regularity or other is no help if you don’t know which regularity he has guaranteed.

The problem is that we are unable to determine, based only on a limited sample, whether any inductive generalisation we make is actually latching on to a uniformity of nature, or merely on to a coincidence. When Europeans reasoned from the fact that all observed swans were white to the conclusion that all swans were white, they thought that they had discovered a uniformity of nature; namely the colour of swans. They didn’t know that in Australia there were black swans. And this sort of worry is going to be present in each and every inductive inference we can make, even if we postulate that we live in a world where God maintains the uniformity of nature and has revealed that to us. The problem is primarily epistemological: how can we know which inductive inferences are truth-preserving? The apologist’s answer is metaphysical: God guarantees that some inductive inferences are truth-preserving (i.e. the ones which track his uniformities). For the apologist’s claim to be of any help, God would have to reveal to us not just that he will maintain the uniformity of nature, but which purported sets of observations are generalisable (i.e. which ones connect to a genuine uniformity). Unless you know that God has made the whiteness of swans a uniformity of nature, you cannot know whether your induction from all the observed cases to all cases is truth-preserving. And God does not reveal to us which inductive inferences are correct (otherwise Christians would have a full theory of physics).

In short, even if we go all the way down the road laid out by the apologist, they still face all the same issues that atheists (or just people of any persuasion who disagree with the theist’s argument laid out here) do. They have no option but to use the very same evidential tools that atheists (etc) do to make the distinction between the more rational and less rational inductive inferences.

6. Conclusion

The apologist’s claim was that inductive inferences beg the question. I showed that this is not the case (and that in fact it could not be the case). Then I went on to see what would be at stake if the apologist had scored that point. We saw that the apologist would still need to distinguish better from worse inductive inferences, just like the atheist, and would have no option but to use evidence to do so. Then we looked at the idea that God guarantees some uniformity of nature. We saw that this claim makes no material difference to the status of inductive inferences, and so cannot be seen as a justification of induction in any real sense.