The Logical Form of the Grim Reaper Paradox

[Edit: it turns out that something quite similar to this is argued for in this paper by Nicholas Shackel]

0. Introduction

The Grim Reaper Paradox (GRP) comes in various forms. Sometimes it is about the divisibility of time, and sometimes it is about whether the past (or future) is finite. Even when we fix on which of these issues it is aimed at, there are lots of different ways it can be cashed out. It can involve reapers swinging their scythes, or placing point particles on planes, etc. Much of the discussion turns on how these details are to be understood.

Here I want to highlight what the GRP is at the most abstract level. It might be that, once we think about it from this rarefied perspective, without the complications about what exactly the details are supposed to be, we can see the paradox more clearly.

1. The Schema

Any GRP has a logical form, which I shall refer to as the schema. Let’s take the following as a toy example:

The past has no beginning. There is an eternal machine such that each day at midnight, it checks to see if it has printed out anything yet from its printer. If it has, then it hibernates for the rest of the day. If it has not printed anything out yet, it immediately prints out the date and then hibernates for the rest of the day.

This is enough to generate our paradox. Suppose the machine finds nothing printed out today. Then nothing was printed out as of yesterday’s check either, in which case the machine would have printed the date yesterday, contradicting our supposition. So it can’t be that the machine finds nothing printed out today. But the same reasoning applies to yesterday too, and to every previous day: for any given day, the machine cannot have printed that day’s date, because something would already have been printed before it. So although it can’t be that no date is printed out, no date could be the one printed on the paper.

The way to conceptualise this abstractly is as follows. There is a rule that characterises this example (and all the others): a universal condition that applies at all times. That condition says that some proposition p (which might be that a reaper kills Fred, or that a reaper places a point particle on a plane, or that a machine prints out a date, etc.) is true at a time if and only if p is not true at any earlier time:

For all t (p at t iff for all t' (if t' < t, then ~p at t'))

It says that p is true at t if and only if p is not true at any earlier time.

The schema on its own is not unsatisfiable. That is to say, if there are only finitely many times, then the schema can be true. In particular, the schema is true if there are only finitely many times, p is true at the first time, and p is false at every later time. At that first time, p is true at t, and on the other side of the biconditional, the nested conditional (if t' < t, then ~p at t') has a false antecedent; as it is in the scope of a universal quantifier, it is vacuously true. So both sides of the schema are true. At all other times, p is not true at t (so the left side is false), and on the other side of the biconditional we have a condition that says that ~p is true at all earlier times, which is false because (as we just went through) p is true at the first time. So in all cases, the biconditional holds.
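We can check this by brute force. Here is a minimal Python sketch (mine, not anything from the GRP literature) that enumerates every assignment of p over a small finite series of times and confirms that exactly one satisfies the schema: the one where p holds at the first time only.

```python
from itertools import product

def satisfies_schema(assignment):
    """The schema: p at t iff ~p at every earlier time t'."""
    times = range(len(assignment))
    return all(
        assignment[t] == all(not assignment[u] for u in times if u < t)
        for t in times
    )

# Enumerate every possible assignment of p over five times.
models = [a for a in product([False, True], repeat=5) if satisfies_schema(a)]
print(models)  # [(True, False, False, False, False)]: p at the first time only
```

Notice that the unique model puts p exactly at the minimum time, which already hints at the trouble: a series with no minimum leaves p nowhere to go.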

But if there is no first time, then we run into the familiar problem. Suppose p is true at some time t. Then the right side of the biconditional tells us that p is not true at any earlier time t'. Now take some such earlier time t' (there is one, since there is no first time). Every time t'' earlier than t' is also earlier than t, so p is not true at any time earlier than t'. That makes the right side of the biconditional true at t', and so, reading the biconditional from right to left, p is true at t'. But we just said that p is not true at t'. Contradiction.

Suppose instead that p is not true at any time. Then pick any time t: the left side of the biconditional is false at t. But if p is not true at any time, then it’s not true at any time t' earlier than t, which makes the right side of the biconditional true, which in turn implies that p is true at t. Contradiction.

2. Conclusion

Now we have a purely logical version of the argument, freed from any distractions about reapers, or point particles, or eternal machines. The GRP really just says:

  1. There is no first time t
  2. For all t (p at t iff for all t' (if t' < t, then ~p at t'))

As we have just seen, you can’t have both of these together. That is the GRP.
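In fact, the inconsistency of 1 and 2 is simple enough that it can be machine-checked. Here is a sketch of the derivation in Lean 4 (my own formalisation, not anything from the literature; it assumes Mathlib for the order lemmas, writes P t for ‘p at t’, and adds the harmless assumption that there is at least one time, since with no times at all both premises hold vacuously):

```lean
import Mathlib.Order.Basic

-- Premises: times are ordered, there is at least one time, every time has an
-- earlier time (no first time), and the schema holds. Conclusion: False.
theorem grim_reaper {T : Type} [Preorder T] [Nonempty T] (P : T → Prop)
    (no_first : ∀ t : T, ∃ t', t' < t)
    (schema : ∀ t, P t ↔ ∀ t', t' < t → ¬ P t') : False := by
  -- Case one of section 1: p is true at no time. For if P t, the schema rules
  -- P out at every earlier time; but any earlier t' then satisfies the right
  -- side of the schema (everything below t' is also below t), forcing P t'.
  have notP : ∀ t, ¬ P t := by
    intro t hPt
    obtain ⟨t', ht'⟩ := no_first t
    exact (schema t).mp hPt t' ht'
      ((schema t').mpr fun t'' ht'' => (schema t).mp hPt t'' (lt_trans ht'' ht'))
  -- Case two: if p is true at no time, the right side of the schema holds at
  -- any time t₀, forcing P t₀ after all.
  obtain ⟨t₀⟩ := ‹Nonempty T›
  exact notP t₀ ((schema t₀).mpr fun t' _ => notP t')
```

The proof just replays the two cases from section 1: first p is ruled out at every time, and then the all-~p history below any given time forces p back in.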

 

Counting forever

0. Introduction

Here I just want to explain a simple point which comes up in the discussion of whether it is possible to ‘count to infinity’, and what that tells us about whether time must have had a beginning. Wes Morriston deserves the credit for explaining this to me properly. All I’m doing is showing two places where his point applies.

1. The targets

I have in mind two contemporary bits of philosophical literature. One is found in Andrew Loke’s work, specifically his 2014 paper (pp. 74-75) and his 2017 book (p. 68). The other is found in Jacobus Erasmus’ 2018 book (p. 114). In each case, the author argues that it is not possible to count to infinity because, no matter which number one counts to, there are always more numbers left to count. Here is how they express this point.

Firstly, here is Loke in his paper:

“If someone (say, George) begins with 0 at t0 and counting 1, 2, 3, 4, … at t1, t2, t3, t4, … would he count an actual infinite at any point in time? … The answer to the question is ‘No’, for no matter what number George counts to, there is still more elements of an actual infinite set to be counted: if George counts 100,000 at t100,000, he can still count one more (100,001); if he counts 100,000,000 at t100,000,000, he can still count one more (100,000,001).” (Loke, 2014, p. 74-75)

Secondly, here is Loke in his book:

“Suppose George begins to exist at t0, he has a child at t1 who is the first generation of his descendants, a grandchild at t2 who is the second generation, a great-grandchild at t3 who is the third generation, and so on. The number of generations and durations can increase with time, but there can never be an actual infinite number of them at any time, for no matter how many of these there are at any time, there can still be more: If there are 1000 generations at t1000, there can still be more (say 1001 at t1001); If there are 100,000 generations at t100,000, there can still be more (100,001 at t100,001), etc.” (Loke, 2017, p. 68)

Finally, here is Erasmus in his book:

“Consider, for example, someone trying to count through all the natural numbers one per second (i.e. 1, 2, 3, . . . ). Can the person count through the entire collection of numbers one per second? Clearly not, for no matter how many numbers the person has counted, there will always be an infinite number of numbers still to be counted (i.e. for any number n that one counts, there will always be another number n + 1 yet to be counted). Therefore, it is impossible to traverse an actually infinite sequence of congruent events and, thus, if the universe did not come into existence, the present event could not occur.” (Erasmus, 2018, p. 114)

What each is saying is that if someone starts counting now, they will never finish counting. And this is true, of course.

Think about the following mountain: it has a base camp at the bottom, but it is infinitely tall and has no highest point (for each point on the mountain, there is another one which is higher than that point). Can one start at the bottom and climb to the top of such a mountain? No, because such a mountain has no top.

(My dispute with Craig was not over exactly this point, but over something slightly more subtle, namely whether the following is false:

A) It is possible that George will count infinitely many numbers.

I say that A is true. All that considerations of the Loke-Erasmus sort get you is that the following is false:

B) It is possible that George will have counted infinitely many numbers.

But we can leave this point here for now.)

2. My point

All I want to highlight today is that the fact that there are ‘always more numbers left to count’ only applies to certain types of infinite count. Imagine the following three scenarios:

i) George is trying to count the positive integers, in this order: (1, 2, 3, …)

ii) George is trying to count the negative integers, in this order: (…, -3, -2, -1)

iii) George is trying to count all the negative and all the positive integers, in this order: (…, -3, -2, -1, 0, 1, 2, 3, …)

We can think of the scenarios like this:

  • Scenario i) is like climbing up an infinitely tall mountain that has a bottom but no top;
  • Scenario ii) is like climbing up an infinitely tall mountain that has no bottom but does have a top;
  • Scenario iii) is like climbing up an infinitely tall mountain with no bottom and no top.

Simple intuition tells you that the third scenario involves counting more numbers than the first two (their sum, in fact). However, due to the nature of infinite sets, each scenario actually involves counting a set with the same cardinality (that is, ℵ0). Put another way, each mountain is the same height as the other two.
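If it helps, ‘same cardinality’ just means that the counting numbers 1, 2, 3, … can be paired off one-to-one with each set. Here is a quick Python sketch of such pairings (the function names are mine; the pairing for the third set is the standard interleaving trick):

```python
# Pair the counting numbers 1, 2, 3, ... one-to-one with each set.

def to_positives(n):
    """1 -> 1, 2 -> 2, 3 -> 3, ... (scenario i)"""
    return n

def to_negatives(n):
    """1 -> -1, 2 -> -2, 3 -> -3, ... (scenario ii)"""
    return -n

def to_all_integers(n):
    """1 -> 0, 2 -> 1, 3 -> -1, 4 -> 2, 5 -> -2, ... (scenario iii)"""
    return n // 2 if n % 2 == 0 else -(n // 2)

# Every integer, negative or positive, gets hit exactly once:
print(sorted(to_all_integers(n) for n in range(1, 10)))
# [-4, -3, -2, -1, 0, 1, 2, 3, 4]
```

The third pairing walks outwards from 0, so even though scenario iii) ‘feels’ twice as big, every one of its numbers still gets a unique counting number.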

The key point I want to make is this. It is obviously true that the Loke-Erasmus observation (that “there will always be an infinite number of numbers still to be counted”) only applies to scenarios i) and iii). It just doesn’t apply to scenario ii).

When George is counting up through the positive integers, as in scenario i), he always has the same quantity of numbers left to count (always ℵ0-many). The same goes for scenario iii): no matter where George is in his task, ℵ0-many numbers remain.

But in scenario ii) he is in a very different situation. No matter where George is in his task, he only ever has finitely many numbers left to count. He doesn’t have infinitely more mountain to climb in scenario ii): this mountain has a top, and wherever he is, George is only finitely far from it. Clearly, he can reach the top of such a mountain if he is already some way along his climb.
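To make the asymmetry vivid: how much is left depends entirely on which scenario George is in. A toy sketch (my illustration, with scenario ii) modelled as counting the negative integers up to -1):

```python
import math

def numbers_left(scenario, just_counted):
    """How many numbers remain after George counts 'just_counted'."""
    if scenario == "i":
        # 1, 2, 3, ...: everything above what he just counted remains.
        return math.inf
    if scenario == "ii":
        # ..., -3, -2, -1: only just_counted+1, ..., -1 remain.
        return -just_counted - 1
    if scenario == "iii":
        # ..., -1, 0, 1, ...: again, infinitely much remains above him.
        return math.inf

print(numbers_left("ii", -1_000_000))  # 999999: a long way to go, but finite
print(numbers_left("i", 1_000_000))    # inf: always infinitely many left
```

Only scenario ii) ever returns a finite answer, and that is the whole point.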

And here is where the rubber meets the road. If there is a problem with scenario ii), it is not that “there will always be an infinite number of numbers still to be counted”, because that is just false. And scenario ii) is the one that Loke and Erasmus ultimately have in their sights. That’s because it is the one where George ‘completes’ an infinite task by counting an actually infinite amount of numbers.

Put simply, the argument they are making looks like this:

  1. George cannot count up to infinity, because there would always be more numbers left for him to count.
  2. Therefore George cannot count down from infinity.

Put like this, the fact that the argument is invalid is plain to see.

The problem, if there is one, is not about completing, or finishing, an infinite task. It might be that the problem has something to do with starting such a task. But if so, there is really no point in talking about the impossibility of finishing, as that is a different issue.

3. Conclusion

There might be other reasons to think that one cannot count down from infinity, of course. Indeed, Loke and Erasmus both have more to say about this issue. But what one often finds in discussions like this is the following sort of move: an observation that applies to an endless series is carried over to a beginningless series. As we have just seen, sometimes the initial observation (such as that there will always be more numbers left to count) just doesn’t apply to both, and the switch from one to the other is therefore not valid.