0. Introduction

One of the two philosophical arguments that are supposed to show that the history of the universe must be finite is the argument from the impossibility of forming an actual infinite by successive addition. I think this argument begs the question, because one of its premises can only be true if we assume that the conclusion is true.

1. The argument

The argument runs as follows:

1. A collection formed by successive addition cannot be actually infinite.
2. The temporal series of past events is a collection formed by successive addition.
3. Therefore, the temporal series of past events cannot be actually infinite.

I am going to grant premise 2 for the sake of the argument, although I think it could be questioned. All that I want to focus on is premise 1. This premise, it seems to me, can only be regarded as true if we assume that the conclusion is true.

First, what is ‘successive addition’? It means nothing more than continually adding one over and over again, 1 + 1 + 1 …, which is itself akin to counting whole numbers one at a time, 1, 2, 3 … . The idea is that such a process can never lead to anything but a finite result, as Craig explains:

“…since any finite quantity plus another finite quantity is always a finite quantity, we shall never arrive at infinity even if we keep on adding forever. Infinity in this case serves merely as a limit which we never attain.”

2. The counterexample

There is obviously a close connection between numbers and our concept of time. Exactly what that relationship is doesn’t matter too much here. One thing that seems obvious, though, is that we routinely associate sequences of whole numbers with durations of time. Consider the convention which says that this year is 2019. What this means is that if someone had been slowly counting off integers, one per year, since year 0, by now he would have counted up to the number 2019.

Adapting this familiar idea, we can postulate that there is some metronomic person counting off whole numbers, one every minute. After three minutes he will have counted up to the number 3, after one hundred minutes he will have counted up to the number 100, and so on.

Let’s make this very simple and intuitive idea slightly more formal. Let us think of a counting function for this person. It takes an input, x, and returns an output, y. The value of x will be some amount of time that has passed (three minutes, one hundred minutes, etc), and the value of y will be whatever number has been counted to (3, 100, etc).

This counting function is therefore akin to asking the question:

‘After x units of time have passed, which number have they counted to?’

The value of y will be the answer to the question.

When Craig says “any finite quantity plus another finite quantity is always a finite quantity”, we can cash this out in our function as saying something like:

If the value of x is finite, then the value of y is finite.

No matter how much time has passed, so long as it is a finite amount of time, then the number that has been counted to must be merely some finite number.

However, what happens if the value of x is not finite (i.e. if it is transfinite)? Let’s suppose that the amount of time that has passed is greater than any finite amount, i.e. that an infinite amount of time has passed. Will the value of y still remain finite? Clearly, the answer here is no. After all the finite ordinal numbers comes the first transfinite ordinal number, ω. If the amount of time that has passed is greater than any finite amount, but less than any other transfinite amount, then the number that will have been counted to will be ω. That is, the function we have been using so far returns this value if we set the value of x to the right amount. But what this seems to say is that if you had been counting for an infinite amount of time, then the number you would have counted to would be greater than any finite number.
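The counting function can be stated compactly. This is my own formalisation, not Craig’s; $f$ is the counting function, $x$ is the elapsed time measured in ordinal-many one-minute ticks, and $y = f(x)$ is the number counted to:

```latex
f(x) = x, \quad \text{so} \quad
\begin{cases}
  f(n) = n            & \text{for every finite ordinal } n, \\
  f(\omega) = \omega  & \text{for the first transfinite ordinal } \omega.
\end{cases}
```

Premise 1 then amounts to restricting the domain of $f$ to finite values of $x$: it is true under that restriction, and false without it.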

At no point have we specified that we are not using successive addition, i.e. counting. We have explicitly said that this is what we are doing. All we have varied is how long we have been doing it. The lesson seems to be that if you only count for a finite amount of time, then you cannot construct an actual infinite by successive addition, but if you do it for an actually infinite amount of time, then you can.

Thus, in order for premise 1 to be considered correct, we have to restrict the amount of time spent counting to arbitrarily high but finite amounts. With that restriction in place, the premise looks true. But if we take the restriction off, then the premise is false, as we just saw.

This means that whether the premise is true or false depends on whether we think that the value of x can be more than any finite number or not. And that just means whether the extent of past time can be infinite or not. If it can be, then we have enough time to have counted beyond any finite number. Yet, that is the very question we are supposed to be settling here. The conclusion of the argument is that the past is finite. Yet we need to suppose precisely this proposition in order to make the first premise true. Without it, the first premise is false.

Thus, the argument seems to simply beg the question here.

## More on the FreeThinking Argument

0. Introduction

In the last post, I explained the ‘FreeThinking Argument Against Naturalism’ (FAAN). I criticised premise 3, which was that “If libertarian free will does not exist, rationality and knowledge do not exist”. In this post, I will look at a reply Stratton gave to an objection that is somewhat similar to mine, in a post he wrote called Robots and Rationality. In the end, nothing he says helps at all.

1. Robots

Stratton says that some people object to premise 3 of his argument by saying that “computers seem to be rational and they do not possess libertarian free will”. Nevertheless, he thinks that he has a reply to this such that the “deductive conclusions of the Freethinking Argument remain unscathed”.

Right off the bat, Stratton begs the question against the view I outlined in the last post. Consider the very next line, in which he says:

“…simply by stating that computers are, or robots of the future could be, rational in a deterministic universe *assumes* that the determinist making this claim has, at least briefly, transcended their deterministic environment and freely inferred the best explanation (the one we ought to reach) via the process of rationality to correctly conclude that computers are, in fact, rational agents.”

But that’s wrong. The act of stating “computers seem to be rational and they do not possess libertarian free will” can be done in a deterministic universe, no problem. It doesn’t require ‘transcending the environment’. You could even make that statement in a deterministic, naturalistic universe, and have a justified true belief about it while you are doing it.

Here’s how. Assume internalism about justification, so that justifications are beliefs. That means that for me to have a justified true belief that p, the true belief needs to be supported by other beliefs, say q and r, which are the justifications for believing that p. There are two things we can also say about how the justifications need to be related to p. This is not the only way you could cash this out, but it will do for our purposes:

• They have to be related to p in the ‘right sort of way’ (arbitrary beliefs cannot be justifications), and
• It needs to be that my believing q and r is why I believe p (it’s no good for me to believe q and r, but believe p because the coin landed heads, etc).

What does it mean for q and r to be related to p in the ‘right sort of way’? This is obviously a very complicated question to answer. We don’t have to settle it here, though. Let’s just consider clear cases. The relationship needs to be such that q and r provide significant support for p; they raise the probability of p, or at least the subjective assessment of that probability, for the person with the beliefs. A very clear case would be if q and r logically entailed p. Another would be if q and r raised the probability of p far above 0.5, to something like 0.9. We don’t need to worry about exactly where the line is, because we will consider just one clear example, involving logical entailment, which is uncontroversially a justification. And we only need one example to show that the principle Stratton appeals to is false. So here it is.

Assume I have two beliefs:

A) My laptop seems to be rational

B) My laptop does not have LFW

I could have those beliefs in a deterministic, naturalistic universe, no problem. These beliefs would be brain states that I have (on assumption). Let’s say that they deterministically cause me to have another brain state, which is:

C) (at least some) computers seem to be rational and they do not possess LFW

Because this belief is caused by the first two, it’s true that I only believe it because I believe the first two. Yet those two paradigmatically justify the third: they logically entail it. Anyone who believes that their laptop seems to be rational and that it does not have LFW is thereby justified in believing that (at least some) computers are rational and do not possess LFW.

So on this proposal, I believe C, it is true, and I possess beliefs, A and B, which significantly raise the probability of C (by logically entailing it), and the having of A and B is why I believe C. Thus, it meets the criteria I gave above for counting as a justification for the true belief that I have.
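To make the structure vivid, here is a toy model. This is my own illustration, not anything Stratton or I have committed to in detail: beliefs are modelled as strings in a set, and a fixed deterministic rule adds the generalisation C whenever the state contains A and B, so C is formed because of its justifiers without any free choice anywhere in the process. The function name and belief strings are invented for the example.

```python
# Toy model: deterministic belief formation that still tracks justification.
# Beliefs are plain strings; the update rule is a fixed function, so the
# agent has no libertarian freedom over what it comes to believe.

def update(beliefs):
    """Deterministically add belief C whenever beliefs A and B are held.

    A and B logically entail C (generalising from one laptop to 'at least
    some computers'), so C is caused by, and justified by, A and B.
    """
    A = "my laptop seems rational"
    B = "my laptop does not have LFW"
    C = "at least some computers seem rational and lack LFW"
    new = set(beliefs)
    if A in new and B in new:
        new.add(C)  # C is added because A and B are present, and only then
    return new

state = {"my laptop seems rational", "my laptop does not have LFW"}
state = update(state)
print("at least some computers seem rational and lack LFW" in state)  # True
```

The point of the sketch is just that the causal condition and the justificatory condition are satisfied by one and the same deterministic transition.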

Now, obviously, Stratton would object here. He thinks that the criteria for justifications of p should be: that they are beliefs, that they are related to p in the right sort of way, that you believe p because you have the justifications, and also that believing p was a freely chosen action. But what reason is there to accept this additional criterion?

2. Coercion

Assume that some agent A punches some other agent B in the face. Suppose also that A has a desire to hurt B. A natural answer to the question of why A punched B (i.e. what the reason was for A’s action) is that he desired to hurt B. The action could be regarded as free, and the reason is part of why he did the free action.

However, imagine that we learned subsequently that A was a Manchurian candidate, and had been brainwashed, or hypnotised, such that given a certain trigger (maybe by seeing a woman in a polka-dot dress), then he would instinctively take a swing at B. Now, we might think, his antecedent desire to hurt B cannot really be the reason why he punched B. Given that he was compelled to do it (we might say caused to do it), by seeing the woman in the dress, that really isn’t the reason at all. Because he was coerced to do it by the brainwashing, he was not doing it because of the reason he had (his antecedent desire to hurt B). This seems plausible. And if it is right, then being coerced (or being caused) is incompatible with doing something because of having a motivation (like a desire). This could be questioned, but let’s grant it, for the sake of the argument.

Stratton does not give this sort of argument, but you could imagine that it is the sort of thing he has in mind to support his claim that a belief cannot be justified unless it is freely chosen. There does seem to be an analogy here. If the reason for an action cannot be a desire unless it is free of coercion, then maybe the justification for a belief cannot be another belief unless it is free of causation. Maybe each has to be free for it to count.

Even though there is some plausibility to the analogy to begin with, I think it is easy to start to see that the two cases are really quite different. Even if we grant that coercion completely rules out freely acting due to motivations (desires), the case where I come to believe something without willing to believe it is far less clearly problematic.

Consider a case where a mother really wants to believe that her son is innocent of a murder. She may, nevertheless, come to believe that the son is guilty during the trial, where all the evidence is presented. We could describe this situation as her being compelled to believe that her son was guilty despite her firm will not to believe it. Of course, this is not a mandatory reading, and no doubt other ways of describing the situation could be given. The point is just that this description seems far less problematic than supposing that A was both a brainwashed Manchurian candidate and also acted with a desire as his reason. Being compelled to believe p, because the evidence caused you to do so, doesn’t seem incompatible with believing p with justification in the same way. Thus, the analogy is clearly questionable.

Stratton did not offer the coercion analogy as an argument against my position. I offered it on his behalf, because I don’t think he has an argument. But to me the analogy is not plausible, because even if you grant the action case, the belief case doesn’t seem problematic in the same way. What’s true about reasons for actions is not necessarily the same as what’s true about justifications for beliefs. And because of that we would need to see an argument to the effect that the claim about beliefs is true, and not just an appeal to the action case.

3. Luck

Stratton makes the following comments a few lines later on, where he appeals to the notion of luck:

“…if determinists happen to luckily be right about determinism, then they did not come to this conclusion based on rational deliberation by weighing competing views and then freely choosing to adopt the best explanation from the rules of reason via properly functioning cognitive faculties. No, given determinism, they were forced by chemistry and physics to hold their conclusion whether it is true or not.”

So the idea here is that I could believe that determinism is true, and be correct about that (I could be “right about determinism”), but that this is just a matter of luck. He is saying that, in general, on determinism, one could believe p simply because the causal history of the world happened to be such that one holds that belief. If so, then one’s holding the belief is unrelated to whether p is true, or to what the justifications are for holding that p is true; it’s all a matter of what the causal history of the world is like and nothing more.

Three things.

Firstly, luck implies contingency. If an event is lucky, it has to be possible for it to have happened differently, or not at all. For it to be lucky that I won the lottery, it has to be actually possible that I could have lost. If I rigged the lottery so that it had to show my ticket number, then my winning is no longer a matter of luck. But on determinism, all events are necessary, because they couldn’t have happened differently. So while things might look as if they were lucky (in the sense that the rigged lottery result might look lucky), they weren’t really. And if so, then no belief that I hold is lucky.

In order to have lucky events on determinism, we need contingency. The only way to get that is if the initial conditions of the universe are themselves contingent. But if the initial conditions could have been different, then, since all subsequent events depend on them, every event is contingent: the initial conditions could have been different, leading to the event being different. And if that is all that is required for an event to count as ‘lucky’, then all events are lucky, even on determinism. And that means that even if I was caused to believe that p, and p was true, and I was caused to believe it on the basis of justifications, this would still be lucky. The question then is, if luck is cashed out like this, just how it undermines the claim that p is a justified true belief. It seems that it doesn’t.

Secondly, putting it in terms of the belief being ‘unrelated to the truth of p’ seems to beg the question against the view I have been defending here. It could be the case that the causal history of the world also includes me having the right types of beliefs, the sorts of beliefs that count as justifiers for p (such as ones which logically entail, or raise the probability a great deal that p is true), and these would be directly related to why I believe that p is true (they are part of the cause of me believing that p is true). If that is right, then it isn’t the case that my holding the belief is unrelated to whether p is true, even if it is lucky (in the sense described above).

Thirdly, Stratton says:

“…given determinism, they were forced by chemistry and physics to hold their conclusion whether it is true or not”

It seems to me that all this is saying is that on the determinist picture, it is possible to believe something false. But I could construct a parody of this, and say:

‘Given LFW, they freely choose to believe their conclusion, whether it is true or not’

After all, you could freely choose to believe something false. That shows that it is also possible on Stratton’s view to believe something false. And that means that regardless of whether the belief is determined or freely chosen, it is possible for it to be false. So whether you are caused to believe p, whether it is true or false, or freely choose to believe p, whether it is true or false, we are in the same position.

Again, this shows how irrelevant it is to bring up the freeness of the belief. What is important is the justification for the belief. If the justifications are there, then the belief can be JTB, regardless of whether it is determined or freely chosen.

4. Liars

Stratton makes another appeal:

“If you have reason to suspect a certain man is a liar, why should you believe this individual when he tells you that he is not a liar? Similarly, if we have reason to suspect we cannot freely think to infer the best explanation, why assume these specific thoughts (which are suspected of being unreliable) are reliable regarding computers?”

Thinking that someone is a liar is reason not to trust what they say to you. Fair enough. The problem is that distrusting a liar, i.e. someone with a track record of often lying, is not relevantly analogous to distrusting the inferences made by someone who is determined. It would be, of course, if we considered someone who was determined and who had a track record of making incorrect inferences. But then the track record is doing all the work, and the determinism is doing none of it. I wouldn’t trust someone with LFW who had the same track record of lying either.

The problem here is that even if you “have reason to suspect we cannot freely think to infer the best explanation”, that isn’t itself reason to conclude that they are “suspected of being unreliable”. That is, even if you have reason to think that we cannot freely infer the best explanation, that doesn’t on its own mean we cannot infer the best explanation.

What matters is if the process of belief formation takes into account the justifications for holding the belief. Whether it is a determined process or one that involves a free choice is irrelevant.

For example, think of a robot which is equipped with a mechanism that analyses a target at a firing range and processes the information it receives in such a way that it reliably hits a bullseye nine times out of ten. Even though its mechanism is deterministic, that doesn’t mean it is unreliable.

Compare the robot with a free individual, with LFW, who also hits the target nine times out of ten. The reliability of their shooting is something you evaluate by looking at their record of success, and by examining the process by which they came to hit the target. If everything else is equal (they hit the same number of targets, and the internal mechanism of the robot is relevantly similar to the way the person’s eye and brain allow them to determine where to aim the gun), then the freedom itself doesn’t play any role in our assessment of which one is more reliable.

Yet, Stratton makes the assumption that the lack of freedom is a reason to doubt reliability. He says that “if we have reason to suspect we cannot freely think to infer the best explanation” then we have reason to suspect that they cannot infer the best explanation. This seems to me to be false. A sufficiently advanced robot could reliably draw the right inferences yet not have LFW.

5. Self-refutation

Stratton says:

“…the naturalist who states that he freely thinks determinism is true is similar to one arguing that language does not exist, by using English to express that thought.”

But here, surely, the problem is that anyone who states that he (LFW-) freely thinks determinism is true is uttering a contradiction. He is saying both that he is free (in the LFW sense) and that determinism is true, and these cannot both be true. Making such a statement would be contradictory.

But, as should be clear by now, the determinist need not make such a statement. Rather than saying that they freely think that determinism is true, they should say that their belief that determinism is true is also determined. When said like that, there is no hint of self-refutation here.

He goes on:

“Until naturalists demonstrate exactly how a determined conclusion, which cannot be otherwise and is caused by nothing but physics and chemistry, can be rationally inferred and affirmed, then the rest of their argument has no teeth in its bite as it is incoherent and built upon unproven assumptions.”

I hope that by now the general idea of how this would work is clear. What has to be made explicit is that beliefs can be caused, but so long as they are caused by other beliefs (brain states, if you like), then they can still stand in the same relation to justifiers as they do on any other JTB view. So the question ‘Why do you believe that (at least some) computers seem to be rational and do not possess LFW?’ is answered by saying ‘Because I believe that my computer is rational and it does not have LFW’. That answer is true, even though there is also a story we could tell about how some bit of brain chemistry led to some other bit. If those bits of brain chemistry are beliefs, then both ways of talking are true.

This is a familiar line. Why did the Allies win the Second World War? Because Hitler overreached by invading Russia. That’s true. But, of course, there is a much more detailed story involving the precise movements of every regiment across the whole of the world. There is another story that involves the movement of all the atoms across the whole world too. All three of these are true. The fact that the much more detailed story about atoms is true doesn’t mean the others are not. Nor does it mean the others are reducible to the story about atoms (maybe they are, maybe they aren’t).

The same sort of thing is going on here. There is a story about what happens at a chemical level in my brain, and another one about what beliefs I believe on the basis of other beliefs. If naturalism is true, beliefs are something like brain states. If determinism is true, then they can cause other brain states to exist. So long as this causal chemical set of reactions is correlated reliably with inferring the best explanations, then it is as good as the LFW account.

But are they correlated in that way? Well, not by default. The actions we engage in train them up: learning to speak, going to school, reading philosophy, and so on. These sorts of things make us better at inferring the right things from our beliefs. But that can be told as a chemical causal story too. When I study, I am causing my brain to make more reliable connections more often. The pathways in my brain become entrenched in certain ways, leading to me more often getting it right. Not always, but often enough to count as being rational (rationality comes in degrees, after all). Nothing about this requires LFW. All of those actions can be deterministically caused.
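The point that a wholly deterministic process can be trained into reliability can be put in a toy sketch. This is my own illustration with invented numbers, not anything from Stratton’s article: a fixed update rule moves an estimate a set fraction of the way toward the right answer on each ‘study session’, so the process becomes reliable without any free choice anywhere in the loop.

```python
# Toy sketch: a deterministic learner whose accuracy improves with training.
# 'target' stands in for the right answer; each step deterministically moves
# the estimate a fixed fraction of the remaining distance toward it.

def train(estimate, target, steps, rate=0.5):
    """Deterministically refine an estimate toward the target."""
    for _ in range(steps):
        estimate += rate * (target - estimate)  # no choice involved anywhere
    return estimate

target = 10.0
untrained = train(0.0, target, steps=1)   # one study session: error is 5.0
trained = train(0.0, target, steps=10)    # many sessions: error ~ 0.01

# The trained process is reliably closer to the truth, purely through
# deterministic causation: the error shrinks as (1 - rate) ** steps.
print(abs(target - trained) < abs(target - untrained))  # True
```

The analogue of ‘rationality comes in degrees’ is that reliability here is a matter of how far the error has shrunk, not of whether any step was free.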

6. Conclusion

Stratton ends with this:

“If all is ultimately determined by nature, then all thoughts — including what humans think about the rationality of computers — cannot be otherwise. We are simply left assuming that our thoughts (which we are not responsible for) regarding computers are good, the best, or true. We do not have a genuine ability to think otherwise or really consider competing hypotheses at all.”

Firstly, note that now he is insisting that on determinism, our thoughts cannot be otherwise. If that’s right, then they should not be regarded as being lucky, or unlucky.

But regardless of that, he says that in that situation, we “are simply left assuming that our thoughts … regarding computers are good, the best, or true.” But as I showed here, we are not left in that situation. I could come to that conclusion because I also have other beliefs, which are relevant to that conclusion. He is saying that if we are determined, then we are left with nothing but assumptions. He is saying that if we are determined, then we cannot think about competing hypotheses and weigh options against each other. This is clearly incorrect. All we cannot do if we are determined is freely do those things. What we can do, if we are determined, is do those things.

Thus, the third premise of his argument fails. Nothing he says in his Robots and Rationality article helps, at all.

## The problem with the FreeThinking Argument Against Naturalism

0. Introduction

Tim Stratton is an apologist who runs the website FreeThinkingMinistries. He has an argument he calls the Free Thinking Argument Against Naturalism (FAAN). It works like this: ‘thinking freely’ requires libertarian free will, and this requires having a soul, and this requires that God exists; and if God exists, naturalism is false. Here is how he puts it in his article ‘The FreeThinking Argument in a Nutshell’:

1. If naturalism is true, the immaterial human soul does not exist.
2. If the soul does not exist, libertarian free will does not exist.
3. If libertarian free will does not exist, rationality and knowledge do not exist.
4. Rationality and knowledge exist.
5. Therefore, libertarian free will exists.
6. Therefore, the soul exists.
7. Therefore, naturalism is false.
8. The best explanation for the existence of the soul is God.

In this post, I will set out a quick problem with this argument.

1. Justification

The main problem, as I see it, is with premise 3. Here is what Stratton says about this:

“…it logically follows that if naturalism is true, then atheists — or anyone else for that matter — cannot possess knowledge. Knowledge is defined as “justified true belief.” One can happen to have true beliefs; however, if they do not possess warrant or justification for a specific belief, their belief does not qualify as a knowledge claim. If one cannot freely infer the best explanation, then one has no justification that their belief really is the best explanation. Without justification, knowledge goes down the drain. All we are left with is question-begging assumptions (a logical fallacy).”

Stratton uses ‘justified true belief’ as the definition of knowledge, which seems a bit out of date with how contemporary epistemology thinks about it, but let’s pass over that and just play along.

Given that he says that on naturalism “[o]ne can happen to have true beliefs”, he seems to be conceding that true beliefs are possible on naturalism, but that having justification for true beliefs is not. So the question becomes: what is it about naturalism that rules out justification? However, all he says about why we would not be able to have justification on naturalism is that:

“If one cannot freely infer the best explanation, then one has no justification that their belief really is the best explanation.”

What is going on here?

2. Determinism

Let’s play along with the idea that on naturalism, “all that exists is causally determined via the laws of nature and the initial conditions of the big bang”. This doesn’t seem to me to be required. After all, the laws of physics could be indeterministic. Naturalism (plausibly) says that there are no non-natural causes, but it doesn’t say that every state is determined by the initial state of the universe. Perhaps, as quantum theory seems to suggest, the laws of physics are indeterministic, and the evolution of the world is chancy. That might be correct, or it might not. Stipulating naturalism doesn’t on its own settle this question. But let’s grant it anyway, just to see where it goes.

The question is: on naturalism, and determinism, if I have a true belief, can I have justification for that true belief? Stratton is saying ‘no’, and his reason seems to be that this is because I “cannot freely infer the best explanation”.

But why should I have to freely infer anything? I don’t think freedom, of the type he is suggesting, is required at all. Here is how that could work.

Suppose that strict determinism is true, such that “people are nothing more than material mechanisms bound by the laws of chemistry and physics”, “bags of chemicals on bones,” or “meat robots”, certainly not possessing a soul or libertarian free will. If so, then each of our beliefs will have been caused to be in our mind (or in our brain) by some antecedent state of affairs, which was itself caused, etc etc, in a chain going back to the initial state of the universe. It is logically possible that I could have believed otherwise than I do, but really there was never any physical possibility that I was going to.

3. The Counterexample

Let us suppose that in this situation, I have the belief:

A) Tim Stratton is the author of the FAAN

It is a true belief (presumably). But can I have justification for it if naturalism and determinism are true? Let us suppose also that I have the further two true beliefs as well:

B) There are various articles and YouTube videos by Tim Stratton in which he presents the FAAN, and in which he claims to be the author of the argument.

C) Nobody would make an easily detectable false claim to authorship of an argument in so many articles and YouTube videos.

Nothing about naturalism or determinism prevents me from having these two beliefs. Perhaps they have to be merely brain states on naturalism, rather than ‘mental states’ (supposing that phrase to mean something other than brain states). Let’s suppose that as well for the sake of the argument.

It seems to me that nothing Stratton has said so far rules out the possibility that the brain states associated with me having beliefs B and C are part of the causal story involved in me having the belief A. It may be that something about the chemical reactions happening in the brain when I entertain both B and C causes me to have this belief A.

The question then would be: why doesn’t my having beliefs B and C count as justification for believing A? In other words, why isn’t my belief that Stratton authored the FAAN justified by my beliefs that he has said so many times in articles and videos, and that people generally don’t pretend to have authored arguments like that?

This seems like a perfectly coherent situation. I actually do have the belief that he is the author of the argument for more or less those very reasons. I’ve never met him; I didn’t see him write the argument; I wasn’t with him when he first thought of it. I go off the evidence I have (the articles and videos) along with my assessment of how likely they are to be reliable (based on the thought that people generally don’t completely make up authorship of arguments like that). I didn’t freely pick any of those beliefs. Reading his articles caused me to believe that he says he authored them in the articles. My experience with people also caused me to come to believe people don’t generally make up easily detectable falsehoods. On the basis of those (let’s suppose: caused by those) I came to believe he authored the argument. This seems perfectly coherent. But if so, then I can have ‘rationality and knowledge’ without libertarian free will, and thus premise 3 is false.

4. Conclusion

If Stratton thinks that this cannot be a justification, for some reason, then he has not spelled it out that I know of. Nor do I understand how that would go. To show that such a situation cannot be an instance of a justified belief, he would have to show that such a situation is impossible (cannot happen), or that it is possible but cannot count as a justification. To me it obviously can happen even granting naturalism and determinism. All it requires is the holding of true beliefs (which Stratton explicitly allows in that situation) and that beliefs can be causally related to one another. But I supposed for the argument that beliefs are simply brain states, which are physical states, and the sorts of things that “bags of chemicals on bones” or “meat robots” could have. Obviously, they could be causally related; physical states can be causally related, brain states included.

Given all that, in the counterexample I have a true belief, A, and I have relevant beliefs, B and C, and it is on the basis of having those beliefs that I believe A. What matters for whether B and C count as justifying belief A is how relevant they are to A, not whether they are causally related to my having belief A. The causal question seems irrelevant, so long as they are of the right type and I believe A because I believe them. Both of those conditions are met here, so this counts as an instance of justification. Thus, the argument is unsound.

There are many other ways one could argue against FAAN, but I wanted to present this one. It is not my argument, but comes from Peter van Inwagen, in his paper ‘C. S. Lewis’ Argument Against Naturalism’. In reality, Stratton’s argument at this point is just a rehashed version of Lewis’ argument, and fails for the same reasons.