In the last post, I explained the ‘FreeThinking Argument Against Naturalism’ (FAAN). I criticised premise 3, which was that “If libertarian free will does not exist, rationality and knowledge do not exist”. In this post, I will look at a reply Stratton gave to an objection that is somewhat similar to mine, in a post he wrote called Robots and Rationality. In the end, nothing he says helps at all.
Stratton says that some people object to premise 3 of his argument by saying that “computers seem to be rational and they do not possess libertarian free will”. Nevertheless, he thinks that he has a reply to this such that the “deductive conclusions of the Freethinking Argument remain unscathed”.
Right off the bat, Stratton begs the question against the view I outlined in the last post. Consider the very next line, in which he says:
“…simply by stating that computers are, or robots of the future could be, rational in a deterministic universe *assumes* that the determinist making this claim has, at least briefly, transcended their deterministic environment and freely inferred the best explanation (the one we ought to reach) via the process of rationality to correctly conclude that computers are, in fact, rational agents.”
But that’s wrong. The act of stating “computers seem to be rational and they do not possess libertarian free will” can be done in a deterministic universe, no problem. It doesn’t require ‘transcending the environment’. You could even make that statement in a deterministic, naturalistic universe, and have a justified true belief about it while you are doing it.
Here’s how. Assume internalism about justification, so that justifications are beliefs. That means that for me to have a justified true belief that p, the true belief needs to be supported by other beliefs, say q and r, which are the justifications for believing that p. There are two things we can also say about how the justifications need to be related to p. This is not the only way you could cash this out, but it will do for our purposes:
- They have to be related to p in the ‘right sort of way’ (arbitrary beliefs cannot be justifications), and
- It needs to be that my believing q and r is why I believe p (it’s no good for me to believe q and r, but believe p because the coin landed heads, etc).
What does it mean for q and r to be related to p in the ‘right sort of way’? This is obviously a very complicated question to answer, but we don’t have to settle it here. Let’s just consider clear cases. The relationship needs to be such that q and r provide significant support for p; they raise the probability of p, or at least the subjective assessment of that probability, for the person with the beliefs. A very clear case would be if q and r logically entailed p. Another would be if q and r raised the probability of p far above 0.5, to something like 0.9. We don’t need to worry about exactly where the line is, though, because we will consider only a clear example, one involving logical entailment, which is as paradigmatic a justification as there is. And one example is all we need to show that the principle Stratton appeals to is false. So here it is.
Assume I have two beliefs:
A) My laptop seems to be rational
B) My laptop does not have LFW
I could have those beliefs in a deterministic, naturalistic universe, no problem. These beliefs would be brain states that I have (on assumption). Let’s say that they deterministically cause me to have another brain state, which is:
C) (at least some) computers seem to be rational and they do not possess LFW
Because this belief is caused by the first two, it’s true that I only believe it because I believe the first two. Yet those two paradigmatically justify the third; they logically entail it. Anyone who believes that their laptop seems to be rational and that it does not have LFW is thereby justified in believing that (at least some) computers seem to be rational and do not possess LFW.
So on this proposal, I believe C, it is true, and I possess beliefs, A and B, which significantly raise the probability of C (by logically entailing it), and the having of A and B is why I believe C. Thus, it meets the criteria I gave above for counting as a justification for the true belief that I have.
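The entailment from A and B to C is nothing exotic; it is just existential introduction. For concreteness, here is a minimal sketch in Lean 4 (all the names — `Computer`, `myLaptop`, `seemsRational`, `hasLFW` — are hypothetical, introduced only for illustration):

```lean
-- A minimal sketch: from "my laptop is F" we can derive
-- "some computer is F" by existential introduction.
variable (Computer : Type)
variable (myLaptop : Computer)
variable (seemsRational hasLFW : Computer → Prop)

-- Beliefs A and B as hypotheses; belief C follows immediately.
example (hA : seemsRational myLaptop) (hB : ¬ hasLFW myLaptop) :
    ∃ c : Computer, seemsRational c ∧ ¬ hasLFW c :=
  ⟨myLaptop, hA, hB⟩
```

Nothing about checking this inference requires ‘transcending’ anything; the conclusion follows from the premises whether the believer arrived at it freely or was caused to.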
Now, obviously, Stratton would object here. He thinks the criteria for justifications of p should be that they are beliefs related to p in the right sort of way, that you believe p because you have them, and, additionally, that believing p was a freely chosen action. But what reason do we have to accept this additional criterion?
Assume that some agent A punches some other agent B in the face. Suppose also that A has a desire to hurt B. A natural answer to the question of why A punched B (i.e. what the reason was for A’s action) is that he desired to hurt B. The action could be regarded as free, and the reason is part of why he did the free action.
However, imagine we subsequently learned that A was a Manchurian candidate who had been brainwashed, or hypnotised, such that given a certain trigger (say, seeing a woman in a polka-dot dress) he would instinctively take a swing at B. Now, we might think, his antecedent desire to hurt B cannot really be the reason why he punched B. Given that he was compelled to do it (we might say caused to do it) by seeing the woman in the dress, the desire really isn’t the reason at all; because he was coerced by the brainwashing, he was not acting because of the reason he had. This seems plausible. And if it is right, then being coerced (or being caused) is incompatible with doing something because of having a motivation (like a desire). This could be questioned, but let’s grant it for the sake of the argument.
Stratton does not give this sort of argument, but you could imagine that it is the sort of thing he has in mind to support his claim that a belief cannot be justified unless it is freely chosen. There does seem to be an analogy here. If the reason for an action cannot be a desire unless it is free of coercion, then maybe the justification for a belief cannot be another belief unless it is free of causation. Maybe each has to be free for it to count.
Even though there is some plausibility to the analogy to begin with, I think it is easy to start to see that the two cases are really quite different. Even if we grant that coercion completely rules out freely acting due to motivations (desires), the case where I come to believe something without willing to believe it is far less clearly problematic.
Consider a mother who really wants to believe that her son is innocent of a murder. She may, nevertheless, come to believe that he is guilty during the trial, where all the evidence is presented. We could describe this as her being compelled to believe that her son is guilty despite her firm will not to. This is not a mandatory reading, of course, and no doubt the situation could be described in other ways as well. The point is just that this description seems far less problematic than supposing that A was both a brainwashed Manchurian candidate and also acted with a desire as his reason. Being compelled to believe p because the evidence caused you to do so doesn’t seem incompatible with believing p with justification in the same way. Thus, the analogy is clearly questionable.
Stratton did not offer the coercion analogy as an argument against my position. I offered it on his behalf, because I don’t think he has an argument. But to me the analogy is not plausible, because even if you grant the action case, the belief case doesn’t seem problematic in the same way. What’s true about reasons for actions is not necessarily the same as what’s true about justifications for beliefs. And because of that we would need to see an argument to the effect that the claim about beliefs is true, and not just an appeal to the action case.
Stratton makes the following comments a few lines later on, where he appeals to the notion of luck:
“…if determinists happen to luckily be right about determinism, then they did not come to this conclusion based on rational deliberation by weighing competing views and then freely choosing to adopt the best explanation from the rules of reason via properly functioning cognitive faculties. No, given determinism, they were forced by chemistry and physics to hold their conclusion whether it is true or not.”
So the idea here is that I could believe that determinism is true, and be correct about that (I could be “right about determinism”), but that this is just a matter of luck. He is saying that, in general, on determinism, I could believe p simply because the causal history of the world happened to be such that I hold that belief. If so, then my holding the belief is unrelated to whether p is true, or to what the justifications are for holding that p is true; it’s all a matter of what the causal history of the world is like and nothing more.
Firstly, luck implies contingency. If an event is lucky, it has to be possible for it to have happened differently, or not at all. For it to be lucky that I won the lottery, it has to be actually possible that I could have lost. If I rigged the lottery so that it had to show my ticket number, then my winning is no longer a matter of luck. But on determinism, all events are necessary, because they couldn’t have happened differently. So while things might look as if they were lucky (in the sense that the rigged lottery result might look lucky), they weren’t really. And if so, then no belief that I hold is lucky.
In order to have lucky events on determinism, we need contingency. The only way to get that is if the initial conditions of the universe are themselves contingent. But if the initial conditions could have been different then, since all subsequent events depend on them, every event is contingent: the initial conditions could have been different, leading to the event being different. And if that is all it takes for an event to count as ‘lucky’, then all events are lucky, even on determinism. And that means that even if I was caused to believe that p, and p was true, and I was caused to believe it on the basis of justifications, this would still be lucky. The question, then, is how luck, cashed out like this, undermines the claim that p is a justified true belief. It seems that it doesn’t.
Secondly, putting it in terms of the belief being ‘unrelated to the truth of p’ seems to beg the question against the view I have been defending here. It could be the case that the causal history of the world also includes me having the right types of beliefs, the sorts of beliefs that count as justifiers for p (such as ones which logically entail, or raise the probability a great deal that p is true), and these would be directly related to why I believe that p is true (they are part of the cause of me believing that p is true). If that is right, then it isn’t the case that my holding the belief is unrelated to whether p is true, even if it is lucky (in the sense described above).
Thirdly, Stratton says:
“…given determinism, they were forced by chemistry and physics to hold their conclusion whether it is true or not”
It seems to me that all this is saying is that on the determinist picture, it is possible to believe something false. But I could construct a parody of this, and say:
‘Given LFW, they freely choose to believe their conclusion, whether it is true or not’
After all, you could freely choose to believe something false. That shows that it is also possible on Stratton’s view to believe something false. And that means that regardless of whether the belief is determined or freely chosen, it is possible for it to be false. So whether you are caused to believe p, whether it is true or false, or freely choose to believe p, whether it is true or false, we are in the same position.
Again, this shows how irrelevant it is to bring up the freeness of the belief. What is important is the justification for the belief. If the justifications are there, then the belief can be JTB, regardless of whether it is determined or freely chosen.
Stratton makes another appeal:
“If you have reason to suspect a certain man is a liar, why should you believe this individual when he tells you that he is not a liar? Similarly, if we have reason to suspect we cannot freely think to infer the best explanation, why assume these specific thoughts (which are suspected of being unreliable) are reliable regarding computers?”
Thinking that someone is a liar is a reason not to trust what they say to you. Fair enough. The problem is that distrusting a liar, meaning someone with a track record of often lying, is not relevantly analogous to distrusting the inferences made by someone who is determined. It would be, of course, if we considered someone who was determined and who had a track record of making incorrect inferences. But then the track record is doing all the work, and the determinism is doing none of it. I wouldn’t trust someone with LFW who had the same track record of lying either.
The problem here is that even if you “have reason to suspect we cannot freely think to infer the best explanation”, that isn’t itself reason to conclude that they are “suspected of being unreliable”. That is, even if you have reason to think that we cannot freely infer the best explanation, that doesn’t on its own mean we cannot infer the best explanation.
What matters is if the process of belief formation takes into account the justifications for holding the belief. Whether it is a determined process or one that involves a free choice is irrelevant.
For example, think of a robot which is equipped with a mechanism that analyses a target at a firing range and processes the information it receives in such a way that it reliably hits a bullseye nine times out of ten. Even though its mechanism is deterministic, that doesn’t mean it is unreliable.
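To make the point vivid, here is a toy simulation (entirely hypothetical; the function name and numbers are mine, not Stratton’s). The shooter’s mechanism is fully deterministic — the same seed always produces the exact same sequence of shots — and yet its long-run hit rate is reliably around nine in ten:

```python
import random

def deterministic_robot_shots(n_shots: int, accuracy: float = 0.9,
                              seed: int = 42) -> float:
    """Simulate a fully deterministic shooter. Given the same seed,
    it produces exactly the same sequence of hits on every run, yet
    its long-run hit rate sits close to `accuracy`."""
    rng = random.Random(seed)  # fixed seed: the whole sequence is determined
    hits = sum(1 for _ in range(n_shots) if rng.random() < accuracy)
    return hits / n_shots

rate_1 = deterministic_robot_shots(10_000)
rate_2 = deterministic_robot_shots(10_000)
assert rate_1 == rate_2  # determinism: same inputs, same outputs, every time
print(f"hit rate: {rate_1:.2f}")  # reliably close to 0.9
```

Determinism and reliability come apart as properties: the run is fixed in advance, but the mechanism still reliably hits the target.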
Compare the robot with a free individual, with LFW, who also hits the target nine times out of ten. The reliability of their shooting is something you evaluate by looking at their record of success, and by examining the process by which they came to hit the target. If everything else is equal (they hit the same number of targets, and the internal mechanism of the robot is relevantly similar to the way the person’s eye and brain allow them to determine where to aim the gun), then the freedom itself doesn’t play any role in our assessment of which one is more reliable.
Yet Stratton assumes that the lack of freedom is itself a reason to doubt reliability. He says that “if we have reason to suspect we cannot freely think to infer the best explanation” then we have reason to suspect that we cannot infer the best explanation at all. This seems to me to be false. A sufficiently advanced robot could reliably draw the right inferences yet not have LFW.
Stratton also says:
“…the naturalist who states that he freely thinks determinism is true is similar to one arguing that language does not exist, by using English to express that thought.”
But here, surely, the problem is that anyone who states that he (LFW-) freely thinks determinism is true is uttering a contradiction. They are saying both that they are free (in the LFW sense) and that determinism is true, and surely these cannot both be true.
But, as should be clear by now, the determinist need not make such a statement. Rather than saying that they freely think that determinism is true, they should say that their belief that determinism is true is also determined. When said like that, there is no hint of self-refutation here.
He goes on:
“Until naturalists demonstrate exactly how a determined conclusion, which cannot be otherwise and is caused by nothing but physics and chemistry, can be rationally inferred and affirmed, then the rest of their argument has no teeth in its bite as it is incoherent and built upon unproven assumptions.”
I hope that by now the general idea of how this would work is clear. What has to be made explicit is that beliefs can be caused, but so long as they are caused by other beliefs (brain states, if you like), they can still stand in the same relation to justifiers as they do on any other JTB view. So the question ‘Why do you believe that (at least some) computers seem to be rational and do not possess LFW?’ is answered by saying ‘Because I believe that my computer seems to be rational and that it does not have LFW’. That answer is true, even though there is also a story we could tell about how some bit of brain chemistry led to some other bit. If those bits of brain chemistry are beliefs, then both ways of talking are true.
This is a familiar line. Why did the Allies win the Second World War? Because Hitler overreached by invading Russia. That’s true. But, of course, there is a much more detailed story involving the precise movements of every regiment across the whole of the world. There is another story that involves the movement of all the atoms across the whole world too. All three of these stories are true. The fact that the much more detailed story about atoms is true doesn’t mean the others are not. Nor does it mean the others are reducible to the story about atoms (maybe they are, but maybe they aren’t).
The same sort of thing is going on here. There is a story about what happens at a chemical level in my brain, and another one about what beliefs I believe on the basis of other beliefs. If naturalism is true, beliefs are something like brain states. If determinism is true, then they can cause other brain states to exist. So long as this causal chemical set of reactions is correlated reliably with inferring the best explanations, then it is as good as the LFW account.
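Here is a toy sketch of what I mean, assuming nothing beyond the post’s own A/B/C example (the propositions and the rule are illustrative, not a model of any real brain): belief formation modelled as a purely deterministic state transition that, at the same time, tracks logical entailment.

```python
# Toy model: "brain states" are sets of believed propositions, and a
# deterministic update rule causes new beliefs from old ones. The same
# transition can be described causally (rule firing) or rationally
# (the conclusion is entailed by the premises).

RULES = [
    # (premises, conclusion): if all premises are believed, the
    # conclusion is deterministically caused to be believed too.
    ({"my laptop seems rational", "my laptop lacks LFW"},
     "some computers seem rational and lack LFW"),
]

def update(beliefs: frozenset) -> frozenset:
    """One deterministic step: add every conclusion whose premises
    are already among the current beliefs."""
    new = set(beliefs)
    for premises, conclusion in RULES:
        if premises <= beliefs:
            new.add(conclusion)
    return frozenset(new)

start = frozenset({"my laptop seems rational", "my laptop lacks LFW"})
end = update(start)
assert "some computers seem rational and lack LFW" in end
```

The causal story (“the A and B states triggered the rule”) and the rational story (“C follows from A and B”) describe the very same transition; nothing in it requires a free choice.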
But are they correlated in that way? Well, not by default. They get trained up by the actions we engage in: learning to speak, going to school, reading philosophy, and so on. These sorts of things make us better at inferring the right things from our beliefs. But that can be told as a chemical causal story too. When I study, I am causing my brain to make more reliable connections more often. The pathways in my brain become entrenched in certain ways, leading me to get it right more often. Not always, but often enough to count as being rational (rationality comes in degrees, after all). Nothing about this requires LFW. All of those actions can be deterministically caused.
Stratton ends with this:
“If all is ultimately determined by nature, then all thoughts — including what humans think about the rationality of computers — cannot be otherwise. We are simply left assuming that our thoughts (which we are not responsible for) regarding computers are good, the best, or true. We do not have a genuine ability to think otherwise or really consider competing hypotheses at all.”
Firstly, note that now he is insisting that on determinism, our thoughts cannot be otherwise. If that’s right, then they should not be regarded as being lucky, or unlucky.
But regardless of that, he says that in that situation, we “are simply left assuming that our thoughts … regarding computers are good, the best, or true.” But as I showed here, we are not left in that situation. I could come to that conclusion because I also have other beliefs, which are relevant to that conclusion. He is saying that if we are determined, then we are left with nothing but assumptions. He is saying that if we are determined, then we cannot think about competing hypotheses and weigh options against each other. This is clearly incorrect. All we cannot do if we are determined is freely do those things. What we can do, if we are determined, is do those things.
Thus, the third premise of his argument fails. Nothing he says in his Robots and Rationality article helps, at all.