The Fine-Tuning Argument and the Base Rate Fallacy.

0. Introduction

The Fine-Tuning Argument is used by many apologists, such as William Lane Craig, and is a common part of the contemporary apologetical repertoire. However, I argue that it provides no reason to think that the universe was designed. One does not need to look at the actual physics in much detail, and almost the whole setup can be conceded to the apologist. The objection is a version of the base-rate fallacy. From relatively simple considerations, it is clear that relevant variables are being left out of the equation, which makes the overall probability impossible to assess.

The Fine-Tuning Argument starts with an observation about the values of various parameters in physics, such as the speed of light, the Planck constant, the mass of the electron, etc. The idea is that they are all delicately balanced, such that if one were to be changed by even a very small amount, this would radically alter the properties of the universe. Here is how Craig explains the point, in relation to the gravitational constant:

“If the gravitational constant had been out of tune by just one of these infinitesimally small increments, the universe would either have expanded and thinned out so rapidly that no stars could form and life couldn’t exist, or it would have collapsed back on itself with the same result: no stars, no planets, no life.” (Quote taken from here)

This phenomenon of ‘fine-tuning’ requires explanation, and Craig thinks that there are three possible types of explanation: necessity, chance or design.

Craig rules out necessity by saying:

“Is a life-prohibiting universe impossible? Far from it! It’s not only possible; it’s far more likely than a life-permitting universe. The constants and quantities are not determined by the laws of nature. There’s no reason or evidence to suggest that fine-tuning is necessary.” (ibid)

Chance is ruled out by the following:

“The probabilities involved are so ridiculously remote as to put the fine-tuning well beyond the reach of chance.” (ibid)

The only option that seems to be left on the table is design.

So the structure of the argument is as follows (where f = ‘There is fine-tuning’, n = ‘Fine-tuning is explained by necessity’, c = ‘Fine-tuning is explained by chance’, and d = ‘Fine tuning is explained by design’):

  1. f
  2. f → (n ∨ c ∨ d)
  3. ~n
  4. ~c
  5. Therefore, d.

1. Tuning

It seems from what we currently know about physics that there are about 20 parameters which are finely tuned in our universe (if the number is not exactly 20, this doesn’t matter – for what follows I will assume that it is 20). For the sake of clarity, let’s just consider one of these, and assume that its possible values form a range similar to a section of the real number line. This would make it somewhat like radio-wave frequencies. Then the ‘fine-tuning’ result that Craig is referring to has a nice analogy: our universe is a ‘radio station’ which broadcasts on only an extremely narrow range. This range is so narrow that if the dial were moved even a tiny amount, the coherent music being broadcast would become nothing but white noise. That our universe is finely balanced like this is the result that has come out of physics.

It is important to realise that this fine-tuning is logically compatible with there being other radio stations which one could ‘tune into’. Imagine I tune my radio into a frequency which is broadcasting some music, and that it is finely-tuned, so that if I were to nudge the dial even a tiny amount it would become white noise; from that it does not follow that there aren’t other radio stations I could tune into.

It is plausible (although I don’t know enough physics to know) that if one varied only one of the 20 or so parameters, such as gravity, to any extent (not just a small amount), but kept all the others fixed, then the result would be nothing other than white noise. Maybe, if you hold all 19 other values fixed, every other possible value for gravity results in noise. However, it doesn’t follow from this fact (if it is a fact at all) that there is no combination of all the values which results in a coherent structure. It might be that changing both gravity and the speed of light, and keeping all the others fixed, somehow results in a different, but equally coherent, universe.

In mathematics, a Lissajous figure is a graph of a system of parametric equations. These can be displayed on oscilloscopes, and lead to various rather beautiful patterns. Without going into any of the details (which are irrelevant), the point is that by varying the ratio of the two values (X and Y), one produces different patterns. Some combinations of values produce ordered geometrical structures, like lines or circles, while others produce what looks like a messy scribble. There are ‘pockets’ of order, which are divided by boundaries of ‘chaos’. This could be what the various combinations of values for the 20 physical parameters are like.
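To make the analogy a little more concrete, here is a minimal Python sketch of a Lissajous curve (not from the original discussion; the frequency values 3 and 2 and the phase are arbitrary choices for illustration). The point is that whenever the two frequencies stand in a whole-number ratio, the curve closes up and repeats the same ordered figure, a ‘pocket of order’:

```python
import math

def lissajous_point(t, a=3, b=2, delta=math.pi / 2):
    """A point on the Lissajous curve x = sin(a*t + delta), y = sin(b*t)."""
    return (math.sin(a * t + delta), math.sin(b * t))

# With integer frequencies the curve is closed: it repeats with period 2*pi,
# tracing the same ordered figure over and over again.
p0 = lissajous_point(0.37)
p1 = lissajous_point(0.37 + 2 * math.pi)
assert math.isclose(p0[0], p1[0], abs_tol=1e-9)
assert math.isclose(p0[1], p1[1], abs_tol=1e-9)
```

Irrational frequency ratios, by contrast, never close up, which is the ‘scribble’ case.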

Fine-tuning says that immediately on either side of the precise values that these parameters have in our universe, there is ‘white noise’. But it does not say that there are no other combinations of values that give rise to pockets of order just as complex as ours. It says nothing about that at all.

2. The problem of fine-tuning 

It might be replied that there could be a method for determining whether there are other pockets of order out there or if it is just white noise everywhere apart from these values, i.e. whether there are other radio stations than the one we are listening to or not. And maybe there is such a method in principle. However, it seems very unlikely that we have anything approaching it at the moment. And here the fineness of the fine-tuning turns back against the advocate of the fine-tuning argument. Here’s why it seems unlikely we will be able to establish this any time soon.

We are given numbers which are almost impossible to imagine for how unlikely the set of values we have would be if arrived at by chance. Craig suggests that if the gravitational constant were altered by one part in 10 to the 60th power (that’s a 1 followed by 60 zeros), then the universe as we know it would not exist. That’s a very big number. If each of the 20 parameters were this finely tuned, then each additional parameter would multiply this improbability by another factor of the same size. The mind recoils at how unlikely that is. This is part of the point of the argument, and why it seems like fine-tuning requires an explanation.
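Just to illustrate the arithmetic (the figures are the assumed ones from above, and multiplying them together assumes the tunings are independent, which is itself an idealisation):

```python
from fractions import Fraction

# Illustrative only: assume each of 20 parameters is fine-tuned to
# one part in 10**60, and that the tunings are independent.
per_parameter = Fraction(1, 10**60)
combined = per_parameter ** 20

# Each extra parameter multiplies the improbability by another 10**60,
# giving 1 in 10**1200 overall under these assumed numbers.
assert combined == Fraction(1, 10**1200)
```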

However, this is also a measure of how difficult it would be to find an alternative pocket of order in the sea of white noise. Imagine turning the dial of your radio trying to find a finely-tuned station, where turning the dial one part in 10 to the 60th power too far would make you miss it. The chances are that you would roll right past it without realising it was there. This is Craig’s whole point: it would be very easy to scan through the frequencies and miss it. But if you wanted to make the case that there could be no other coherent combination of values for the parameters, you would have to be sure you had not accidentally scrolled past one of these pockets of coherence when you did whatever you did to rule them out. The very fineness of the fine-tuning makes the prospect of ruling out other pockets of coherence in the sea of noise almost impossible. It would be like trying to find a needle in 10 to the 60th power haystacks. Maybe there is a method for doing that, but it seems incredibly hard. The bigger the numbers the apologist cites for the magnitude of the fine-tuning, the more difficult it becomes to rule out other possible coherent combinations of values out there somewhere.

Thus, it seems like the prospects of discovering a fine-tuned pocket of coherence in the sea of white noise are extremely slim. But this just means that it seems almost impossible to rule out the possibility that there is such an additional pocket of coherence hidden away somewhere.

Think about it from the other side. If things had gone differently, and the values of the parameters had been set differently, then there might be some weird type of alien trying to figure out if there were other pockets of coherence in the range of possible values for the parameters, and they would be extremely unlikely to find ours, precisely because ours (as Craig is so keen to express) is so delicately balanced. Thus the fine-tuning comes back to haunt the apologist here.

We have a pretty good understanding of what the values for the parameters are for our universe, although this is obviously the sort of thing that could (and probably will) change as our understanding deepens. But I do not think that we have a good understanding of what sort of universe would result throughout all the possible variations of values to the parameters. It is one thing to be able to say that immediately on either side of the values that our universe has there is white noise, and quite another to be able to say that there is no other pocket of coherence in the white noise anywhere.

The fine-tuning result is like finding out that you vote for party X while your immediate neighbours on either side vote for party Y. You might be the only person in the whole country who votes for party X, but that doesn’t follow just from the fact that your neighbours vote differently.

If the above string of reasoning is correct, then for all the fine tuning result shows, there may be pockets of coherence all over the range of possible values for the parameters. There are loads of possible coherent Lissajous figures between the ‘scribbles’, and this might be how coherent universes are distributed against the white noise. There could be trillions of different combinations of values for the parameters which result in a sort of coherent universe, for all we know. And the magnitude of the numbers which the apologist wants to use to stress how unlikely it is that this very combination would come about by chance, is also a measure of how difficult it would be to find one if it were there.

3. The meaning of ‘life’

It seems that if the above reasoning is right, then other pockets of coherence are at least epistemically possible (i.e. possible for all we know). Let’s assume, just for simplicity, that there are at least some such alternative ways the parameters could be set which result in universes as stable and coherent as ours. Let’s also suppose that these are all as finely tuned as our universe is. For all we know, this is actually the case. But if it is the case, then it suggests a distinction between a universe that is finely tuned, and one that is finely tuned for life. We might think that those other possible universes would be finely tuned, but not finely tuned for life, because we could not exist in those universes. We are made of matter, which could not exist in those circumstances. It might be that something else which is somehow a bit like matter exists in those universes, but it would not be matter as we know it. Those places are entirely inhospitable to us.

But this doesn’t mean that they are not finely-tuned for life. It just means that they are not finely-tuned for us. The question we should really be addressing is whether anything living could exist in those universes.

Whether this is possible, of course, depends on precisely what we mean by ‘life’. This is obviously a contentious issue, but it seems to me that there are two very broad ways we could approach the issue, which are relevant for this discussion. Let’s call one ‘wide’ and one ‘narrow’.

Here is an example of a wide definition of ‘life’. For the sake of argument, let’s say that living things all have the following properties:

  • The capacity for growth
  • The capacity for reproduction
  • Some sort of functional interaction with their environment, possibly intentional

No doubt, there will be debate over the conditions that could be added to, or removed from, this very partial and over-simplified list, and the details do not matter here. However, just note one thing about this list: none of these properties requires the parameters listed in the usual presentations of the fine-tuning argument to take any particular value. So long as an entity can grow, reproduce and interact with its environment, then it is living, regardless of whether it is made of atoms or some alien substance, such as schmatoms. Thus, on such a ‘wide’ definition of ‘life’, there is no a priori reason why ‘life’ could not exist in other universes, even if we couldn’t.

On the other hand, we might define ‘life’ in terms of something which is native to our universe, such as carbon molecules, or DNA. If, for example, the gravitational constant were even slightly different to how it is, then DNA could not exist. Thus, if life has to be made of DNA, then life could not exist in any pocket of coherence in the sea of white noise apart from ours.

So there are two ways of answering the question of whether an alternative set of values to the parameters which resulted in a coherent universe could support life – a wide and a narrow way. On the wide view the answer seems to be ‘yes’, and on the narrow view the answer is definitely ‘no’.

It seems to me that there is very little significance to the narrow answer. On that view, the universe is fine-tuned for life, but only because ‘life’ is defined in terms of something which is itself tied to the physical fine-tuning of the universe. The meaning of ‘life’ piggy-backs on the fine-tuning of the physical variables. And this makes it kind of uninteresting. The same reasoning means that the universe is fine-tuned for gold as well as life, because the meaning of ‘gold’ is also tied to specific things which exist only because of the values of the variables, i.e. atoms and nuclei, etc. Thus, if we want to say ‘fine-tuned for life’ and have that mean something other than just ‘fine-tuned’, then we should opt for the wide view, not the narrow one.

But then if we go for the wide view, we are faced with another completely unknown variable. Just as we have no idea how many other potential pockets of coherence there may be in the sea of white noise, we also have no idea how many of them could give rise to something which answers to a very wide definition of ‘life’. It might be that there are trillions of hidden pockets of coherence, and that they are all capable of giving rise to life. We just have no information about that whatsoever.

4. Back to the argument

What the preceding considerations show is that the usual arguments taken to rule out the ‘chance’ explanation are missing something very important to the equation. I completely concede that our universe is extremely finely-tuned, to the extent that Craig explains. This means that if the values of the parameters were changed even a tiny amount, then we could not exist. However, because we don’t have any idea whether other combinations of values to those parameters would result in coherent universes, which may contain ‘life’, we have no way of saying that the chances of a universe happening with life in it are small if the values of these parameters were determined randomly. It might be that in 50% of the combinations there is sufficient coherence for life to be possible. It might be 90% for all we know. Even if it were only 1%, that is not very unlikely. Things way less likely happen all the time. But the real point is that without knowing these extra details, the actual probability is simply impossible to assess. Merely considering how delicately balanced our universe is does not give us the full picture. Without the extra distributions (such as how many possible arrangements give rise to coherent universes, and how many of those give rise to life) we are completely in the dark about the overall picture.

This makes the argument an instance of the base-rate fallacy. The example on Wikipedia is the following:

“A group of police officers have breathalyzers displaying false drunkenness in 5% of the cases in which the driver is sober. However, the breathalyzers never fail to detect a truly drunk person. One in a thousand drivers is driving drunk. Suppose the police officers then stop a driver at random, and force the driver to take a breathalyzer test. It indicates that the driver is drunk. We assume you don’t know anything else about him or her. How high is the probability he or she really is drunk?”

Because the ‘base-rate’ of drunk drivers is far lower than the test’s error rate, a driver who tests positive is far more likely to be a false positive than to be actually drunk. In every 1000 drivers tested there is, on average, one drunk driver, and (because of the 5% error rate applied to the 999 sober drivers) about 49.95 false positives. So the chance that a positive result picks out the one actually drunk driver is 1 in 50.95, which is roughly a probability of 0.02. Without the information about the base rate, we could be fooled into thinking that a positive test meant a 0.95 probability of drunkenness, whereas it actually means about 0.02.
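The calculation is just Bayes’ theorem, and can be sketched in a few lines of Python (not part of the original example, just a check of the arithmetic; the parameter names are mine):

```python
def posterior(base_rate, false_positive_rate, true_positive_rate=1.0):
    """P(drunk | positive test) by Bayes' theorem."""
    p_positive = (true_positive_rate * base_rate
                  + false_positive_rate * (1 - base_rate))
    return true_positive_rate * base_rate / p_positive

# The breathalyzer example: base rate 1 in 1000, 5% false positives,
# and the test never misses a truly drunk driver.
p = posterior(base_rate=1 / 1000, false_positive_rate=0.05)
assert abs(p - 1 / 50.95) < 1e-12  # about 0.0196, i.e. roughly 1 in 51
```

Changing `base_rate` swings the answer dramatically, which is exactly the point: the test’s 5% error rate on its own tells you almost nothing.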

With the fine-tuning argument we have a somewhat similar situation. We know that our universe is very delicately balanced, and we know that we could not exist if things were even slightly different. But because we effectively lack the base-rate of how many other possible combinations of values give rise to different types of life, we have no idea how unlikely it is that some such situation suitable for life could have arisen, as it were, by chance. As the above example shows, this rate can massively swing the end result.

5. Conclusion

The fine-tuning of the universe is a fact. This does not show that the universe is fine-tuned for life though. It also does not show that the universe must have been designed. It is impossible to know what the chances are that this universe happened ‘by chance’, because we do not have any idea about the relevant base-rate of coherent and (widely defined) life-supporting universes there could be. Thus, we have no idea if we can rule out the chance hypothesis, because we have no idea what the chances are without the information about the base rate.

Does the scientific method rest on the fallacy of affirming the consequent?

0. Introduction

There have been some rather strange suggestions from certain apologists recently about the nature of the scientific method, such as here and here. Prime among the criticisms is the claim that the scientific method rests on a fallacy called ‘affirming the consequent’. However, this is a strange claim for various reasons. Firstly, the criticism doesn’t engage with how philosophers of science actually talk about the scientific method. From around 1960, with the work of Thomas Kuhn, attempts at summing up the scientific method in a simple inferential procedure have been largely abandoned. It is now widely accepted in the philosophy of science that there is no one simple pattern of reasoning that completely captures the scientific method – a phenomenon known as the ‘demarcation problem‘. So there is no simple logical model of inference which completely covers everything in the scientific method. But this means that there is no simple model of fallacious inference which completely characterises the scientific method either. In short, the scientific method is too complex to be reduced to a simple informal fallacy.

However, if we pretend that this Kuhnian sea-change had not taken place, then we would most naturally associate the scientific method with the notion of inductive inference, with evidence being given in support of hypotheses (or theories).  However, induction is not guilty of the fallacy of affirming the consequent, as I shall show here.

After this wander through induction, I will try to explain what motivates the apologetical critique, and how it misses the mark by failing to appreciate that scientific advances are often made through falsification rather than verification.

1. Affirming the consequent

The fallacy of affirming the consequent is any argument of the following form:

  1. If p, then q
  2. q
  3. Therefore, p

The inference from the premises to the conclusion is invalid, because it could be that the premises are true and the conclusion is false. For example, if p is false and q is true, then the premises are true and the conclusion is false. If you want a proof of this, let me know and I will provide it in the comments.

The reason it is a fallacy to use affirming the consequent is just that the argument is deductively invalid. The lesson is this: if you have a true conditional, then you cannot derive the truth-value of the antecedent from the truth of the consequent.
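Since the argument form involves only two atomic sentences, its invalidity can be checked mechanically by running through the whole truth table. Here is a quick sketch in Python (purely illustrative, not part of the original post):

```python
from itertools import product

def implies(p, q):
    # Material conditional: 'if p then q' is false only when p is true and q false.
    return (not p) or q

# Affirming the consequent: premises {p -> q, q}, conclusion p.
# The form is invalid iff some row makes both premises true and the conclusion false.
counterexamples = [(p, q) for p, q in product([True, False], repeat=2)
                   if implies(p, q) and q and not p]

assert counterexamples == [(False, True)]  # p false, q true refutes the form
```

The single counterexample row (p false, q true) is exactly the case mentioned above.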

2. Science affirming the consequent

The idea that the scientific method commits the fallacy above can be explained very easily. We might think that theories make predictions. This can be thought of as a conditional, where the theory is the antecedent and the prediction is the consequent: if the theory is true, then something else should be true as well. So, take a scientific hypothesis (such as ‘evolution is true’, or whatever), and a prediction that the theory makes (‘there will be bones of ancient creatures buried in the ground’, etc). Here we have the conditional:

If evolution is true, then there will be bones of ancient creatures in the ground.

Now we make a measurement, let’s say by digging in the ground to see if there are any bones there, and let’s say we find some bones. So the consequent of our conditional is true. The claim by the apologists is that when a scientist uses this measurement as support for the hypothesis, they are committing the fallacy of affirming the consequent, as follows:

  1. If evolution is true, there will be bones of ancient creatures in the ground
  2. There are bones of ancient creatures in the ground.
  3. Therefore, evolution is true.

This is the sort of reasoning that is being alleged to be constitutive of the scientific method, and, as it is stated here, it is an example of affirming the consequent.

The problem with this line of thinking is not that the argument isn’t fallacious (it clearly is), but that it is not what goes on in science.

3. Induction

In 1620, Francis Bacon published a work of philosophy called the ‘Novum Organum‘ (or ‘new tool’), in which he proposed a different type of methodology for science than the classical Aristotelian model that came before (Aristotle’s collected scientific and logical works had been gathered together under the title ‘Organon‘). One way of characterising the Aristotelian method is that one does science by applying deductive syllogistic logic to ‘first principles’ (which are synthetic truths about the world). An example of this sort of first principle in Aristotelian physics might be that all things seek their natural place. It is of the nature of earth to seek to be down, and air to seek to be up, etc. This is, supposedly, why rocks fall to the ground, and why bubbles rise to the surface of water.

Part of Bacon’s dissatisfaction with this idea is that it provides no good way of discovering what the first principles themselves are; it just tells us what to do once you have them. Aristotle’s own ideas about how one discovers first principles are not entirely clear, but it seems that he thinks it is some kind of rational reflection on the nature of things which gets us this knowledge. Regardless, Bacon’s new method was intended to improve on just that, and is explicitly designed as a method for finding out what the features of the world actually are, of discovering these synthetic truths about the world. His precise version of it is a bit idiosyncratic, but essentially he advocated the method of induction.

Without going into the details of Bacon’s method, the idea is that he was making careful observations about the phenomenon he wanted to investigate, say the nature of heat, trying to find something that was common to all the examples of heat. After enough investigation the observation of a common element begins to be reasonably considered as not just a coincidence but as constitutive of the phenomenon under question. (He famously carried out just such an investigation into the nature of heat and concluded that it was ‘the expansive motion of parts’, which is actually pretty close to the modern understanding of it.)

In other words, starting with a limited number of observations of a trend, we move to the tentative conclusion that the trend is in fact indicative of a law. So the general pattern of reasoning would be that we move like this:

  1. All observed a‘s are G
  2. Therefore, all a‘s are G

The qualification of ‘all observed’ in premise 1 does most of the work in this argument. Obviously, just observing one a to be G would not count as much support for the conclusion. Technically, that one a would still constitute ‘all observed’ a‘s, but it wouldn’t provide much reason to think that the conclusion is true. In order for the inductive inference to have any force, one must try to seek out a’s and carefully test them appropriately to see if they are always G’s. One must do an investigation.

So if we make a careful and concerted effort to investigate all the a’s we can, and each a we come across happens to be G, then as the cases increase, we will become increasingly confident that the next a will be G (because we are becoming increasingly confident that all a‘s are G). This is inductive inference.

With an inductive argument of this form, it has to be remembered that the conclusion does not follow from the premises with deductive certainty. Rather than establish the conclusion as a matter of logical consequence from the truth of the premises, an inductive argument makes a weaker claim; namely that the truth of the premises supports the truth of the conclusion; the truth of the premises provides a justification for thinking that the conclusion is true, but not a logically watertight one. Even the best inductive argument will always be one in which the truth of the premises is logically compatible with the falsity of the conclusion. The best one can hope for is that an inductive argument provides very strong support for its conclusion.

4. Induction affirming the consequent?

It is this inductive type of argument which the apologetical critique above is trying to address, it seems to me. They are saying that this type of scientific argument is really of the following form:

  1. If all a‘s are G, then all observed a’s will be G               (If p, then q)
  2. All observed a’s are G                                                         (q)
  3. Therefore, all a‘s are G.                                                      (Therefore, p)

Notice that the second premise and the conclusion (2 and 3) are precisely the inductive argument from above; we have just added an additional premise (1), the conditional premise, onto the inductive argument. This fundamentally changes the form of the argument: it now has the form of the deductively invalid argument ‘affirming the consequent’.

There are three problems with this as a critique of scientific inferences. Firstly, we have added a premise to an already deductively invalid argument, and shown that the result is deductively invalid, which is kind of obvious. Secondly, it characterises scientific inferences as a type of deductive inference, when there is good reason for thinking that they are not (at least if scientific inferences are supposed to discover synthetic truths about the world). Lastly, the addition of the first premise seems patently irrational, and obviously a perversion of normal inductive arguments. Let’s expand on each of these three problems:

Firstly

All the apologetical critique has demonstrated is that one can make a fallacious deductive argument by adding premises to an inductive argument. However, inductive arguments are already deductively invalid. There is a fallacy called the inductive fallacy. It consists of taking an inductive inference to be deductively valid. So if you thought that all observed swans being white logically entailed that all swans are white, then you have committed the inductive fallacy, because you would have mistaken the relation between the premise and the conclusion to be one of deductive validity, when it is merely that of inferential support. All observed swans being white does provide some reason to think that they are all white, but the fallacy is in thinking that it alone is sufficient to establish with certainty that they are all white.

The addition of the first premise does nothing to undermine an inductive inference. It doesn’t make it more fallacious than it was in the first place. In a sense, this analysis commits the essence of the inductive fallacy, in that it says that scientific inferences are deductive when they are not; the claim that scientific inferences are guilty of affirming the consequent is itself an instance of the inductive fallacy.

Secondly

We could, if we wanted to, add premises to an inductive argument to make it deductively valid, as follows:

  1. If all observed a’s are G,  then all a‘s are G        (If p, then q)
  2. All observed a’s are G                                             (p)
  3. Therefore, all a‘s are G.                                          (q)

Now the addition of the first premise has made the argument deductively valid, as it is just an instance of modus ponens.
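Running the same truth-table check as before confirms the difference (again, just an illustrative sketch, not part of the original post): this time no row makes both premises true and the conclusion false.

```python
from itertools import product

def implies(p, q):
    # Material conditional: false only when p is true and q false.
    return (not p) or q

# Modus ponens: premises {p -> q, p}, conclusion q.
# A counterexample would be a row with both premises true and q false.
counterexamples = [(p, q) for p, q in product([True, False], repeat=2)
                   if implies(p, q) and p and not q]

assert counterexamples == []  # no such row: the form is deductively valid
```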

The apologists were reconstructing scientific inferences as fallacious deductive arguments. Yet, even if we patched up the argument, as above, with a deductively valid version of the inference, we would still face a problem. This is that we would now have a deductive argument, just as with Aristotle’s methodology. The very same reasons would remain for rejecting it, namely that as a methodology it provides no new synthetic truths; it only tells you what follows from purported first principles, not what the first principles are. We would be back to Aristotle’s dubious idea of introspecting to discover them. Thus, it isn’t desirable in principle to reconstruct an inductive argument as a deductive argument – even if the result is deductively valid. This means that the claim that scientific reasoning is a failed attempt at being deductively valid is implausible; even if scientific reasoning succeeded in being deductively valid, that would be no help. The lesson is that inductive inferences are a different type of inference, not to be judged by whether they are deductively valid or not.

Thirdly

Our original inductive argument went from the premise about what had been observed to what had not been observed. The whole point of inductive arguments is to expand our knowledge of the world, and so this movement from the observed to the unobserved is crucial. It is essentially of the form:

The observed a‘s are G ⇒ All a‘s are G

However, the first premise of the affirming the consequent reconstruction gets this direction of travel the wrong way round. They have it as:

All a‘s are G ⇒ the observed a‘s are G

If we keep clearly in mind that the objective of the scientific inference is to expand our knowledge, the idea of starting with the set (all a‘s) and moving to the subset (the observed a‘s) is weird. How could doing so expand our knowledge? It is an inward move. This conditional, though, has been added to an inductive inference by our apologetical friends as a way of forming the ‘affirming the consequent’ fallacy out of an inductive inference. But given that it gets the direction of travel exactly backwards, why on Earth would anyone ever accept it as a legitimate characterisation of a scientific pattern of reasoning?

This last concern highlights the cynicism inherent in the affirming the consequent critique. It isn’t a way of honestly critiquing a problem in science, but just an instance of gerrymandering an inductive inference, i.e. the change has been made just for the purposes of making the inference look bad, rather than as a way of highlighting a genuine issue. There is no independent reason for adding it on.

4. Or is it?

It might be claimed that I am pushing this objection too far. After all, there is reason to add the conditional premise to the inductive inference: theories make predictions. If a theory is true, then the world will have certain properties. And we do find examples of experiments in which a positive test result is used as a way of confirming the theory. If this is right, then it looks like we have a conditional, and we are saying that the antecedent is true because the consequent is. So are we not back at the original motivation for the affirming the consequent critique?

Well, no. We are not. Here’s why. Let’s take the textbook example. In Einstein’s general relativity, one of the many differences from classical Newtonian physics is that gravity curves spacetime. That means there would be observable differences between the two theories. One such situation is when the light from a star which should be hidden behind the sun is bent round in such a way as to be visible from Earth:

[Figure: light from a star behind the sun is bent around it, making the star visible from Earth]

We already knew enough about the positions of the stars to be able to predict where a given star would appear on the Newtonian picture, and the details of Einstein’s theory provided ways to calculate where the star would appear on that model. So the Newtonian model said the star would be in position Y, and the Einsteinian theory said it would be in position X.

These experiments were actually done, and the result was that the stars were measured to be where Einstein’s theory predicted, and not where Newton’s theory predicted.

Is this an example of the affirming the consequent fallacy? It might look like it. After all, it may well look like we were making this sort of argument:

  1. If general relativity is correct, then the star will be at X  (if p, then q)
  2. The star is at X  (q)
  3. Therefore, general relativity is correct.  (p)

However, the real development was not that general relativity was confirmed when these measurements were made, but that Newtonian physics was falsified. Corresponding to the above argument we have a different one:

  1. If Newtonian physics is correct, then the star will be at Y  (if p, then q)
  2. The star is not at Y  (~q)
  3. Therefore, Newtonian physics is not correct.  (~p)

The first argument is a logically invalid deductive argument; it is affirming the consequent. But the second argument is just modus tollens (If p, then q; ~q; therefore, ~p), and that is deductively valid.
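The difference between the two argument forms can be checked mechanically by enumerating all truth assignments. The following sketch (an illustration added here, not part of the original argument) tests whether any assignment makes the premises true and the conclusion false:

```python
from itertools import product

def implies(p, q):
    """Material conditional: 'if p then q' is false only when p is true and q is false."""
    return (not p) or q

# Affirming the consequent: premises {p -> q, q}, conclusion p.
# An argument form is valid iff every assignment satisfying the premises
# also satisfies the conclusion.
ac_valid = all(p for p, q in product([True, False], repeat=2)
               if implies(p, q) and q)

# Modus tollens: premises {p -> q, ~q}, conclusion ~p.
mt_valid = all(not p for p, q in product([True, False], repeat=2)
               if implies(p, q) and not q)

print(ac_valid)  # False: p=False, q=True satisfies the premises but not the conclusion
print(mt_valid)  # True: no counterexample exists
```

The counterexample to affirming the consequent is exactly the case at issue in the text: the star could be at X (q true) even though general relativity were false (p false).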

What we learned from the measurement of light bending round the sun was not that general relativity was true as such, but that Newtonian physics, and any theory relevantly similar to it, was false. General relativity may still be false, for all the experiment showed; but it did show that whatever theory correctly describes the physics of our universe is going to be more along the lines of general relativity than of Newtonian physics. We learned something about the world, even if we did not confirm with complete certainty that relativity is true. And this is what scientific progress is like.

5. Conclusion

It would be affirming the consequent if someone thought that the positive measurement deductively entailed that general relativity was true. If any scientist has gone that far, then they are mistaken. That does not mean, however, that the scientific method itself is mistaken.

Induction, God and begging the question

0. Introduction

I recently listened to a discussion in which an apologist advanced a particular argument about the problem of induction, as part of a dialectic aimed at pinning a sceptic on the topic. The claim being advanced was that inductive inferences are instances of the informal fallacy of ‘begging the question’, and thus irrational. This was said in an attempt to get the sceptic to back down from the claim that induction is justified.

However, the apologist’s claim was a mistake; inductive inferences are not instances of begging the question. Unpacking the error is instructive, as it shows where the argument ends up once it is repaired. I argue that the apologetic technique used here is unsuccessful when taken to its logical conclusion.

1. Induction

Broadly speaking, the problem of induction is how to provide a general justification for inferences of the type:

All observed a’s are F.

Therefore, all a’s are F.

This sort of inference is not deductively valid; there are cases where the conclusion is false even though the premises are true. So, why do we think these are good arguments to use if they are deductively invalid? How do we justify using inductive inferences?

Usually, when we justify a claim, we either present some kind of deductive argument, or we provide some kind of evidential material. These are each provided because they raise the probability of the claim being true. So if I say that lead pipes are dangerous, I could either provide an argument (along the lines of ‘Ingesting lead is dangerous, lead pipes cause people to ingest lead, therefore lead pipes are dangerous’), or I could appeal to some evidence (such as the number of people who die of lead poisoning in houses with lead pipes), etc.

Given this framework, when we are attempting to justify the general use of inductive inferences, we can either provide a deductive justification (i.e. an argument) or an inductive justification (i.e. some evidence).

A deductive justification would be an argument which showed that inductive inference was in some sense reliable. But with any given inductive inference, the premises are always logically compatible with the negation of their conclusion. With any given inference, there is no a priori deductive argument which could ever show that the inference leads from true premises to true conclusion. You cannot tell just by thinking about it a priori that bread will nourish you or that water will drown you, etc. No inductive inference can be known a priori to be truth preserving. Thus, there can be no hope of a deductive justification for induction.

Let’s abandon trying to find a deductive justification, then. All that is left is an inductive justification. But any inductive inference in support of inductive inference in general is bound to end up begging the question. Let’s go through the steps.

Imagine you are asked why you think inductive inferences are often rational to make. You might reply that they are justified because they have worked in the past; after all, you might say, inductive inferences got humankind to the moon and back. The idea is that induction’s success is some evidential support for induction.

However, this is not so, and we should not be impressed by induction’s track record. In fact, it is a red herring. Suppose (even though it is an overly generous simplification) that every past instance of any inductive inference made by anyone ever went from true premises to a true conclusion, i.e. that induction had a perfectly truth-preserving track record. Even then, we would not be able to appeal to this as a justification for our next inductive inference without begging the question. If we did, then we would be making an inductive inference from the set of all past inductions (which we suppose for the sake of argument to be perfectly truth-preserving) to the next future induction (and the claim that it is also truth-preserving). But moving from the set of past inductive inferences to the next one is just the sort of thing we are trying to justify in the first place, i.e. an inductive inference. It is just a generalisation from a set of observed cases to unobserved cases. To assume that we can make this move is to assume that induction is already justified.

So if someone offers the (even perfect) past success of induction as justification for inductive inferences in general, then this person is assuming that it is justified to use induction when they make their argument. Yet, the justification of this sort of move is what the argument is supposed to be establishing. Thus, the person arguing in this way is assuming the truth of their conclusion in their argument, and this is to beg the question.

Thus, even in the most generous circumstances imaginable, where induction has a perfect track record, there can be no non-question begging inductive justification for future inductive inferences.

2. Does induction beg the question?

We have seen above that when trying to provide a justification for induction, there can be no deductive justification, and no non-question begging inductive justification. Does this mean that inductive inferences themselves beg the question? The answer to that question is quite clearly: no.

There is, however, an informal fallacy in the vicinity, called (not surprisingly) the fallacy of induction. The fallacy consists in treating inductive arguments like deductive arguments. The irrationality being criticised is that of supposing that because ‘All observed a’s are F’ is true, it follows that ‘All a’s are F’ is true. Making that move is a fallacy.

Begging the question is when an argument is such that the truth of the conclusion is assumed in the premises. Inductive inferences do not assume the truth of the conclusion in the premises. For example, when you decide to get into a commercial plane and fly off on holiday somewhere, you are making an inductive inference. This is the inference from all the safe flights that have happened in the past, to the fact that this flight will be safe. The premise is that most flights in the past have been safe. Because (as an inductive argument) the premise is logically compatible with the falsity of its conclusion, the premise clearly does not assume that the next flight will be safe, and so the argument does not beg the question.

In fact, this shows that no argument can be both a) an inductive argument and b) guilty of the fallacy of begging the question: an argument whose premises assume its conclusion is an argument whose premises entail that conclusion, making it deductively valid, which no inductive argument is. So, technically, the apologist’s claim that inductive inferences beg the question is provably false.

Of course, if we tried to justify induction in general by pointing to the past success of induction, that would be begging the question. But to justify the claim that the next flight will be safe by pointing out the previous record of safe flights is not begging the question, it is just an inductive inference.

So the apologist who made the claim that induction begs the question is just wrong about that. He was getting confused by the fact that justifying induction inductively is begging the question. But when we keep the two things clear, it is obvious that inductive inferences themselves do not, and indeed cannot, beg the question.

3. But what if it did?

Induction does not beg the question. That much is pretty clear. But what would follow if induction were guilty of some other fallacy? Well, if each inductive inference were itself an instance of some form of circular reasoning (such as begging the question), then people would be acting irrationally whenever they make inductions, such as deciding it is safe to fly on a plane. Yet it seems that people are not irrational when they make decisions like this. Sure, there are irrational inductive inferences, such as inferring from the fact that the last randomly selected card was red that the next card will be red. But not all inductive inferences are like this; the plane example is not. So the person who wants to claim that inductive inferences are circular has to say something which explains the distinction between paradigmatically rational inferences (like flying) and less rational (or irrational) inductive inferences. Saying that they are all circular would leave no room to distinguish between the good and the bad inductive inferences.

So the apologist owes us an account of how inductive inferences that seem perfectly rational could nonetheless all be irrational. In response, they could make the radical move of rejecting inductive inferences altogether. This would mean doubling down on the claim that induction is circular; ‘Yes, it is circular’, they will say, ‘throw the whole lot out!’.

Yet they are unlikely to make this move. Everyone makes inductive inferences all the time. Every time you take a breath of air, or a drink of water, you are inferring what will result from your previous experiences of those activities. You are inductively inferring that water will quench your thirst because it has done so in the past. So if the apologist wants to reject induction altogether, then he must not also rely on it like this, on pain of hypocrisy.

More likely than outright rejection, they will try to maintain that although induction is irrational in some sense, it can still be done rationally nonetheless. After all, there is a big difference between inferring that the next plane will land safely, or that the next glass of water will nourish, and inferring that the next card will be red. The former are well supported by the evidence, whereas the latter is not. This is what allows us to distinguish between rational and irrational inductive inferences. Not all inductive inferences are on a par; some have lots of good evidence backing them up, and some have none.

So, if the apologist wants to maintain that all inductive inferences are guilty of begging the question, then (assuming they don’t deny the rationality of all induction) they would still owe us an account of what makes the difference between a rational inductive inference and an irrational one. And the account would have to be something along the evidential lines I have just sketched. How else does one figure out which inductive inferences are rational and which are not, if not by appeal to the evidence? If some new fruit were discovered, you would not want to be the first person to try it, for fear of it being poisonous. But if you saw 100 people eat the fruit without dying, you would begin to feel confident that it wasn’t poisonous. This is perfectly rational. Thus, even if the apologist’s claim were correct, if they do not want to reject induction altogether, they end up in the same situation as the atheists, having to distinguish between good and bad inductive inferences based on the available evidence in support of them.
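One simple way of making this evidential grading precise (an illustrative sketch; the argument above does not depend on any particular formal model) is Laplace’s rule of succession: after s successes in n observed trials, the probability that the next trial succeeds is estimated as (s + 1) / (n + 2).

```python
from fractions import Fraction

def rule_of_succession(successes, trials):
    """Laplace's estimate of the probability that the next trial succeeds,
    given `successes` out of `trials` observed so far."""
    return Fraction(successes + 1, trials + 2)

# No one has tried the fruit yet: maximal uncertainty.
print(rule_of_succession(0, 0))      # 1/2
# 100 people have eaten it without harm: confidence is now high.
print(rule_of_succession(100, 100))  # 101/102
```

On this picture, the difference between the fruit inference and the red-card inference is just a difference in how much observed evidence supports the generalisation, which is the distinction the apologist would still need.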

Even if the charge of irrationality stood (which it does not), it would have to be relegated to the status of playing no role in distinguishing good inductive inferences from bad ones. This drains the point of most of its real force.

The claim that induction is irrational was not true, but in a sense it makes no material difference even if it were; we would still need to distinguish the better inductions from the worse ones.

4. Justifying induction with God

Some theists suggest that they have an answer to this problem which is not available to an atheist. The idea is that through his revelation to us, God has communicated that he will maintain the uniformity of nature. Given this metaphysical guarantee of uniformity, inductive inferences can be deductively justified. When we reason from the set of all observed a’s being F to all a’s being F, we are projecting a uniformity from the observed into the unobserved. Yet we were unable to justify making this projection. The theist’s answer is that God guarantees the projection.

We may initially suspect foul play here. After all, how do we know that God will keep his word? It does not seem to be a logical truth that because God has promised to do X, that he will do X. It is logically possible for anyone to promise something and not do it. Thus, it seems like we have just another inductive inference. We are saying that because God has always kept his promise up till now, he will continue to do so in the future. The best we can get out of this is an inductive justification for induction, which is just as question begging as the atheist version of appealing to the past success of induction. I think this objection is decisive. However, let’s suspend this objection for the time being. Even if somehow we could get around this, maybe by saying that it is a necessary truth that God will not break his promise or something, I say that even then we have an insurmountable problem.

5. Why that doesn’t help

The problem now is that while God may plausibly have promised to maintain the uniformity of nature, he has not revealed to us precisely which inductive inferences are the right ones, i.e. the ones which track the uniformity he maintains, as opposed to those which do not. God’s maintaining the uniformity of nature does not guarantee that inductive inferences are suddenly truth-preserving. Even if it is true, it did not stop the turkey making the unsuccessful inference on Christmas Eve that he would get fed tomorrow, and it did not stop the people who boarded the plane which ended up crashing. Even if God has maintained the uniformity of nature, and even if he has revealed that he has done so in such a way that we can be certain about it, we are still totally in the dark about which inductive inferences we can successfully make.

So let’s suppose we live in a world where God maintains the uniformity of nature, and that he has told us that he does so. When faced with a prospective inductive inference, and trying to decide whether it is more rational (like the plane ride) or irrational (like the card colour) to make the inference, what could we appeal to in order to help us make the distinction? We cannot appeal to God’s word, as nowhere in the bible is there a comprehensive list of potential inductive inferences which would be guaranteed to be successful if made (which would be tantamount to a full description of the laws of nature). Priests were not able to consult the bible to determine which inductive inferences to make when the plague was sweeping through medieval Europe. They continued to be unaware of what actions of theirs were risky (and would lead to death) and which ones were safe (and would lead to them surviving). The only way to make the distinction between good inductive inferences and less good ones is by looking at the evidence for them out there in the world. Knowing that God has guaranteed some regularity or other is no help if you don’t know which regularity he has guaranteed.

The problem is that we are unable to determine, based only on a limited sample, whether any inductive generalisation we make is actually catching on to a uniformity of nature, or merely latching on to a coincidence. When Europeans reasoned from the fact that all observed swans were white to the conclusion that all swans were white, they thought that they had discovered a uniformity of nature, namely the colour of swans. They didn’t know that in Australia there were black swans. And this sort of worry is going to be present in each and every inductive inference we can make, even if we postulate that we live in a world where God maintains the uniformity of nature and has revealed that to us. The problem is primarily epistemological: how can we know which inductive inferences are truth-preserving? The apologist’s answer is metaphysical: God guarantees that some inductive inferences are truth-preserving (i.e. the ones which track his uniformities). For the apologist’s claim to be of any help, God would have to reveal to us not just that he will maintain the uniformity of nature, but which purported sets of observations are generalisable (i.e. which ones connect to a genuine uniformity). Unless you know that God has made the whiteness of swans a uniformity of nature, you cannot know whether your induction from all the observed cases to all cases is truth-preserving. And God does not reveal to us which inductive inferences are correct (otherwise Christians would have a full theory of physics).

In short, even if we go all the way down the road laid out by the apologist, they still have all the same issues that atheists (or just people of any persuasion who disagree with the theist’s argument laid out here) do. They have no option but to use the very same evidential tools that atheists (etc) do to make the distinction between the more rational and less rational inductive inferences.

6. Conclusion

The apologist’s claim was that inductive inferences are question-begging. I showed that this is not the case (and that in fact it could not be the case). Then I went on to consider what would be at stake if the apologist had scored a point. We saw that the apologist would still need to distinguish better and worse inductive inferences, just like the atheist, with no option but to use evidence to do so. Then we looked at the idea that God guarantees that there will be some uniformity of nature. We saw that this claim makes no material difference to the status of inductive inferences, and so cannot be seen as a justification of induction in any real sense.