Hume
Here’s a quote from David Hume in Section 10, Part 1 of his classic Enquiry Concerning Human Understanding (with the part enclosed in single quotes being what some call “Hume’s Maxim”):
The plain consequence is (and it is a general maxim worthy of our attention), ‘That no testimony is sufficient to establish a miracle, unless the testimony be of such a kind that its falsehood would be more miraculous, than the fact, which it endeavours to establish: And even in that case there is a mutual destruction of arguments, and the superior only gives us an assurance suitable to that degree of force, which remains, after deducting the inferior.’ When anyone tells me, that he saw a dead man restored to life, I immediately consider with myself, whether it be more probable, that this person should either deceive or be deceived, or that fact, which he relates, should have really happened. I weigh the one miracle against the other; and according to the superiority, which I discover, I pronounce my decision, and always reject the greater miracle. If the falsehood of his testimony would be more miraculous, than the event which he relates; then, and not till then, can he pretend to command my belief or opinion.
When talking about whether there is sufficient evidence for miracles, it might be tempting for some to say, “Hume proved there can’t be evidence for it; look it up.” One reason this temptation should be resisted is that the theist can play the same game and say, “Hume’s case against miracles is an abject failure; look it up,” and we will have gotten nowhere.
There’s an annoying maneuver I’ve seen used by theists and atheists alike that I call “the literature toss.” Instead of explaining why they think an opponent’s position is wrong, they say something like, “Read this book.” Don’t get me wrong, reading books is great. Theists should read pro-atheism books and atheists should read pro-theism books to better understand different points of view and think critically. But a literature toss is often not a good substitute for real dialogue.
An atheist can throw a Hume book at the theist, the theist can throw back Hume’s Abject Failure by philosopher John Earman, and the atheist can counter with A Defense of Hume on Miracles by philosopher Robert J. Fogelin. To those two people I say, “When you’re done throwing books at each other, maybe you can engage in some real conversation like adults.”
But there’s another reason why simply saying “look at Hume” won’t work here: Hume is (at least to some degree) too ambiguous. When Robert J. Fogelin critiques John Earman in chapter two of A Defense of Hume on Miracles, he doesn’t critique Earman’s math (and Earman makes extensive use of mathematics to prove various points) but rather how Earman interpreted Hume. The Stanford Encyclopedia of Philosophy notes that the maxim is “open to interpretive disputes,” and of the passage I quoted as a whole it says the “interpretive issues are too extensive to summarize.” So instead of saying, “Hume showed we can’t have evidence for miracles,” you should just put forth the argument itself, because your interpretation of Hume might not be their interpretation.
Actually, some scholars believe that in his Enquiry Hume didn’t intend to prove that we can’t in principle have sufficient testimonial evidence of miracles, but rather that no testimony has in fact provided sufficient evidence of a miracle. All that said, here is part of what I think Hume is trying to say in the passage above: when we encounter testimony of a miracle, we need to consider which is more likely, that the testimony is false (“this person should either deceive or be deceived”) or that the testimony is true. Even if we judge it more likely to be true, whatever evidence we have that the testimony is false mitigates the evidential force of the pro-miracle testimony (“and the superior [evidence of the testimony] only gives us an assurance suitable to that degree of force, which remains, after deducting the inferior [evidence that the testimony is false]”). That sounds simple enough, and while the theist may say that the witness is more likely telling the truth, given the (presumed) reliability of that witness, there’s some mathematics that throws a wrench into the gears of the theist’s thinking.
Surprising Math
To illustrate the general problem, consider the following scenario I'll call the “Taxicab Hit.” Suppose we have two taxicab companies, one whose taxis are painted blue and another whose taxis are painted green. On the roads, 85% of the taxicabs are green and 15% are blue. One night, a taxi hit another car and drove off, and an eyewitness says it was a blue cab. Under conditions like those on the night of the accident, the witness correctly identifies the color 80% of the time (and thus fails 20% of the time). The question: what is the probability that the witness correctly identified the color of the taxicab in the Taxicab Hit scenario?
Many would say that the probability is 80%, but that turns out to be a mathematical mistake. To understand why, I’ll quickly introduce a bit of math, starting with some basic symbolism you might already be familiar with:
- P(A) = the probability that A is true.
- P(A|B) = the probability that A is true given that B is true.
In the case where events B and C are mutually exclusive (i.e. they can’t both be true) and exhaustive (i.e. one of them had to have happened), the following equation holds due to something called Bayes’ theorem:
P(B|A) = [P(B) × P(A|B)] / [P(B) × P(A|B) + P(C) × P(A|C)]
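If you like seeing formulas as code, here’s a minimal sketch of that two-hypothesis form of Bayes’ theorem in Python; the function name `posterior` and the argument names are just my own labels for the terms in the equation above.

```python
def posterior(p_b, p_a_given_b, p_c, p_a_given_c):
    """Two-hypothesis Bayes' theorem: returns P(B|A), where B and C
    are mutually exclusive and exhaustive alternatives."""
    numerator = p_b * p_a_given_b                  # P(B) x P(A|B)
    denominator = numerator + p_c * p_a_given_c    # adds P(C) x P(A|C)
    return numerator / denominator
```

The numerator is the B-route to the evidence A; the denominator adds up every route to A, which is why the result is a proportion between 0 and 1.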
Let’s use the following symbols:
- W = the witness reports that the taxi was blue.
- B = the taxi that hit the car is blue.
- G = the taxi that hit the car is green.
- P(G) = the prior probability that the taxi was green (i.e. the probability that the taxi was green prior to our consideration of the eyewitness evidence). Thus, P(G) = 0.85.
- P(B) = the prior probability that the taxi was blue (i.e. the probability that the taxi was blue prior to our consideration of the eyewitness evidence). Thus, P(B) = 0.15.
- P(B|W) = the probability that the taxi was blue given that the witness reports that it is blue.
- P(W|B) = the probability that the witness reports the color as blue when the color was blue.
- P(W|G) = the probability that the witness reports the color as blue when the color was green.
Remember: the witness correctly identifies a taxi’s color 80% of the time; regardless of whether the taxi’s actual color is green or blue, the witness gets it right 80% of the time and thus wrong 20% of the time. That means that P(W|B) is 80% (0.8) and P(W|G) is 20% (0.2). So, due to Bayes’ theorem, our equation is this:
P(B|W) = [P(B) × P(W|B)] / [P(B) × P(W|B) + P(G) × P(W|G)]
And plugging in our values gives us this:
P(B|W) = (0.15 × 0.8) / (0.15 × 0.8 + 0.85 × 0.2) ≈ 0.41
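If you’d like to check that arithmetic yourself, here is the same calculation as a short Python sketch (the variable names are mine, chosen to mirror the symbols above):

```python
# Taxicab Hit: P(B|W), the probability the cab was blue given the report "blue"
p_blue, p_green = 0.15, 0.85       # prior probabilities (base rates)
p_w_given_blue = 0.8               # witness says "blue" when the cab is blue
p_w_given_green = 0.2              # witness says "blue" when the cab is green

numerator = p_blue * p_w_given_blue
denominator = numerator + p_green * p_w_given_green
print(round(numerator / denominator, 2))   # prints 0.41
```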
Strange but true: even with the evidence of the witness’s testimony, the taxicab is probably not blue. If you made the mistake of thinking the taxicab was probably blue in light of the testimony, you’re not alone. This taxicab question was famously posed by Amos Tversky and Daniel Kahneman. The thing that throws people off is the base rates of the green and blue taxis (85% green, 15% blue), which in math terms are the “prior probabilities.”
If you want a more concrete way to look at it, suppose there are 100 taxicabs: 85 of them are green and 15 are blue. Now suppose our eyewitness correctly identifies the color 80% of the time, which means he incorrectly identifies it 20% of the time. Since there are 85 green cabs and he misidentifies 20% of them, he incorrectly reports 17 green taxicabs as blue (since 85 × 0.20 = 17), even though there are only 15 genuinely blue taxicabs! To make matters worse, of the 15 real blue taxicabs, he correctly identifies only 12 (since 15 × 0.80 = 12). So among the taxicabs he reports as blue, 17 are not blue and only 12 actually are. Because so few of the taxicabs are actually blue, an 80% success rate is not enough to overcome the low base rate of blue taxicabs.
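Here’s that counting version as a Python sketch as well, just to show it lands on the same number (the 100-taxicab population is the same illustrative assumption as above):

```python
# Counting version of the Taxicab Hit with 100 cabs: 85 green, 15 blue
misreported_as_blue = 85 * 0.20    # green cabs wrongly reported as blue -> 17
correctly_blue = 15 * 0.80         # blue cabs correctly reported as blue -> 12

# Of all the "blue" reports, what share were actually blue?
print(round(correctly_blue / (correctly_blue + misreported_as_blue), 2))  # 0.41
```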
Probability and Miracles
The same principle holds for miracles: when the prior probability is low enough, even a highly reliable witness might not be enough to overcome it. To give a concrete example, let us optimistically assume that the prior probability of a particular person miraculously rising from the dead is one in a million. That’s absurdly optimistic when you consider that billions of people die without miraculously rising from the dead, but let’s go with that absurdly generous prior for now. Now suppose we have good old reliable Pete, who is 99.9% reliable (which means he’s wrong 0.1% of the time), and suppose we use the following symbols:
- W = Pete reports, as an (alleged) eyewitness, that the miracle occurred.
- M = the miracle occurred.
- ¬M = the miracle did not occur.
- P(M) = the prior probability of the miracle; thus P(M) = 0.000001.
- P(¬M) = the prior probability of the miracle not having occurred; thus P(¬M) = 0.999999.
- P(W|M) = the probability that Pete reports the miracle given that the miracle did occur; thus P(W|M) = 0.999.
- P(W|¬M) = the probability that Pete reports the miracle given that the miracle did not occur; thus P(W|¬M) = 0.001.
Our equation is this:
P(M|W) = [P(M) × P(W|M)] / [P(M) × P(W|M) + P(¬M) × P(W|¬M)]
And plugging in our values gives us this:
P(M|W) = (0.000001 × 0.999) / (0.000001 × 0.999 + 0.999999 × 0.001) ≈ 0.000998
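And here is reliable Pete’s case as the same kind of Python sketch, using the one-in-a-million prior and 99.9% reliability assumed above:

```python
# Miracle report: P(M|W), the probability the miracle happened given Pete's report
p_miracle = 0.000001                   # generous prior: one in a million
p_no_miracle = 0.999999
p_report_given_miracle = 0.999         # Pete reports it when it happened
p_report_given_no_miracle = 0.001      # Pete reports it when it didn't happen

numerator = p_miracle * p_report_given_miracle
denominator = numerator + p_no_miracle * p_report_given_no_miracle
print(round(numerator / denominator, 6))   # about 0.000998, i.e. roughly 0.1%
```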
So the probability that the miracle actually occurred, even given that absurdly generous prior, is still only about 0.1%, which means we can be about 99.9% sure that the “miracle” is baloney. Of course, the base rate for miraculous resurrections is much, much lower than one in a million. You have my permission to come up with your own miracle scenarios and do some math yourself.