CPBD 083: Richard Carrier – Historical Method and Jesus of Nazareth

by Luke Muehlhauser on January 2, 2011 in Historical Jesus,Podcast

(Listen to other episodes of Conversations from the Pale Blue Dot here.)

Today I interview historian Richard Carrier about historical method and Jesus of Nazareth.

An important followup to the content of this interview is here.

Download CPBD episode 083 with Richard Carrier. Total time is 1:07:01.

Note: in addition to the regular blog feed, there is also a podcast-only feed. You can also subscribe on iTunes.

Transcript

Transcript prepared by CastingWords and paid for by Silver Bullet. If you’d like to get a transcript made for other past or future episodes, please contact me.

LUKE: Alright. Today I’m speaking with Dr. Richard Carrier and we’re going to talk about his upcoming books on historical method and the resurrection of Jesus. Richard, welcome back to the show.

RICHARD: Glad to be here. Always a joy.

LUKE: So, Richard, you’ve got this two volume work coming out. The first volume is “Bayes’ Theorem And Historical Method,” correct?

RICHARD: Right.

LUKE: The second one is something on Jesus?

RICHARD: The second one will be the title I was originally going to have for the full set which is “On The Historicity Of Jesus Christ”.

LUKE: Right.

RICHARD: That’s what it will be on… It will just be addressing that question head on. The first one does have a subtitle, which answers the question, like, what the hell do Bayes’ Theorem and historical method have to do with Jesus? It’s “Bayes’ Theorem And The Historical Method: The Failure Of Historicity Criteria In The Study Of Jesus”. What I did was, as I was working this out, I realized… The biggest problem with making this book on the historicity of Jesus Christ is cutting material.

There are so many questions and arguments and evidence to present that there’s just no way to do it in a book that anyone’s going to read. To illustrate this, look at Earl Doherty’s two books. You remember The Jesus Puzzle?

Nice short book, pretty good, a few flaws but otherwise a good work on the historicity of Jesus, on the opposite side of it. Then he came out with his new version, which is “Jesus: Neither God Nor Man” – this gigantic 800-page tome that I’m sure will terrify anyone who even sees the book and would be very hard to get through.

It reminds me of some of the other works of Jesus scholars. Eisenman’s books, for example, just look terrifying: huge, with dense words and everything. I didn’t want to do that, but I didn’t want to cut so much material that people could say, “Well, you didn’t cover this or you didn’t cover that.”

So, as I was going through and completing the book, I realized I had pretty much most of the methodology stuff done. And I realized, well, actually, that’s a good self-contained argument in itself. Part of the argument I was making is that Jesus studies needs a new method.

In fact, the method that I’m advocating is a method all historians should use. It actually underlies all historical methods. So, making a broader argument that would be applicable to Jesus studies, I took all the methodology stuff and put it in there.

What I kept from the Jesus studies is one chapter demonstrating – I’m just following up on what other scholars have done – demonstrating that the current methodology is bankrupt, it’s invalid. It’s what they call the “criteria of historicity” that they’re using.

The most common are things like the argument from embarrassment, the criterion of double dissimilarity, and all these fancy names. Some are obvious, like the argument from vividness and whatnot. There’s a list; I come out with about 31 of these things [laughter], with overlap and different uses of words. There are probably maybe a hundred that have been advocated.

But they’d all probably reduce to about 31, maybe even fewer than 31. There are probably some more beyond the ones I’ve identified that I’ll have to go dig up. I go through point by point showing how they’re logically invalid. They don’t lead to the conclusions that they purport to.

LUKE: Wait. So, Richard what you’re talking about here is, if somebody is familiar with Christian apologetics from the past 30 years, you’ve probably heard people arguing that the resurrection of Jesus really happened because the source materials meet these certain criteria for historicity. Things like the criterion of embarrassment, the criterion of dissimilarity, that kind of thing. Could you give examples of what a few of those are just so that people know what we’re talking about?

RICHARD: Yes. I’ll give you one that I’ve already written about many times. It’s in “Not The Impossible Faith”, and it’s in “The Empty Tomb” in many chapters there. It’s the argument from embarrassment. The basic argument – and I spent a lot of time on this in this forthcoming book; it’s the one key example they use, and I completely destroy it – is that Christian authors would not make up or preserve stories that painted them in a bad light or that made their mission difficult. They’re not going to make it hard for themselves.

So, when they do something like put women as the discoverers of the empty tomb – the first to discover the empty tomb – when women had such a low status and no one trusted their testimony, this is the way they pose the argument: why would you do that?

If you’re going to make up a story about the empty tomb you put men there because people trusted men and not women. There are a number of problems with this. The whole argument is logically invalid to begin with.

But it’s also factually incorrect, and this is one thing I’ve pointed out: this claim that women’s testimony wasn’t trusted is simply not true. In “Not The Impossible Faith” I have a whole chapter demonstrating that.

The arguments that have been advanced for it are completely fallacious and wrong. So, the criterion doesn’t even apply. What I do in this book is point out not only that this often happens but that the criterion itself is defective.

There are many examples, and I give several, where completely false things that made great difficulties for religious people were invented nonetheless. We know they’re invented. One of them is the castration of Attis. There’s this Attis cult, this whole religion going on around the time of Christianity; it originates before Christianity.

It spreads to Rome by like 100 BC at least, so it’s all over the place. In this there’s the myth that Attis – this is a God, the God Attis, son of a God in fact, even – kills himself by cutting off his balls. [laughter] In honor of this, his priests cut off their balls. [laughter]

Now, in Roman culture this kind of emasculation is one of the most embarrassing, insulting things that you could do. It totally dehumanizes you, in this whole sort of masculine, macho culture. That’s the worst thing you could do, so why on earth would you ever worship a eunuch as your savior?

That’s just disgusting and foul and contrary to all nature. A lot of Romans make this argument. It’s just ridiculous and absurd. It’s like Seneca has a line… If it wasn’t for the vast number of the mad throng you would be certain they were crazy, right?

The only argument you have for them being sane is that there are so many of them. Other than that, there’s no argument really. So, clearly there was no son of God, Attis, who cut off his balls. That’s clearly a myth; it’s completely made up.

So, this idea that people wouldn’t invent embarrassing things to sell their religion is intrinsically false. It’s the inference, the principle of inference is wrong to begin with. So, the reality is more complicated than that.

In this book that’s coming out, “Bayes’ Theorem And The Historical Method”, I get more sophisticated. The argument from embarrassment is similar to a legal rule that allows hearsay: the statement against interest. There’s already a whole legal literature built up on its application.

It’s based on the same idea: normally hearsay can’t be admitted in court because you can’t question the person who said it. That’s part of the Sixth Amendment, which says you’re entitled to confront the witnesses against you, or something like that. Or at least it entails that.

I can’t remember exactly the wording, but the point is they’ve interpreted the Sixth Amendment as meaning it’s unconstitutional to present witnesses against you that you can’t cross-examine.

So, hearsay can’t get in, but there are several exceptions to the hearsay rule. Not all of them are logical, but nonetheless they’re traditional and are locked into the law.

One of them is this idea of a statement against interest: the idea that the person who said it wouldn’t have said it unless they believed it.

It’s so contrary to their interests that this in itself is taken to be enough to support the statement being true, and so then you can admit that testimony.

So, in legal theory it’s a really complicated, convoluted thing, but it’s basically a similar principle.

And legal scholars have questioned this, saying it’s not even logically valid: being against interest doesn’t make the testimony any more reliable. The way this legal theory has washed out in case law and everything is that statements against interest are still admissible.

But usually juries are instructed that they can’t assume it’s true, that they have to critically evaluate the evidence like any other.

And even some scholars say we shouldn’t even be letting these things in because there’s too much opportunity for bias and influence and so on.

Various things have been said and I point out the similarity between them, discuss the legal literature and all that. Then I construct a logically valid version of the argument that’s broader that could be applied to law and history and everything.

And show that it only applies in very specific circumstances, none of which are satisfied in the gospels for example.

So, I go through that, and not only do I do that, but I also show that the correct structure of inference has to be Bayesian, meaning it has to conform to Bayes’ Theorem. That’s one of the main arguments of this coming book, and it’s probably the most controversial and unfamiliar part of that book.
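
To see the point, the criterion of embarrassment can be restated in Bayesian terms. What follows is a minimal sketch with assumed numbers, my illustration rather than Carrier’s own formulation: the criterion’s force depends entirely on how much likelier an embarrassing detail is under a true story than under an invented one.

```python
# Hypothetical restatement of the "criterion of embarrassment" in
# Bayesian terms.  Its force is the likelihood ratio: how much more
# likely is an embarrassing detail if the story is true than if it
# was invented?  All numbers are illustrative assumptions.
def posterior(prior, p_detail_if_true, p_detail_if_invented):
    """P(story is true | embarrassing detail) by Bayes' Theorem."""
    num = p_detail_if_true * prior
    return num / (num + p_detail_if_invented * (1 - prior))

# If embarrassing material is often invented anyway (the Attis case),
# the detail barely moves the posterior:
weak = posterior(0.5, 0.6, 0.5)    # ~0.55
# Only if such invention were genuinely rare would it count for much:
strong = posterior(0.5, 0.6, 0.1)  # ~0.86
```

If invention of embarrassing material is common, as the Attis example suggests, the likelihood ratio is near 1 and the detail proves little.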

I use Jesus studies as an example, in fact, because it will also be useful in my next book. But I also want to make this argument to historians generally, so the first book will be broadly of interest. It won’t be just for Jesus studies.

So, I defend that thing. The one chapter I have refuting all the historicity criteria is like the deconstructive part of the whole project, because once you see that their methods are wrong they don’t have any valid basis…

It doesn’t follow that just because their arguments are fallacious that their conclusions are wrong. It is possible to have completely fallacious arguments and the conclusions still just happen to be true. This first volume won’t refute the historicity of Jesus. It will just show that the fundamental basis for it is unsound.

LUKE: Yeah.

RICHARD: Now, historians could come along, take my own method, and then create a sound argument for the historicity of Jesus. In fact, I encourage that, if it can be done. In fact, that needs to be done, frankly: especially if you think historicity is defensible and true, you need to be able to take this valid method and use it to show that it is true. That’s the purpose of the first volume.

The second volume is going to show that it isn’t true.

[laughter]

RICHARD: By analyzing the facts. That will be controversial in its own right.

LUKE: Yeah.

RICHARD: And even that, I’m going to pose it as “this is just a hypothesis. Here’s why I think it’s right. Now it’s out there, now it’s rigorous. A professional historian has written it. I’ve got a logically valid method. As far as I can tell, the facts are all right. So tell me why I’m wrong?”

LUKE: Yeah.

RICHARD: I’m hoping that will start a debate in the academic community that won’t bypass the sort of… The standard historicity debate has been outside academia, oftentimes not methodologically rigorous…

LUKE: Yeah.

RICHARD: … easy to dismiss on those grounds alone. Fallaciously, but nonetheless easy. I want to make the argument more serious and finally get to a point where we can agree either that it’s undecidable, or that historicity is defensible by a logically valid method, or that it’s not. I’m hoping my book will have started a debate that will end within maybe 20 years, when we will have a consensus on one of those three endpoints. Even though my book will argue for one of those endpoints, I don’t assume that’s where it’s going to end up, but I think it’s probable that it will.

LUKE: So what you’re really doing in your first volume is talking about how we need to develop a rigorous method for getting at the truth about what probably happened in history. You’re certainly not satisfied with the types of methods that are used in historical Jesus studies much of the time.

RICHARD: Yeah.

LUKE: But you are also presenting something that is somewhat new to the practice of history anyway, not very new to the practice of science. Could you talk a little bit about that and the differences between how history is currently done and what you are proposing in this upcoming book?

RICHARD: Yeah and generalizing too to other fields other than Jesus studies. One of the things I do in one of the chapters in this first book is I actually take all of the major common historical methods that are used in general, not just in Jesus studies, and show that when they are valid, and they often are, they are completely described by Bayes’ Theorem so that in fact they are really just iterations of Bayes’ Theorem stated in different ways.

LUKE: Right.

RICHARD: In fact, the only way to make them valid is to define them in Bayesian terms. I show examples, even methods that I once advocated. For example, in Sense And Goodness Without God I have a section on historical epistemology and I give two common methodologies used by historians: one is the argument to the best explanation, and the other is the argument from evidence. I show the structure of them. They look great. You’re like, “Oh, yeah, those are really sound methods. You’re right. That’s probably the best way to do history.” A lot of historians would agree.

LUKE: Right.

RICHARD: Other than postmodernists, all historians would agree with those two methods. They might even propose some of their own that are similar. The problem with those methods is that when you start to get down to the nitty-gritty, they don’t solve two particular problems. One is what I call the threshold problem. They list criteria that evidence for a theory must meet for it to be true, but the methods themselves don’t say when it’s true.

LUKE: Right.

RICHARD: Does each criterion count the same? What if there is some contradiction, where it fails some criteria and passes others? What if you have two competing theories? How do you adjudicate? At what point does something being more probable become actually probable? That’s something that I run into and point out in the book. A lot of historians say, “Well, this makes the historicity of Jesus more probable, therefore Jesus probably existed,” which is a non sequitur.

LUKE: Right.

RICHARD: I can point out… You could take a 1% chance and make it 10%, and you have multiplied the probability by 10, which is a huge increase in probability, but there is still a 90% chance you’re wrong.
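
The arithmetic behind that non sequitur, spelled out with the same illustrative numbers:

```python
# "More probable" is not the same as "probable": a tenfold increase
# from a 1% chance still leaves the claim probably false.
p_before = 0.01              # 1% chance
p_after = p_before * 10      # multiplied by 10: a huge relative increase
chance_wrong = 1 - p_after   # ...yet still a 90% chance of being wrong
```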

LUKE: Yeah.

RICHARD: Ten percent is not enough to believe… Anyway, I take these historical methods and show that they reduce to Bayes’ Theorem. I think that’s why this will be broadly appealing to historians generally, because they can see, “oh yeah, that’s a good sound method. It’s like what I do.” Then I show that it’s really Bayes’ Theorem, and then they will kind of understand, “oh, that’s what logically validates that”.

LUKE: Right.

RICHARD: And the threshold problem is an example of that. Bayes’ Theorem solves the threshold problem. There are a lot of questions I get asked, like “what about this, what about that, isn’t it difficult, and here’s why I don’t think it would work,” etc. The book deals with all of those.

LUKE: Great.

RICHARD: I’ve interacted with historians and philosophers so that I know those standard objections, and I answer them all. That’s the point of the book. There’s another book, I think called The Logic of Science, which makes the argument that all scientific method reduces to Bayes’ Theorem. I am doing this now for history, and I think that unites…

LUKE: By E.T. Jaynes?

RICHARD: What’s that?

LUKE: By E.T. Jaynes?

RICHARD: Yeah, I think it’s Jaynes, right?

LUKE: Probability Theory: The Logic of Science.

RICHARD: That’s it. Exactly.

LUKE: So you are doing the same thing for history?

RICHARD: Exactly. I’m uniting humanities and sciences.

LUKE: Yeah.

RICHARD: Which I think is unpopular to do among members of the humanities, but I think it is more attractive to members of the mathematics community. They see more value in making the humanities mathematical, whereas people in the humanities are terrified of math and dogmatically opposed to even the idea…

LUKE: [laughs]

RICHARD: … that they have to think in math or do anything rigorous to get their conclusions.

LUKE: That’s too bad.

RICHARD: Yeah, oh, I know. John Allen Paulos has a book called Innumeracy that illustrates the fact that this fear of math is actually a real, threatening problem for a democratic society. We shouldn’t have this. The humanities shouldn’t be encouraging this. Math should be much more fundamental to how we conduct our lives and to our education system. Even the way we teach math is virtually useless. We are basically creating human calculators to be future engineers, most of whom will never be engineers, so it is pointless.

Even geometry is taught like “here are the procedures” and you are just a human calculator. You don’t really understand what you’re doing or why it even matters.

LUKE: Yeah.

RICHARD: There is so much mathematics that’s important. One of the examples I give that is related to Bayes’ Theorem is simply statistics in and of itself. We are politically manipulated constantly by the use of statistics. It’s routine. You’ve got poll numbers just constantly. All kinds of statistical arguments are made in the media and in the political arena.

If you don’t even understand how statistics are generated and the ways they can be manipulated or erroneous – which is all mathematical knowledge – you’re at the mercy of those arguments. You need this knowledge as a democratic citizen. It’s necessary.

It’s probably the most important mathematics that could be taught in high schools and yet, as far as I know, no high school teaches it.
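
As one concrete instance of the kind of statistics he has in mind (my example, not one from the interview): the margin of error of an opinion poll, which coverage of poll numbers routinely omits or misuses.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# A 1,000-person poll carries roughly a +/-3-point margin, so a reported
# "2-point lead" between candidates is within the noise.
moe = margin_of_error(1000)
```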

LUKE: Yeah.

RICHARD: Or if they do it’s an elective that only nerds would take, right?

LUKE: [laughs]

RICHARD: It should be fundamental. It’s way more important than geometry, and I think geometry is pretty important.

LUKE: Yeah.

RICHARD: It’s more important than algebra, even though you need some basic algebra to do statistics. When you take an algebra course, most of what you are doing is these bizarre quadratic equations, and who needs that? No one needs to know how to solve a quadratic equation unless you are going to go into a field that specifically requires it. Even those fields are becoming fewer and fewer because you just have a computer do it for you. You don’t really need to do it manually.

I think that’s a waste of students’ time. I think if you’re going to do Algebra, you should have just the basic algebra you need to do all these other practical applications like statistics and so forth.

LUKE: Yeah.

RICHARD: In fact, I think you can have a course on algebra that starts with algebra and ends with statistics.

LUKE: Yeah.

RICHARD: It isn’t just the one thing.

LUKE: Yeah.

RICHARD: Although I think you should have more than one year in high school for sure, because with just one year you will forget it. And it should be applied. They make you a human calculator: follow the procedures and get the answer. Instead it should be, “here’s an actual real world problem that can be solved with this”.

LUKE: Yeah.

RICHARD: Let’s show you how it’s solved. Even pick one where someone is lying but it doesn’t look like they are lying.

LUKE: Yep.

RICHARD: And if you understand the mathematics you will understand why you are being lied to. That’s the kind of thing I think would be much more useful to teach. It’s a shame that humanities people are afraid of this or don’t get it or give it lip service but when the chips are on the table they don’t really…

LUKE: Yeah.

RICHARD: They are not very accepting of the idea.

LUKE: So, what’s the value of Bayes’ Theorem and why does it matter so much for science and in particular for your field, history?

RICHARD: Oh, yes. Well it matters, and this is a point that I make in the book, that all valid reasoning is modeled by Bayes’ Theorem. Now most people, when they study logic, if they’ve studied logic, and not many people have, but – well, more and more people have, but…

When you study logic you’re usually taught the basic syllogism: you have premise, premise, conclusion, you know, major premise, minor premise and a conclusion follows. And they think that’s all logic is.

But you can easily show that those methods are flawed in most real-world cases because you don’t have absolute certainty, and that’s the thing: that the conclusion depends on both premises being true…

LUKE: Yeah…

RICHARD: But you don’t know that. You know that to a probability, and not everything has the same probability.

LUKE: Yeah.

RICHARD: So what happens when the major and minor premise both have a probability of being true and those probabilities aren’t the same? What then? How does the conclusion come out? What’s the probability of the conclusion?

LUKE: Yeah.

RICHARD: Standard syllogistic logic cannot give you the answer to that. And yet, that’s how almost all reasoning in the real world proceeds.

LUKE: Yeah.

RICHARD: Because you have different degrees of probability for different premises and you need to arrive at a conclusion. So, standard logic is useless in that sense. It’s useful conceptually, to get concepts across and understand where flaws are, but when it comes to real-world reasoning, like “should I believe this claim or not?”, standard syllogistic logic is essentially useless.
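
One way to make the problem concrete (my sketch, with assumed numbers): a deductively valid syllogism’s conclusion is at least as probable as the conjunction of its premises, and without an independence assumption that conjunction is only bounded, not fixed – which is exactly what standard syllogistic logic cannot tell you.

```python
def conjunction_bounds(p_major, p_minor):
    """Frechet bounds on P(both premises true), with no independence assumption."""
    lower = max(0.0, p_major + p_minor - 1)  # worst-case overlap of the premises
    upper = min(p_major, p_minor)            # best-case overlap
    return lower, upper

# Premises that are 90% and 80% probable: both hold with probability
# somewhere between 0.7 and 0.8, so a valid syllogism's conclusion
# from them is at least 70% probable.
lo, hi = conjunction_bounds(0.9, 0.8)
```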

The solution is Bayes’ theorem, and this was developed in the eighteenth century – the eighteenth and early nineteenth century – ironically shortly after David Hume wrote his Of Miracles, where he’s just an inch away from getting it.

Because his argument in that is almost Bayes’ theorem, it’s not quite. The logic of the argument is there. In fact all the flaws people have pointed out in Hume’s argument are exactly the flaws that are corrected by Bayes’ theorem.

LUKE: Right.

RICHARD: And so, Thomas Bayes came out with this shortly after that – I don’t know if he ever read Hume or anything – and gave a rigorous defense, not of Hume’s argument, but a rigorous defense and formal logical proof of the validity of the theorem, which is an important development. Because it means that the conclusion follows from the premises necessarily, and it’s completely valid.

And I make the point in the book that anything that goes against a logically valid theorem can’t itself be logically valid, and I show this logically; that’s the thing.

So, Bayes’ theorem becomes a very powerful tool to test things, and because it works this way, you can state the premises in terms of probability and it shows you how to calculate the probability of the conclusion once you’ve set the premises.
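
That “premises in, conclusion out” structure is easiest to see in the odds form of Bayes’ theorem, where posterior odds equal prior odds times the likelihood ratio. A minimal sketch with assumed numbers:

```python
# Odds form of Bayes' theorem: posterior odds = prior odds * likelihood ratio.
# The numbers below are purely illustrative assumptions.
def posterior_probability(prior_odds, likelihood_ratio):
    odds = prior_odds * likelihood_ratio
    return odds / (1 + odds)

# Evidence ten times likelier under the hypothesis than without it
# cannot rescue a hypothesis whose prior odds are 1 in 1,000:
p = posterior_probability(prior_odds=1 / 1000, likelihood_ratio=10)  # ~0.0099
```

Even a tenfold likelihood ratio leaves such a hypothesis at under 1%, which is the Humean point the theorem makes rigorous.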

LUKE: Yes.

RICHARD: That’s the value of Bayes’ theorem, and scientists are almost unanimous in agreement – there’s some disagreement as to how broadly applicable it is – but most scientists agree not only that Bayes’ theorem is a correct description of scientific method, but that it can probably replace in many cases the methods that are used. Recently – and I cite this in the book – an article came out pointing out that a lot of the statistical arguments in science journals – like for medicines and treatments and things like that – are logically invalid, and it points out the kinds of common errors that even expert scientists are making in statistics.

And it’s a celebrated article because there wasn’t anybody who could argue against it; they’re basically right.

LUKE: [laughs]

RICHARD: And one of the things they suggest – usually if you point out a problem, you’re expected to offer a solution – they didn’t defend a solution, but they said the solution is probably Bayes’ theorem, because Bayes’ theorem solves all of these problems. So, that’s the kind of dialog going on in the scientific community, but it needs to be going on inside any other community that claims to be making factual truth claims…

LUKE: Yeah.

RICHARD: … because those are the same kinds of things.

LUKE: Yeah.

RICHARD: You have probabilistic premises, you need to figure out how probable the conclusion is; it’s the exact same situation. It’s described by the exact same logic, so you need to understand it to go forward.

LUKE: Yeah. If what you’re doing is taking E.T. Jaynes to history, I mean, who’s going to argue with that? I don’t know, it seems like such an obvious thing, but I guess it does have so much resistance in the humanities, but that’s an argument you’re going to win, Richard [laughs] .

RICHARD: [laughs] I agree I’ll be right, I think it’s pretty clear; but winning is, of course, the question of persuading, really. So, how many people will be persuaded? I’m sure there will be a lot of resistant people. Well, just look in the history community, at post-modernists. Most historians are fairly certain that the post-modernists are full of shit, and that’s pretty much the consensus in history now. And yet there are still postmodernists, right?

There are still postmodernist historians; there are feminist post-modernist historians that are the laughing stock of most of the history community.

And yet they have tenured professorships and get academic press publications and things like that. So, I don’t think I’ll convince everyone, but the only people who won’t be convinced are people who are irrationally, dogmatically opposed to what I’m arguing.

LUKE: Richard, I want to ask you about the logical foundation for Bayes’ theorem. Cox and Jaynes and some other people have argued that Bayes’ theorem is really just an extension of the foundations of mathematical logic and can be validated. Do you think that project has succeeded?

RICHARD: Yes, oh yeah. They’re completely correct in that. I mean, one of the things that Jaynes accomplished was showing that syllogistic logic and Bayes’ theorem can be reduced to each other; the syllogism just becomes more complicated – in fact, it becomes complicated to exactly the point that it becomes Bayes’ theorem.

LUKE: Yeah.

RICHARD: So he’s showing that sound syllogistic logic using probabilistic premises is Bayes’ theorem. That’s his argument: that they’re one and the same, really.

LUKE: Right.

RICHARD: And he’s right about that. And I’m not sure there are many people who disagree with him about that. People who don’t know the thesis…

LUKE: Yeah.

RICHARD: But among the people who have read it, those who are familiar with it, I think most that I’ve encountered agree that he’s right about that.

LUKE: Yeah. Well, what would the argument be, then, for not applying Bayes’ theorem to reasoning about facts of history? Like, what’s the counter argument?

RICHARD: Right, what’s the blowback?

LUKE: [laughs]

RICHARD: No, there are a number of them actually, and they sound very plausible at first.

LUKE: Let’s hear them.

RICHARD: I can’t give you them all, you don’t have time.

LUKE: Sure.

RICHARD: That’s what the book is for, it addresses tons of them, and I’m sure your audience is listening now like: “Ah, what about this? No, it can’t work because…”

Well, what are the big ones? Let’s take one that even scientists debate among themselves in applying Bayes’ theorem: the problem of subjective probability estimates.

Obviously, in history especially – but even in science this is often the case – we don’t have hard, scientifically verified statistical data. We can’t take a scientific phone poll of ancient Roman populations, right? You don’t have that kind of data.

There are a few cases where you do, very few, and it’s very limited what we can learn from them. Most history doesn’t have access to that data. So you have to give a sort of subjective probability estimate, and people will say, “you’re just making shit up, or you’re guessing, or something”.

What I point out in the book and demonstrate in detail, is that: this is how we reason all the time anyway, so if this is a valid objection to Bayes’ theorem, it’s a valid objection to all of human reasoning.

LUKE: [laughs]

RICHARD: And what I also point out is that, yeah, you might not know the exact probability. You might not know that 87% of Romans thought X or Y or whatever it is.

LUKE: Right.

RICHARD: But you’ll know that it’s higher or lower than a certain number.

LUKE: Right.

RICHARD: You can say something like: you know for a fact that it’s not the case that half of all the bodies in the ancient world were robbed from their graves. Whatever the probability of grave robbery is – and it is a non-zero probability – it wasn’t 50%; it’s less than that.

LUKE: Right.

RICHARD: And there are a lot of things where you can say things like that: you might not know the exact probability, but you know where the limit is, and you can go a little further beyond that limit and be really sure. And I show how this is actually logically and mathematically valid reasoning, based on the statistical mathematics of confidence levels and error margins.

LUKE: Yeah.

RICHARD: And when you do that – you make the error margin as wide as you can reasonably believe it to be – the conclusion is what you must reasonably conclude it to be.

LUKE: Right.

RICHARD: And once you have your estimate, you can say, “it’s unreasonable to say the percentage is lower than this, therefore” – and this follows from direct deductive logic – “it’s unreasonable to say the conclusion’s probability is less than this.”
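
That a fortiori style of argument can be sketched as follows (my illustration; the numbers are assumptions, not Carrier’s): fix the uncertain quantity at the limit least favorable to your conclusion, and the output is a defensible floor rather than a guess.

```python
def posterior(prior, p_e_if_h, p_e_if_not_h):
    """P(H | E) by Bayes' Theorem."""
    num = p_e_if_h * prior
    return num / (num + p_e_if_not_h * (1 - prior))

# Suppose all we can defend is that P(E | not-H) is at most 0.4
# (well under the 50% ceiling in the grave-robbery example).
# Evaluating at that worst case gives a floor on the posterior:
floor = posterior(prior=0.5, p_e_if_h=1.0, p_e_if_not_h=0.4)   # ~0.714
# Any true value below 0.4 only raises the posterior further:
better = posterior(prior=0.5, p_e_if_h=1.0, p_e_if_not_h=0.2)
```

Because the posterior only rises as the disputed probability falls, the worst-case figure bounds the conclusion, which is the deductive step he describes.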

LUKE: Yeah.

RICHARD: And that can be a very useful argument. In fact it’s really the only proper, sound way to argue about historical claims. Of course the religious will be very annoyed by this because they don’t like probability. They don’t like the idea that you’re not certain about what happened, you can only say “It’s a probability to a certain level”. But that’s their problem.

LUKE: The uncertainty is built into the math, and the math is built to work with uncertainty.

RICHARD: Yeah, absolutely. The probability is ambiguous, yes. But so is the reasoning even without the math; with the math we can model the ambiguity, and model exactly where the limits of your knowledge are.

LUKE: Yeah.

RICHARD: So that’s how the response comes out. But the typical objection is like: “Well, how do you get – ”. That’s the one I always hear. That’s like the first question any time I give a talk on this.

LUKE: “How do you know what numbers to plug in to Bayes’ theorem?”

RICHARD: That’s right, that’s right, yeah, exactly. And usually once I start talking about examples, they realize it’s a lot easier than they think, and in fact it’s how they’ve already been reasoning; they just haven’t been using numbers, they’ve been using vague terms like “very probable.” Once you say that phrase, “very probable,” you’re talking mathematics; you can’t claim not to be. Anytime you say something is more probable than something else, that’s math; you’re doing math.

LUKE: Yeah.

RICHARD: And so what I’m saying is, stop pretending you’re not doing math ’cause to get your conclusion you’ve got to be valid, you’ve got to have a valid argument. So let’s acknowledge that you’re using math, let’s acknowledge that you’re doing it and do it right.

LUKE: Yep.

RICHARD: Basically.

LUKE: And do it rigorously.

RICHARD: Yeah, so that’s the correct description of what I’m talking about. And for that, in history we don’t need exact probabilities of anything. Even the subjective probabilities are really objective, because if you’re going to make an argument to another historian and say, “You should agree with me on this,” even if you don’t do it based on Bayes’ Theorem, you’re still saying “you should agree that these things are likely.”

If you do it with Bayes’ Theorem it’s exactly the same: “Well, you should agree that the probability is at least sixty-seven percent,” and if you agree with that, you have to agree with the conclusion, ’cause it follows necessarily by deductive logic.

And that’s the way I think historians should be arguing amongst each other. And a lot of disagreements could be resolved, even to the point where both sides agree we can’t really know what the answer is.

Like they might have both been confidently on both sides of an issue and then realize that “We really don’t know” or one or the other side will realize that the other is right. And that’s the only way progress can be made in the field and that’s how I propose questions in Jesus studies need to be resolved as well.

LUKE: Well, the Jaynes book published in 2003 was a real landmark in the development of Bayesian theory, and that’s very recent. Are there other historians or philosophers of history who have begun to do history with Bayes’ Theorem, or has it been difficult for you to draw on anybody else because there hasn’t been anybody else?

RICHARD: Yeah, well there are two answers to that, I guess there are two separate things. One is archaeologists have been making this argument already, they were there before me and so there are actually a couple of textbooks for archaeologists to apply Bayes’ Theorem to what they do.

LUKE: Wow.

RICHARD: And, archaeologists basically are historians, they just work with artifacts instead of texts.

LUKE: Yes.

RICHARD: But texts are artifacts and so it’s not a big leap to… they have particular mission objectives that might not be perfectly analogous but the underlying logic of it is the same. So the actual techniques they teach won’t be useful to a historian who isn’t doing archaeology but the justification for doing it is. So archaeology, there’s been some done there.

The other point is that C. Behan McCullagh wrote a book, “Justifying Historical Descriptions.”

And he’s written many other books defending historical method against postmodernism. Really good book. “Justifying Historical Descriptions” covers all the historical methods, all the epistemology of history that’s popular.

And so like, how do you justify historical descriptions? The title of the book.

LUKE: Yeah.

RICHARD: And he goes through different types of methods and shows the merits and demerits of each. And in the process of this he briefly discusses Bayes’ Theorem and says, “This has been proposed by a few people,” and he lists them and talks about them. But he says, “Well, it looks great, but it doesn’t really work because of this, that and the other thing.” And he has like three or four pages on this.

And then he moves on and settles on what he thinks is the most defensible which is the argument to the best explanation.

What I show in my book – and I sent this to him, and he said, “I’m very impressed that you did this because you’re totally right about it” – is that his preferred method, the argument to the best explanation, which is the best method he describes in the book and probably the best method I’ve seen any historian describe, this particular criteria-based argument, is completely reducible to Bayes’ Theorem; in fact it’s only valid when it’s Bayesian.

And I show this reduction. And C. Behan McCullagh wrote back and said, “Yeah, that’s very impressive. I’m very glad that you did that because it’s interesting to see…”

LUKE: Excellent.

RICHARD: “…that particular analysis.” And he was overjoyed to see that it could be reduced that way. He was still a little hesitant to adopt Bayes’ Theorem because of certain common questions – the subjective probabilities estimate thing and various others – but questions I’ve already answered in the book.

Anyway, that’s the background that I’ve been working with. So it is… a few historians have toyed with the idea, archaeologists are arguing for it but I’m the first one to really make a thorough, rigorous defense of the use of Bayes’ Theorem in history. No one else has done quite what I’m doing.

LUKE: That’s great. Now the approach to history that McCullagh advocates toward the end of “Justifying Historical Descriptions” – is that like an explanatory virtues account, argument to the best explanation, where whichever explanation has these virtues is most likely?

RICHARD: Yeah, which runs into the threshold problem: how many of the virtues does it need before it’s believable?

LUKE: Yes.

RICHARD: But yeah, it’s five criteria. The measure of explanatory power, explanatory scope, ad hoc-ness… I can’t remember them all off the top of my head.

LUKE: Yeah.

RICHARD: That’s three of them, to give you an example – three of the five criteria. And these are all statements about the evidence and the theory, about the degree of fit between them, such that the more a theory meets these criteria, the more likely it is to be true.

LUKE: Right.

RICHARD: And that, I think, is a logically valid statement, because the criteria do entail that: the more a theory meets them, the more likely it is to be true. The question is, “more likely” is not the same as “likely” – at what point does “more probable” become “probable”? And his presentation of the theory doesn’t even discuss this problem. So it doesn’t even acknowledge the problem exists.

And the reason he did it this way is because he said, “Well, in a case where one theory clearly exceeds all other theories on these criteria, then it’s probably true.” Which is valid.

Because, to a point, that’s correct: if something satisfies a lot of those criteria really well, and so much better than other theories, then it’s unlikely that any theory other than the one you’re defending is going to turn out to be true.

But that’s not quite enough, and not all problems are that simple. There’s not usually one theory that’s so far and away better than all the others that it’s obvious it’s right.

There are a lot of times where you have very plausible competing accounts of something that are very close, and might be close on different criteria, which is really problematic. Like, one might be really good on criteria one and two while another is really good on criteria three and four.

LUKE: Yeah.

RICHARD: Well now what? So Bayes’ Theorem is the way out of that.
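The deadlock described here, where each theory wins on different criteria, is what the odds form of Bayes’ Theorem resolves: treating each criterion as a likelihood ratio lets the factors multiply into a single verdict. A toy sketch with invented numbers:

```python
# Two rival hypotheses, four pieces of evidence ("criteria").
# Each likelihood ratio says how much better theory A predicts that
# piece of evidence than theory B. All numbers here are invented.
from math import prod

# LR > 1 favors theory A; LR < 1 favors theory B.
likelihood_ratios = [
    3.0,   # criterion 1: A explains this well, B poorly
    2.0,   # criterion 2: also favors A
    0.5,   # criterion 3: favors B
    0.4,   # criterion 4: favors B
]

prior_odds = 1.0                       # start with no preference
posterior_odds = prior_odds * prod(likelihood_ratios)

print(posterior_odds)                  # 1.2 -> A comes out slightly ahead
```

Where “A wins two criteria, B wins two” gives no verdict, the multiplied ratios give a definite, if weak, one; and they also show exactly how weak it is.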

LUKE: Yeah. Now I was speaking a little while ago with Lydia McGrew who is a Christian apologist and wrote an article on the Resurrection and defended the Resurrection by using Bayes’ Theorem. I was speaking with her and asked about inference to the best explanation, or argument to the best explanation, and she seems to agree with you that argument to the best explanation can be sort of an intuitive, semi non-mathematical way to get towards the truth but it’s got to come back to Bayes’ Theorem.

RICHARD: Yeah.

LUKE: And I agree with her even though I do talk a lot about argument to the best explanation but a lot of that is because nobody would read my blog if I just gave Bayes’ Theorem in every post.

RICHARD: Oh sure, yeah.

LUKE: But I do agree that it has to come back to Bayes’ Theorem.

RICHARD: It’s still a useful rule of thumb.

LUKE: Yeah.

RICHARD: It’s still useful to learn it, it’s just, it’s not enough really.

LUKE: Yeah.

RICHARD: It’s better to know that than to know neither, frankly, so…

LUKE: Yeah. But then, so she’s using a Bayesian method at least to try to argue for the Resurrection of Jesus and you very much disagree with that conclusion.

RICHARD: Yeah.

LUKE: So where is the disagreement then if you somewhat agree on the method but you’re coming to very different conclusions?

RICHARD: Well, I’m curious to know what she actually claimed in the interview, because the article in the Blackwell Companion to Natural Theology does not come to any conclusion. One of the things conspicuously missing is the prior probability, one of the key premises of the entire argument. All they talk about are two of the four premises of Bayes’ Theorem, and they make an argument from those two premises.

But you can’t reach a conclusion without answering the other two premises and they never do, which I find disturbing because it suggests… and they don’t really explain this very well, I mean they kind of hint at it.

But considering the kinds of people who are going to be reading this Companion, a lot of them are not going to be sophisticated enough to know that they’ve been hoodwinked like that. In fact, you and other people I’ve met say, “Yeah, she argues for the Resurrection” – and that’s not what she even argues at all.

LUKE: Right.

RICHARD: Was she deliberately misleading about that? And that’s kind of the problem I have with it. All they argue is that certain evidence makes the resurrection more probable.

LUKE: Right.

RICHARD: Again, that’s completely useless information. [laughs] We could go from 1% to 10%, that’s 10 times more probable. Yeah, that makes it more probable. It’s still 90% chance it’s false.

LUKE: Yeah.

RICHARD: So, it’s a useless argument. Why would they publish, in a companion to natural theology, an incomplete argument that doesn’t even argue for the resurrection? What’s the point of that? And they don’t even explain that in a closing paragraph, as you would in a science paper, for example. If you did this in a science journal, believe me, peer review would mandate a closing paragraph explaining that you haven’t actually proved your conclusion; you have just done one step of two. So that’s one problem.
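Carrier’s arithmetic can be made exact in odds form. With a hypothetical 1% prior, evidence ten times more likely under the hypothesis still leaves the hypothesis probably false:

```python
# Why "this evidence makes it more probable" is not a conclusion:
# the posterior depends on the prior, which the argument never supplies.

def update(prior, bayes_factor):
    """Posterior after evidence whose Bayes factor multiplies the prior odds."""
    odds = prior / (1 - prior)
    post_odds = odds * bayes_factor
    return post_odds / (1 + post_odds)

prior = 0.01          # hypothetical prior of 1%
bf = 10.0             # evidence 10x more likely if the hypothesis is true

post = update(prior, bf)
print(f"{post:.3f}")  # ~0.092 -- "more probable", yet ~91% chance it's false
```

So “this evidence raises the probability” says nothing by itself; without the prior and the other likelihood, the posterior could land anywhere.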

Another problem is that her facts are all wrong. Of course, that’s a common problem, and it doesn’t indict Bayes’ Theorem. Obviously, if you put the wrong facts into the theorem, you’re going to get the wrong conclusion. That’s a straightforward problem of all reasoning methods. You don’t attack the theorem, you attack the facts.

LUKE: Yeah.

RICHARD: And there are a lot of things in there where their facts are just plain wrong. My chapter on the resurrection is called “Why the Resurrection Is Unbelievable,” in The Christian Delusion, edited by John Loftus. When I wrote that, I didn’t want to bother citing the McGrews’ article. It’s such a crappy article.

LUKE: [laughs]

RICHARD: It’s a bad introduction of Bayes’ Theorem. It really is. Specifically because it doesn’t explain this.

LUKE: Yeah.

RICHARD: And it has all these fancy calculations and stuff that make it look very impressive. It seems to me like it is hoodwinking the public, in a way. But anyway, my chapter does address it; I specifically crafted the chapter that way. In fact, I even put in Bayesian arguments: I have it all in colloquial English, but then in the end notes I have a Bayesian version of what I said.

I talk about all the facts they do, and I hit many of the same points. Unlike them, I talk about the problem of prior probability and get that nailed down.

So, if you want to see a reply to that, read that chapter. It’s not like a “McGrew said this and this is wrong because…”

LUKE: Right, right.

RICHARD: But once you’ve read that chapter you’ll understand what is wrong with their chapter.

LUKE: Right. I think there is something a little slippery going on. Actually, in that same volume, the Blackwell Companion to Natural Theology, the Robin Collins chapter in particular, on the fine tuning argument…

RICHARD: Yeah.

LUKE: … is a very similar kind of thing. He is using Bayes’ Theorem to argue in defense of the fine tuning argument for the existence of a supernatural designer, but he doesn’t put in priors. The conclusion is just that it’s more probable in light of this evidence than it was before, which doesn’t give you a conclusion at all.

RICHARD: You know, I’m not sure that he commits that error necessarily. I do think he talks about relative degree of priors.

LUKE: Yeah. And I’m wondering if that’s the way the McGrews maybe were intending to argue.

RICHARD: Well, it can’t be for them, because prior probability is so fundamental to… The prior probability of a miracle is the most fundamental premise that has to be established for their theory. It’s just absolutely essential there. In the case of the fine tuning argument, you could make the argument – and this is the argument that Collins makes – from what’s called the consequent probability: the probability of the evidence given the different hypotheses is so huge…

The difference between them is so vast that it really doesn’t even matter what the prior probability would be. There’s no way the prior probability that there is no God is something like 99.999999…%.

He says it comes to some figure… I don’t know if he’s the one who computes it, but I’m sure he would agree: the odds of the universe existing if it wasn’t designed are like one in 10 to the 10 millionth power.

LUKE: Right.

RICHARD: If you know anything about math, that’s a fucking huge number. [laughter]

RICHARD: So, what are the odds that that’s the prior probability that God exists? I wouldn’t even agree that the probability that God exists is one in 10 to the 10 millionth power. I don’t think it’s that low. At least, if we were to argue it’s that low, you really need to make a good argument and show me that it’s that low. Unlike the McGrews, he’s assuming that the difference in evidential probabilities is so vast that you don’t need to nail down the prior probability to make his argument. The problem is that he misuses what is called background evidence. This is one of the common ways to misuse Bayes’ Theorem. You can’t ignore evidence. That’s one of the key things.

Evidence either has to go in the evidence box or in the background evidence box. Background evidence here is absolutely key. He knows this because he even admits it.

There is a point in there where you are reading through this dense, long, vast article and you’d miss it if you didn’t know what you were looking for.

Briefly he mentions: yes, if you grant a particular premise – one that these other authors have written about, who also applied Bayes’ theorem and got the exact opposite answer that he does on fine tuning… When I say exact opposite, I mean they argue that fine tuning counts against the existence of God.

LUKE: Right.

RICHARD: When in fact fine tuning makes the existence of God much less likely than what else we could have observed.

LUKE: Who is that?

RICHARD: Sober, and Ikeda and Jefferys. Ikeda and Jefferys have an article on that, and Sober did one before them. They both make the point that the probability of the evidence existing… Let’s say, for example – this is the premise, this is Collins’ premise, and he agrees with it: if there is no God, then the only way we could be here to observe anything is if fine tuning occurred. It doesn’t matter how improbable that is; there is no other way that we could be here.

According to Bayes’ theorem, the probability we would observe a fine tuned universe on the hypothesis that there is no God, is 100%.

LUKE: Yeah.

RICHARD: It’s not one in 10 to the 10 millionth power. It’s 100%. Which completely destroys his entire argument. He acknowledges this in one sentence: “Yes, if you include in the background evidence the fact that we exist as observers, then the probability is 100% and my argument is wrong.”

LUKE: [laughs]

RICHARD: But, and he does some legerdemain… and says, “I’m going to move the observers exist out of B, out of background evidence, and put it in E for the evidence”.

LUKE: Yeah, yeah.

RICHARD: The problem is, you can’t do that. If you do that, you still have in the background evidence cogito ergo sum, right? So you’d have to move that out of there too… You’re just assuming. The problem is his whole argument is premised on the assumption that observers don’t exist.

LUKE: Yeah.

RICHARD: It’s like I say: any conclusion that is based on the assumption that observers don’t exist is of no interest to observers, right? [laughs] His conclusion would be valid if there were some sort of God being sitting outside a huge conglomerate of universes, only some of which contain life. It would work for that being, but it’s not going to work for the people who are in those universes.

LUKE: Yeah.

RICHARD: So it just doesn’t make any sense. That’s the key problem with his argument. And it’s shocking that he makes that argument because he ought to know better. In fact, that one sentence snuck in there shows he knows better. So I don’t know what he thinks he’s up to.
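The objection Carrier attributes to Sober, Ikeda, and Jefferys reduces to one conditional probability. Under the shared premise that, absent God, observers exist only if the universe is fine tuned, conditioning on “observers exist” in the background evidence forces P(fine tuning | no God, B) = 1. A schematic sketch (the structure here is just that stated assumption, not a claim about physics):

```python
# E = "we observe a fine-tuned universe"
# B = background evidence. The key question: does B include "observers exist"?
# Shared premise (as Carrier reports it): without God, observers exist
# ONLY IF the universe is fine-tuned.

def p_e_given_no_god(background_includes_observers: bool,
                     p_fine_tuning_unconditional: float) -> float:
    if background_includes_observers:
        # Conditioning on observers existing: fine-tuning is guaranteed.
        return 1.0
    # Ignoring that we exist: fall back to the tiny unconditional chance.
    return p_fine_tuning_unconditional

tiny = 1e-30  # stand-in for Collins's "one in 10 to the 10 millionth power"

print(p_e_given_no_god(False, tiny))  # 1e-30 -- looks like strong evidence
print(p_e_given_no_god(True,  tiny))  # 1.0   -- the "evidence" vanishes
```

The whole dispute turns on which box “observers exist” goes into: put it in the background, where it belongs, and the likelihood under “no God” is 1, so fine tuning gives no Bayes factor in favor of design.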

LUKE: So, a lot of people are going to be fairly confused because most people aren’t familiar with Bayes’ theorem, and I’m certainly not an expert in Bayes’ theorem.

RICHARD: Sure.

LUKE: It seems like such a powerful tool. How long would it take somebody to be just moderately proficient in Bayes’ theorem do you think?

RICHARD: Oh. I actually taught a brief seminar on it at the Amherst Conference. I gave a talk on what I’m talking about here, about applying Bayes’ Theorem to Jesus studies, and I asked people if they wanted a little tutorial. There was a period of time where there was an hour free, and anybody who wanted to come could come and I’d give them a basic tutorial.

It took about an hour. Judging from that experience – from how much they absorbed in that time – maybe five hours of lecture and participation would be enough to be reasonably competent, at least to the point where you understand it well enough to identify when a Bayesian argument is being used correctly or incorrectly.

LUKE: Right.

RICHARD: And well enough to use basic forms of Bayes’ Theorem in any other situation. I’d guess about five hours of tutorial on that would be enough.

LUKE: And right now, what is going to be the best way for people to learn with the resources they have available, if there isn’t a workshop in their area?

RICHARD: Yeah, to a large extent that’s one of the objectives of my book. I explain and articulate Bayes’ Theorem in a way that I hope laymen can understand. I’ve had some laymen read it and say that they understand it a lot better having read my book. I don’t know of any defense of Bayes’ Theorem that is as extensive and as clear as mine that is aimed at non-scientists and non-specialists. I’ve seen a few attempts at it, but they are not as detailed and they leave too much misunderstood or under-explained.

LUKE: Yeah.

RICHARD: But, yeah, that’s the point of my book amongst other things. It uses history as an example.

LUKE: Excellent. When is it coming out? [laughs]

RICHARD: It’s done. I’m shopping for a publisher.

LUKE: All right.

RICHARD: So it’s difficult to find publishers with slots, because I’m under contractual obligation with my donors who funded this project to publish with an academic press. The problem with academic presses is they only have a few slots for books. They don’t even read the manuscript. They just look at the prospectus. They might not even read the abstract. They’ll just read the title and say, “That doesn’t fit our profile” or that…

LUKE: Wow.

RICHARD: … We’ve got these other books we want to do more,” and they fill the slots. So I’ve already gotten some letters saying essentially that, that “We don’t have any slots for it.” That’s the difficult point, is to get to the point where they’ll even look at the manuscript. So I’ve been going around to different publishers. I think part of the problem I’ve had so far is that I’ve been submitting to religious studies editors and humanities and history editors. They see something that’s about mathematics and immediately assume it doesn’t fit their profile of the books, that they’re supposed to go in the slots.

So I think they’re dismissing it outright, rather than saying, “You should have talked to a different editor; this went to the wrong desk.” They haven’t said that, but I suspect that’s the case, because humanities people freak out when they see any reference to math.

LUKE: [laughs]

RICHARD: It’s still sitting at one publisher now. I’m waiting to hear. But if they also reject it or don’t even want to look at it, I’m going to start hitting up mathematics editors. So like MIT Press because I know MIT Press has done things like this where they take a mathematical scientific thing and apply it to a humanities subject. So they seem keen on it. The only reason normally you wouldn’t do that is that there’s probably no one at MIT Press who’s an expert in Jesus studies, right? But maybe they don’t need to be because they’re going to send it out to peer review anyway. So it finally occurred to me, “Oh, OK. I should be sending this to mathematics people.”

So that’s what I’ll try next, if these humanities people keep saying it doesn’t fit their slot profile. [laughs]

LUKE: I’ll just ask this. So when you apply this method of Bayes’ theorem to the question of the historical Jesus, I won’t particularly ask you to give the whole justification for everything. But for you when you do it, what’s the general picture of who this Jesus guy was? And what happened in the events of his life that comes together with some probability for you?

RICHARD: None, really. [laughter]

That’s the point. In my next book – the next volume, on the historicity of Jesus Christ – I break the evidence down. First I have a section on background evidence, which is one important thing that historians oftentimes don’t mention, and yet it’s a crucial part of the premises of any such argument. So I’ll have a chapter – actually two chapters – just covering background evidence that, in my experience, even experts in Jesus studies often don’t know, even though it’s been published by experts in Jesus studies. It’s like they don’t even know their own literature oftentimes.

LUKE: What’s some of the background evidence?

RICHARD: Well, some of the evidence – the whole connection between the Inanna cycle and “The Ascension of Isaiah,” for example. There was this big debate on Doherty’s website years ago about how similar they were or weren’t. For the book, I went back and revisited the evidence. It’s just so clear that there is a connection between them, and it’s very important, because it’s basically a whole blueprint for a cosmic Jesus. It’s Doherty’s thesis right there in an ancient document, in fact.

Now it doesn’t decisively prove his thesis, but it’s a key piece of evidence that makes his theory more likely.

It’s background evidence that makes the probabilities better than they would be without it. And the thing is, most Jesus historians have never read “The Ascension of Isaiah,” and I don’t blame them, because it’s a massively long document. It’s incredibly dull.

LUKE: [laughs]

RICHARD: And it is considered apocryphal. Who cares? It’s not canonical, right?

LUKE: Right.

RICHARD: There’s a lot of that kind of thinking.

LUKE: [laughs] Right. It’s not canonical. [laughs]

RICHARD: Yeah. Well, I know.

LUKE: [laughs]

RICHARD: Even secular historians will give you that: “Oh, it’s not canonical, so it can’t be relevant.”

LUKE: [laughs] Oh, my God!

RICHARD: [laughs] Then you spend like one minute answering that question, and they go, “Well, OK. Yeah, you’re right.” [laughter]

RICHARD: But nonetheless, it’s like they’re trying to make their lives easier by not reading all these incredibly boring, tedious things.

LUKE: [laughs]

RICHARD: But one example, a bit off topic, is Origen. Origen wrote these commentaries on various gospels. They’re fragmentary – we don’t have the whole ones, and we don’t have all of them, which I find very suspicious. But anyway, these are the most mind-numbing, dull commentaries, where you’re reading them and this guy was freaking insane, and yet he was one of the greatest Christian intellectuals of the early centuries of the Christian [laughs] era. Anyway, apart from that aside, there are other examples.

One is that Doherty’s been criticized for his “demons of the air” thesis, this idea that everything on earth has a copy in heaven. In fact, it’s not even in heaven: there’s a copy in outer space, between the earth and the moon, actually up in the air, right?

One thing the critics don’t understand is ancient cosmology. How the ancients understood the world is very different from how we do.

On that view, there’s no vacuum in outer space. There were some philosophers who kept insisting there was, and of course they turned out to be right, but most people had arguments against it. The common man and most religious people didn’t buy the vacuum-of-space thing.

To them, the air, the atmosphere, extended all the way to the moon, which they did know by then was something like 200,000 miles away. The moon’s distance was accurately known then.

It was popularly known that it was at least tens of thousands, if not hundreds of thousands, of miles away. So they knew there was this vast realm of air, and all kinds of shit could be going on there that you wouldn’t see, because it’s just so huge.

This air goes to the moon, and then past the moon is another kind of air called “ether” that another kind of being breathes. That whole area is inhabited by angels and beings and stuff. Then there’s this idea of the demons – and “demon” just means divinity. It’s just the Greek word for a divinity. So when Greeks talk about demons, they just mean gods.

So the Christians just gave it the pejorative sense: they started shouting “Demons!” and the word became evil. But that was a Jewish and Christian spin on a pagan idea. The pagans didn’t consider demons necessarily evil. There might be evil demons, but there were also good ones.

So when you hear anything in the Bible where Christian writers are talking about demons, in that context you need to put the word “god” in there, “gods.”

So like, “People are possessed by demons.” No, they’re possessed by gods. And these are pagan gods, the exact same gods that the pagans were worshipping, praying to, burning incense to, and all that stuff – which opens your mind to how different this whole dialogue was back then from what you may have thought before.

But there’s this idea that everything on earth has a copy in heaven. So there are trees up in outer space, or soil, and so on. There are buildings and all kinds of things.

So Doherty’s idea is that it’s completely conceivable that when they say Jesus was crucified and buried, they mean he was crucified and buried in heaven, in this outer-space area: crucified by demons, buried up there, and resurrected up there.

Of course, if that were the case, then he’s not historical because the odds of that actually having happened are virtually nil.

And we know that if someone were claiming that – if we knew that the first preaching was that there’s this Jesus who came down to the lower reaches of outer space and was crucified by demons 10,000 miles above the [laughs] earth and then buried and resurrected up there – we would know that that’s a total myth.

They’re not talking about a historical Jesus. They’re talking about some vision, some god that they had a vision of.

Now, setting aside the question of whether that’s what happened in Christianity, Doherty’s been criticized over even the underlying background fact: that there were believed to be demons of the air, and that this is even plausible. I know a lot of Jesus historians who know nothing about it. Some of them profess not to know anything about it.

Some of them will insist that it’s absurd, that that wasn’t the belief, knowing nothing whatsoever about it. So one of the things I do in my book is provide extensive documentation – not just from primary evidence but also from other scholars – demonstrating that, yes, this was a widespread view. It’s clearly a fundamental view in early Christianity.

So that in itself doesn’t prove that Jesus was crucified and buried in heaven, but it does mean now that it’s at least on the table as a possible theory. Now it’s a question of testing one theory against the other.

LUKE: Right.

RICHARD: So that’s an example of the background evidence. Then for the remaining chapters, I take the Bible. And I do agree that there’s no apocryphal literature we can confidently date earlier than the texts of the New Testament. Not that there weren’t texts before then, or that some of them might not predate it, but we can’t establish that. So I do rule out most apocryphal stuff as being too late. I do have one chapter on extra-biblical evidence, where I cover it all in general. But really, when you look at the extra-biblical evidence – and even Jesus scholars will admit this – the evidence is pretty shitty when it comes down to it.

You have things like the Josephus reference, the Testimonium Flavianum, which everybody agrees has been forged or tampered with.

LUKE: Yeah.

RICHARD: Even if that were completely 100 per cent authentic, it’s still 96 AD. You can’t establish independence from the Gospels. Like maybe he just read a Gospel and said, “Oh wow, I’d better write about this.” He gives no indication there that he knew any other information or had any other source of information. So it’s useless information as far as history goes. If someone makes something up and then someone else copies it, that doesn’t make it more likely to be true.

LUKE: [chuckles]

RICHARD: So the external evidence is crap. Most Jesus historians won’t put it that way, but if you press them on it, you’ll get essentially that answer; they’ll just try to make it sound nicer. So that leaves the New Testament. I break it down into one chapter on the Epistles, then the Gospels, then the book of Acts, because each one is different: the kinds of evidence are treated differently in each one.

I show that the evidence in each one, and all of it taken together, is more probable on the theory that Jesus didn’t exist than on the theory that he did. So that gives you your consequent probabilities in a Bayesian argument.

Then I have one chapter before all this establishing prior probability. I show certain characteristics of the Jesus story – even from very early on – are more typically characteristics of mythical people than historical ones. So the prior probability already favors his non-existence. I give a rigorous demonstration of that. I don’t just presume that. So I have a chapter on that.

So if the prior probability favors myth even by a little bit – it doesn’t matter how much – and all the consequent probabilities favor myth, then by necessary deductive logic, myth is more probable than historicity.
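The deductive step here is just the monotonicity of the odds form of Bayes’ Theorem: if the prior odds favor myth even slightly and every likelihood ratio also favors myth, the posterior odds must favor myth. The numbers below are invented for illustration:

```python
# Odds of "myth" against "historicity". Any value > 1 favors myth.
from math import prod

prior_odds = 1.1                       # prior favors myth "even by a little bit"
likelihood_ratios = [1.3, 1.2, 1.5]    # e.g. Epistles, Gospels, Acts (invented)

posterior_odds = prior_odds * prod(likelihood_ratios)

# Every factor exceeds 1, so the product must exceed 1:
assert prior_odds > 1 and all(r > 1 for r in likelihood_ratios)
print(posterior_odds > 1)              # True -- myth comes out more probable
```

The conclusion follows however small each factor's margin is; what the margins control is only how strongly the posterior favors myth, not whether it does.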

LUKE: Yeah.

RICHARD: That’s the structure of the argument. Then, of course, the debates over method hopefully will already have begun with the first volume, and hopefully will be resolved before or by the second volume. That leaves the debates over the facts and the estimates of probability. So I expect debate, and that’s fine. But we need to have those debates. That’s the point of my project. It does no one any service to dismiss the argument as not even worth considering.

LUKE: Yeah.

RICHARD: Which is done even by professional historians. I even explain in the first book that I get it, I understand why. One of my examples is this problem I have with pyramidiots, ever since I wrote an article in Skeptical Inquirer, ages and ages ago, about Fox News’ really shitty, deceitful “news program” about aliens in the pyramids. Ever since I wrote that…

I just wrote it as a journalist: I called the people who were interviewed, did an analysis of the show, and quoted what they had said about it.

I did research the people who were on and read their writings, but I reported as a journalist, not as an Egyptologist. Nevertheless, because I did that… I don’t know why these kooks are reading Skeptical Inquirer. I guess because they’re pursuing aliens in the pyramids.

I get kooks who have some weird, fucked up, bizarre crackpot theory about the pyramids and want my opinion on it. “Please, won’t you read my 1,000-page manuscript?”

LUKE: [laughing]

RICHARD: Or “This completely refutes you.” But it’s usually not even that. It’s usually just, “I have really good evidence here. You really should look at this. It’s much better than the Fox thing.”

LUKE: [laughing] Yeah.

RICHARD: So I get that a lot. There’s no way. I don’t even have time to read these things; it’s not even worth my time to look at them, so I don’t even bother. I tell them, “You get even one tiny piece of this published in a peer-reviewed journal, or with an academic press (it could be a book), and then come back and talk to me.” That’s completely legitimate and correct behavior. That’s how historians should behave. And that’s the criticism I’ve made before about the pro-myth community: that they’re outside of academia.

They act like outsiders and mavericks and accuse historians of all these awful things, then defend their theories in a fairly sloppy, often inaccurate way.

So when a historian comes along, picks up one of these books, and looks through it, he can tell immediately that it’s inadequate, so sub-par that it’s not worth his time to continue reading.

LUKE: Yeah.

RICHARD: The fallacy, of course, is assuming that if it’s badly argued, it’s therefore false. But what else can they do? They don’t know that. So in a way, a lot of this amateur myth-defense stuff is hurting their own cause. They’re shooting themselves in the foot, in a sense.

They’re essentially guaranteeing that historians aren’t going to take them seriously. In fact, they’re making it worse, because historians will read their stuff, not take it seriously, and then not listen to anything else.

Like Doherty. I think Doherty’s stuff is pretty good. It’s just short of Ph.D. quality. Not quite Ph.D. quality, but it’s up there. Certainly I know PhDs in history who have written books that are worse than his: methodologically and factually. So, even at worst, he’s in good company, right?

But people won’t even read his book because they’ve read some other book – I won’t name names – that they see as complete crap. So they assume it all is.

LUKE: Yeah.

RICHARD: The same thing has happened with Wells. Wells wrote many books on this subject, but he’s not an expert in ancient history, so he even got things wrong that weren’t relevant to the debate. He just wasn’t making the best case for it; he was making the wrong case, in many ways. So, when you read something like Robert Van Voorst’s “Evidence for Jesus”. It’s like “Evidence for Jesus Outside the New Testament” or something like that; I can’t remember the exact title. He has a few pages on the myth claim.

It’s mostly focused on Wells and a couple of mythers from the 1930s or something. And the 1930s is so freaking obsolete that… why are you even paying attention to that?

LUKE: Yeah.

RICHARD: There was this whole debate back in the ’20s and ’30s (it may have started even earlier), where a lot of academics came out arguing the myth theory. They were right about a lot of things, but they were wrong about a lot of things too. So the other historians looked at it, showed they were wrong about the things they were wrong about, and said, “Oh, so it’s crap. We can dismiss it.” Then they didn’t even really pay attention to the things they were getting right.

So there was this assumption that was built up and passed on and on. So historians today assume, “Oh, that was refuted 80 years ago.” Well, no: it wasn’t technically even refuted then. A lot of problems with it were exposed, but that doesn’t constitute a complete refutation.

But that stuff isn’t even the myth theory being defended today. It’s not even the same theory. So it’s a double fallacy. Van Voorst does that: he gives “these are the reasons why.” I analyze that in my book, showing that the reasons he gives aren’t even strong reasons as they stand, much less applicable to current myth theory.

That’s the problem I run into. But I keep saying this because I think they’re not intrinsically wrong to be doing this, and because someone in the historical community has long been obligated to do what I’m doing.

LUKE: Yeah.

RICHARD: Which is sweep away all the bullshit and error and stuff and get something rigorous and factually correct. Then start the argument from there.

LUKE: Yeah.

RICHARD: And update it, modernize it. So, no one else is doing it. So my fans got together to pay me to do it. And I’m happy to do it. So, that’s where I am.

LUKE: You’re saying it’s hard to blame historians for not taking the Jesus myth theory seriously when all they’ve had to read are poorly argued Jesus myth theories, not in the peer-reviewed literature, not from academic presses.

RICHARD: Yeah, you’re totally right. Of course, it’s one of those cases where you can’t really have a peer-reviewed article on the subject, because anything you say on it is going to open ten million questions. So any peer reviewer will say, “Well, you don’t address this, this, and the other.” And you’ll point out, “There’s no way I could do that in under 10,000 words.”

LUKE: It’s hard to get started.

RICHARD: Yeah, yeah. There’s this sort of contradiction in publication methodology: no one will publish an article longer than 10,000 words, yet some of these theories can’t be argued in less than 10,000 words. There are a few exceptions, a few journals that will publish long pieces, but usually you have to be a prestigious scholar already, and it’s usually in Europe where these things appear. So that’s the conundrum.

But there is a solution. This has been recognized not just on this subject but on many subjects. The solution is the academic monograph. There are academic presses that will publish a book for exactly this purpose: this is too long to be a peer-reviewed article, so let’s publish it as a book.

LUKE: Yeah.

RICHARD: So that’s what I’m pursuing. Hopefully it will make it. Any academic press I get into will have a peer review process, just like for an article, so it is the same as writing and publishing a really long article. That’s what’s going to happen at this point.

LUKE: Well, I can’t wait. Richard, thanks very much for your time.

RICHARD: Sure. Glad to be here.


{ 92 comments }

Eric January 2, 2011 at 11:14 pm

Wow! It’s exciting to finally see a legitimate mythicist peer-reviewed publication! I especially like how Carrier commented on other mythicists and basically defended academia and peer-reviewed publications. Hopefully this will help dispel the idea that the mythicist hypothesis is the “creationism” of historical Jesus studies.


AlbertA January 3, 2011 at 12:33 am

Looking forward to listening to this but Carrier really needs to grow a beard or something. Guy looks 11 years old in that photo :)


Patrick January 3, 2011 at 4:55 am

The prior probability of the miracle accounts in the New Testament can be raised by well-documented miracle accounts from more recent times. Such accounts can be found in the following biography of the Lutheran theologian and pastor Johann Christoph Blumhardt (http://en.wikipedia.org/wiki/Johann_Blumhardt):

Dieter Ising, Johann Christoph Blumhardt: Life and Work: A New Biography, Translated by Monty Ledford, Eugene 2009.


stevenJ January 3, 2011 at 10:49 am

I have problems with Carrier’s attempts to “disprove” the argument from embarrassment by citing the example of the self-mutilation rites of the cult of Attis as somehow of a similar ilk to the Gospels’ claim that women were the first to discover the empty tomb of Jesus. Ramming these two things together is irrelevant, arbitrary and malevolent, an example of malevolent equivalence in fact. Hopefully his book provides more believable objections than this.

1. The cult of Attis was a Near Eastern belief that spread westwards into Italy and Rome. Like other such cults, it had a mystical and orgiastic basis, celebrants believing that through ritual self-mutilation they would attain union with the god and immortality beyond the grave. That traditional Romans would find such beliefs outrageous and embarrassing is irrelevant, because those beliefs arose in the context of a culture alien to their own. No-one “invented” them (as if there was some committee of priests who met and came up with their beliefs as a deliberate policy to affront the Romans), they developed over a period of hundreds of years, a process that has not as yet been fully deciphered by rational inquiry (unless one ascribes to Carrier’s caricature of the development by cabal of human religiosity).

2. Those beliefs are the central tenets to the religion, just as the resurrection of Jesus is the central tenet to Christianity. This is where any real comparison should be made, if any; does participating in ritual castration place one in mystical union with Attis and attainment of immortality vs did Jesus rise from the dead, defeating such and all the other sins of humanity?

3. The account of the women discovering the empty tomb in the Gospels is not at all of the same order, a central strategic belief to Christianity; it is just a surrounding, structural factual claim driving the narrative and giving it verisimilitude. It is just interesting that, in the context of all the miraculous claims of the Gospels, why wouldn’t the writers (if they really were making the lot up) have endeavored to provide as many robust “real” facts to enwrap their story in so as to make it as believable as possible to the curious? The testimony of men would have been a more believable way of making the story more acceptable.

4. Of course, according to Carrier, what happened is that the Central Committee of the Early Christian Party, meeting in secret, devised the Gospel narratives down to their most exact details, one of which was placing women as discoverers of the empty tomb, because:

(dramatic recreation of supposed secret discussions of the Committee)

“Now, as to how His tomb should be found empty, I suppose we should provide some followers accompanied by curious non-believers as the most credible way of portraying the matter”

“No, no, I have a better idea. We know that in society women are considered second-class citizens; everyone regards their rational faculties as small and unreliable, and anything they testify to is regarded with skepticism and doubt. Let’s make a number of them the discoverers of the empty tomb. This will operate as a deep counter-counter tactic, for, just think: our enemies will jump on it and claim, how can anyone reasonable (i.e. men) take the word of a group of superstitious, emotional, scatterbrained women? The whole thing is discredited just by that fact. But it will operate at a deeper, more cunning level, because it will instill a gnawing doubt that if we were really making the whole thing up (as we are), why wouldn’t we claim that men discovered the empty tomb to make the story more credible? So therefore (the fools will reason), the story just might be true!”

“Oh yes, excellent suggestion. What does everybody think….etc etc”


Eric January 3, 2011 at 12:27 pm

One thing that confuses me about the whole Empty Tomb story is that burials seem to have been rare in crucifixions:
“The goal of Roman crucifixion was not just to kill the criminal, but also to mutilate and dishonour the body of the condemned. In ancient tradition, an honourable death required burial; leaving a body on the cross, so as to mutilate it and prevent its burial, was a grave dishonour.”
” The Romans often broke the prisoner’s legs to hasten death and usually forbade burial.”
Wikipedia article on Crucifixion
With some accounts of Pontius Pilate crucifying up to 100 people per day, it seems more and more unlikely there ever actually was a burial of Jesus with a known tomb. Any thoughts?


Yair January 3, 2011 at 1:23 pm

Well, what are the big ones? Let’s take one that scientists debate even among themselves in applying Bayes’ theorem: the problem of subjective probability estimates.

Luke – I hope you don’t consider this pittance one of the “big ones”? The “big one” is the attribution of probabilities to beliefs; this is what most major objections come down to: the idea that you can’t really assign probabilities. Because the structure of degrees of belief over theory space is not well-ordered, because you can’t treat ignorance properly, because you can’t even define a probability distribution/density that says things like “it could equally be any number between 0 and 1,” and so on. And then there are the more technical but still important objections, like the fact that once probabilistic evidence is considered, the order in which evidence is considered becomes problematic. And so on. The problem of the priors is still an important one, but not because it implies subjectivism; it is big because it implies there is more to rationality than Bayesian inference.

I found the following illuminating, and the above is largely drawn from it:
http://www.pitt.edu/~jdnorton/papers/Challenges.pdf
(I don’t agree with everything Norton says, but I do think that he raises excellent points.)

I’m not sure that these problems are particularly relevant for history, and indeed they may be largely irrelevant to many cases. But I certainly consider Bayesianism a partial truth at best, and the above problems do seem to me to indicate that its validity is contextual.

Other than that, I find the above tantalizing yet with hints of disappointment. Anyone who says that under atheism the probability of fine tuning is 100%, for example, has stepped outside the scientific definition of “fine tuning” and demonstrated a profound lack of understanding. Likewise, while it is true that the Jesus story as a whole is likely a myth (i.e. not true, with large parts drawn from other mythologies or theologies), that is not the offered alternative: the historicists argue that a real person was probably the source of the myths, not that no mythical elements were inserted.


Patrick (not the Christian one above) January 3, 2011 at 3:29 pm

Yair wrote: “the historicists argue that a real person was probably the source of the myths, not that no mythical elements were inserted.”

As far as I can tell, the historicists have no idea what they believe or argue. Many have a theory of a historical Jesus, but few seem to have a theory of what it means, in general, for Jesus to have been historical.

For example, if there were two people who both provided the source of many of the myths surrounding Jesus, would that count as a historicist position or a mythicist one? I’d lean towards historicist, but they haven’t got their act together enough for me to say they’d agree.


Haecceitas January 3, 2011 at 4:03 pm

“One thing that confuses me about the whole Empty Tomb story is that burials seem to have been rare in crucifixions”

As far as I know, they weren’t rare with Jews during times other than open rebellion. Non-burial would be especially offensive for Jews with their strict purity rules. (Mass crucifixion without burial was thus an effective deterrent during a time of rebellion.)


Tony Hoffman January 3, 2011 at 5:58 pm

Thanks for the interview.

I think apologists should wise up to the fact that whoever they most reliably impugn is the source of the best material against them. If Carrier and Dawkins had a nickel for every gratuitous, ignorant dismissal, they’d be rivaling Buffett for what to do with all their atheist dollars.


Eric January 3, 2011 at 7:16 pm

Haecceitas –
As far as I know, they weren’t rare with Jews during times other than open rebellion. Non-burial would be especially offensive for Jews with their strict purity rules. (Mass crucifixion without burial was thus an effective deterrent during a time of rebellion.)

Do you have a source for this? I know I used Wikipedia, but it was an early check. I was wondering what the story is on this because it seems to be extremely important in regards to the issue of the empty tomb. In fact, the article does mention that only one buried body has ever been found where cause of death was crucifixion.


Robert Gressis January 3, 2011 at 8:40 pm

I might be misunderstanding what Carrier says, but when he says

“RICHARD: … But considering the fact that the kinds of people who are going to be reading this Companion, a lot of them are not going to be sophisticated enough to know that they’ve been hoodwinked like that… in fact, the mere fact that you and other people, I’ve met other people who say, ‘Yeah, she argues for the Resurrection,’ say, that’s not what she even argues at all.
“LUKE: Right.
“RICHARD: Was she deliberately misleading about that? And that’s kind of the problem I have with it.”

Notice that near the beginning of their article, they write:

“At the outset, we need to make it clear what argument we are making and how we propose to do it. The phrase “the argument from miracles” implies that this is an argument to some other conclusion, and that conclusion is most naturally understood to be theism (T), the existence of a God at least roughly similar to the one believed in by Jews and Christians.
It is, however, not our purpose to argue that the probability of T is high. Nor do we propose to argue that the probability of Christianity (C ) is high. Nor, despite the plural ‘miracles,’ do we propose to discuss more than one putative miracle. We intend to focus on a single claim for a miraculous event – the bodily resurrection of Jesus of Nazareth circa A.D. 33 (R). We shall argue that there is significant positive evidence for R, evidence that cannot be ignored and that must be taken into account in any evaluation of the total evidence for Christianity and for theism.

Also, when Carrier says:

“RICHARD: All they argue is that certain evidence makes the resurrection more probable.
“LUKE: Right.
“RICHARD: Again, that’s completely useless information. [laughs] We could go from 1% to 10%, that’s 10 times more probable. Yeah, that makes it more probable. It’s still 90% chance it’s false.
“LUKE: Yeah.
“RICHARD: So, it’s a useless argument. Why would they publish in a companion to natural theology an incomplete argument that doesn’t even argue for the resurrection?”

I find myself a bit puzzled, for elsewhere in the article the McGrews write:

“We shall argue that there is significant positive evidence for R, evidence that cannot be ignored and that must be taken into account in any evaluation of the total evidence for Christianity and for theism.”

So they’re not just saying that there is some evidence that raises the probability of R, though they are saying that. It appears to me that they’re arguing for this conclusion: bracketing philosophical considerations about theism and miracles, and bracketing certain issues of textual interpretation and archeology, and instead focusing on a range of relevant historical considerations, the probability that the resurrection happened is very high. I get this impression from this part of their article:

“Even as we focus on the resurrection of Jesus, our aim is limited. To show that the probability of R given all evidence relevant to it is high would require us to examine other evidence bearing on the existence of God, since such other evidence – both positive and negative – is indirectly relevant to the occurrence of the resurrection. Examining every piece of data relevant to R more directly – including, for example, the many issues in textual scholarship and archeology which we shall discuss only briefly – would require many volumes. Our intent, rather, is to examine a small set of salient public facts that strongly support R. The historical facts in question are, we believe, those most pertinent to the argument. Our aim is to show that this evidence, taken cumulatively, provides a strong argument of the sort Richard Swinburne calls “C-inductive” – that is, whether or not P(R) is greater than some specified value such as .5 or .9 given all evidence, this evidence itself heavily favors R over ~R.”

Personally, I think it’s ok for them to bracket this evidence. If they had to deal adequately with the case for and against miracles (which Tim McGrew has dealt with in the SEP), as well as the case for and against theism, as well as with all the textual interpretation and archeological considerations, then they would have had to have written a much longer article than their already very long article. Still, if they’re right in what they say, then I think what they’ve done is far from “useless”. If they’re right about what they say, then that’s very important, in my estimation, though I will admit that I know very little of the textual and archeological issues they don’t deal with in their article.


Patrick January 3, 2011 at 9:03 pm

Robert – just from what you quoted, your interpretation is incorrect.

“We shall argue that there is significant positive evidence for R, evidence that cannot be ignored and that must be taken into account in any evaluation of the total evidence for Christianity and for theism.”

That’s what Richard Carrier was saying.

You appear to have fallen prey to the classic equivocation on the definition of “evidence” that seems to crop up whenever apologists start using Bayes. To normal people, and as far as I can tell to you given what you took from this quote, to say that something has “significant positive evidence” means that, as you put it, “the probability… is very high.” But that’s not what evidence means in Bayes. In Bayes, evidence just means a reason to raise the probability above what it would be absent the respective argument or piece of information. In Bayes, you can have “significant positive evidence” of a proposition that is completely false.

That’s what the final quote from McGrew means. She and Carrier agree.
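A toy Bayesian update makes Patrick’s distinction concrete (the prior and Bayes factor here are invented for illustration, echoing the 1%-to-10% example Carrier gives above): evidence with a strong likelihood ratio still leaves a proposition improbable when the prior is low enough.

```python
def update(prior, bayes_factor):
    """One Bayesian update in odds form; returns the posterior probability."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * bayes_factor
    return posterior_odds / (1 + posterior_odds)

# A proposition that starts out very improbable...
prior = 0.01
# ...met with "significant positive evidence": a 10:1 likelihood ratio.
posterior = update(prior, 10)

print(f"posterior: {posterior:.3f}")  # still well below 0.5
```

So in the Bayesian sense there is “significant positive evidence” for the proposition, even though it remains far more likely false than true.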


Steven Carr January 3, 2011 at 9:19 pm

CARRIER
I’m just following up on what other scholars have done, demonstrating that the current methodology is bankrupt, it’s invalid. It’s these so-called “criteria of historicity” that they’re using.

LUKE: You’re saying it’s hard to blame historians for not taking the Jesus myth theory seriously when all they’ve had to read are poorly argued Jesus myth theories.

CARR
If historians are good at spotting ‘poorly argued’ theories, why does peer-review allow so many articles through that use methods that are ‘bankrupt’ ‘invalid’, to quote Carrier?

CARRIER
The one chapter I have refuting all the historicity criteria is like the deconstructive part of the whole project, because once you see that their methods are wrong they don’t have any valid basis….

CARRIER
That’s the criticism I’ve made before about the pro-myth community: that they’re outside of academia.

They act like outsiders and mavericks and accuse historians of all these awful things.

CARR
What sort of ‘awful things’? Pointing out that every single criterion used is wrong?

CARRIER
…the historians today assume that “Oh, that (the myth theory) was refuted 80 years ago.”

CARR
What qualifies somebody as an historian? Is it an ability to use these 31 or so criteria, all of which are logically invalid?

To choose a name, Bart Ehrman has a BA from Wheaton (an evangelical college) and ‘At Princeton I did both a master of divinity degree—training to be a minister—and, eventually, a Ph.D. in New Testament studies.’

Does training to be a minister, or a Ph.D in ‘New Testament Studies’, qualify you as a ‘professional historian’, in the way that studying the Iliad would qualify you as a professional historian?

Crossan is an expert on the criterion of double dissimilarity, criterion of embarrassment etc etc – all the criteria that Carrier shows are logically invalid and bankrupt.

What qualifies somebody as a ‘professional historian’, so that people like Bart Ehrman and JD Crossan are professional historians and Earl Doherty isn’t?


Luke Muehlhauser January 3, 2011 at 9:52 pm

Robert,

Thanks for that.


Robert Gressis January 3, 2011 at 9:53 pm

Patrick,

Yeah, I think you’re right. Basically, what I was trying to say is that there are four kinds of evidence for belief in the resurrection: philosophical, textual, archeological, and historical. If the McGrews can establish that the historical evidence raise the probability of R, they will have done a lot, because that’s one big chunk of the evidence, so they not only give an argument that raises the probability of R, but one that raises it fairly substantially. This was why I quoted this line:

“We shall argue that there is significant positive evidence for R, evidence that cannot be ignored and that must be taken into account in any evaluation of the total evidence for Christianity and for theism.”

I took the phrase “significant positive evidence” to mean not just that R’s probability is raised, but that it is raised substantially, because the historical evidence is a big chunk of the evidence.


Patrick January 3, 2011 at 10:28 pm

“I took the phrase “significant positive evidence” to mean not just that R’s probability is raised, but that it is raised substantially, because the historical evidence is a big chunk of the evidence.”

Well, I can only comment on what’s been quoted so far. I haven’t read the actual paper. So I can say that Carrier’s description of the paper matches what you quoted.

That being said, if Carrier is right and they didn’t actually give priors, then no. You can’t say that they substantially raised the probability of the resurrection. Logic simply doesn’t permit that. Without at least some consideration of priors, the best you can say is that the increase is greater than 0%.

Imagine that you’re a detective, and you are investigating a murder. You learn that Joe Somebody owns a gun of the same model as the one that killed the victim.

With this information you can say that you have increased the likelihood that Joe Somebody is the killer. But you can’t say how much you increased it, or what it’s been increased to, without some other information: how likely was it before now that Joe was the killer? How rare are guns of this model?

1. Joe is your top suspect, and the gun is very rare. You’ve probably solved the case.

2. You previously thought Joe was probably innocent, but only 3 guns of this model were ever produced, and the other two are destroyed. You’ve dramatically increased the probability that Joe is the killer.

3. Joe lives on the other end of the country and has no connection to the case. You only found him in a gun registry, because the gun in question is very rare. You’ve slightly increased the probability, but probably not by much.

4. Joe was your prime suspect, but the gun is a dime a dozen. You’ve increased the probability, and yes, it’s high, but the gun didn’t contribute much.

5. Joe lives on the other end of the country, and the gun is a dime a dozen. You’ve increased the probability, but by such a small amount that no one cares.

6. Joe had motive to kill the victim, and the gun is incredibly rare, but hospital records say that Joe died 3 days before the victim was shot, and after you found the gun you had him exhumed and verified that Joe is indeed dead and buried. You’ve still increased the probability! Maybe some really weird scenario happened, like.. Joe faked his own death to avoid prosecution, but his accomplice poisoned him and put him back in the hospital after the murder was committed, and this is all going to make a really great TV show someday when you sell the rights! But probably not, and probably the gun contributes virtually nothing to the probability of Joe’s guilt.

See the problem? In every case Bayesian evidence of Joe’s guilt is provided, but whether that evidence is important can’t be evaluated without context.

Give me enough facts about you, and I’ll actually do some math and provide “significant evidence” that you murdered Abraham Lincoln. I’ll just bracket some facts… apply Bayes but never give priors… no big deal, right?
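A few of the detective scenarios above can be run through Bayes’ theorem directly (the priors and gun-ownership rates below are invented for illustration): the matching gun is the same kind of evidence in each case, yet the posteriors differ wildly depending on the prior and on how rare the gun is.

```python
def posterior_guilt(prior, p_match_if_guilty, p_match_if_innocent):
    """P(guilty | matching gun), via Bayes' theorem."""
    numerator = p_match_if_guilty * prior
    denominator = numerator + p_match_if_innocent * (1 - prior)
    return numerator / denominator

# Assume the killer certainly owned a matching gun (likelihood 1.0);
# an innocent person owns one at the gun's background rate.
scenarios = {
    "top suspect, rare gun":       (0.50, 0.001),
    "thought innocent, rare gun":  (0.01, 0.001),
    "no connection, common gun":   (1e-6, 0.30),
}

for name, (prior, background_rate) in scenarios.items():
    p = posterior_guilt(prior, 1.0, background_rate)
    print(f"{name}: {p:.6f}")
```

The first scenario ends near certainty, the second jumps dramatically, and the third barely moves: identical evidence, very different conclusions, exactly because the priors differ.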


Haecceitas January 4, 2011 at 12:23 am

“Do you have a source for this? I know I used Wikipedia, but it was an early check. I was wondering what the story is on this because it seems to be extremely important in regards to the issue of the empty tomb. “

One online source that I can think of would be http://www.craigaevans.com/Burial_Traditions.pdf but I’ve read something similar elsewhere too. Might have been in Byron R. McCane’s book “Roll Back the Stone” though I’m not quite sure.

“In fact, the article does mention that only one buried body has ever been found where cause of death was crucifixion.”

Yeah, but it is questionable whether the remains of a crucifixion victim would be identifiable as such under normal circumstances. In the case of Yehohanan (probably the one referred to in the Wikipedia article), one nail was left intact in the remains with a piece of wood, because it had hit a knot and bent so that removal was too difficult.


Haecceitas January 4, 2011 at 12:33 am

I have to wonder what exactly Carrier means when he says that the criteria are “logically invalid”, since they are supposed to be inductive rather than deductive. Nothing that I’ve heard him say comes even near to demonstrating that they can’t be used inductively. Though obviously I do agree with him that one can’t assume that “this piece of evidence raises the probability that hypothesis H is true” means “this piece of evidence makes the truth of H probable”. There may be some scholars who confuse the two, but I wouldn’t think that there are very many.


Patrick (the Christian one) January 4, 2011 at 2:39 am

As for the statements against interest, the apostle Paul had every reason not to believe in the Resurrection if there was even the slightest possibility that it could not have happened. Not only was his belief the cause of much hardship (see 1 Corinthians 4,9-13, 15,30-32, 2 Corinthians 11,16-33), but in addition he had to fear that in the end he would turn out to be a false witness about God (1 Corinthians 15,15).


Patrick, to Patrick January 4, 2011 at 7:44 am

You’re misreading 1 Corinthians 15,15. Enormously so. And you’re misunderstanding statements against interest.

A statement against interest is where I claim something that I have reason to cover up or hide. The theory is that statements of this nature are more reliable than statements that I would have reason to invent.

To claim that any of Paul’s statements were against his interest, you’d have to establish that the trials Paul suffered when he converted actually happened. And then you’d have to establish that the benefits of founding an entire religion weren’t compensatory. I think you’ll have trouble with both of these. At the very least, you have to overcome the problem that it’s Paul’s testimony that establishes the hardships that are being used to lend credence to Paul’s testimony. That is probably insurmountable without new historical discoveries.

If you wanted to support your point about 1 Corinthians 15,15, you should have cited Job, since it actually says something in the realm of what you wanted from 1 Cor. 15,15. But even then it wouldn’t work, because if Jesus was raised from the dead and Paul said he wasn’t, that would also make him a false witness about God, and, given the overall Christian story, would also have the effect (adverse to Paul’s interests) of eternal damnation. So by similar logic to yours, if Paul thought there was even the slightest possibility that the resurrection could have happened, it would be against Paul’s interest to give voice to doubts.


Patrick, to Patrick January 4, 2011 at 10:38 am

It was certainly in Paul’s interest not to suffer pointlessly and in addition to this risk eternal damnation due to acting as a false witness about God. My point is that if the Resurrection had not happened that’s just what Paul might have had to expect. So he had to be extremely sure that his testimony about the Resurrection was valid. As a matter of fact, he not only believed in the Resurrection, but according to 1 Corinthians 9,1 and 15,8 had a first hand experience of the risen Jesus.

Contrary to what you write, Paul indeed gave voice to doubt concerning the Resurrection, albeit hypothetically. This can be seen from 1 Corinthians 15,12-19. Furthermore, in the New Testament accounts about the Resurrection we find more often an attitude of scepticism than one of uncritical assent (Matthew 28,17, Luke 24,9-11, John 20,24-29). Interestingly, they even contain one of the alternative explanations for these events, namely that the disciples stole Jesus’ body (Matthew 28,11-15).

In my opinion Paul’s accounts of his own hardships are reliable. He was writing to people who knew him and the circumstances he lived in. It seems to me very unlikely that what he wrote about his circumstances would not correspond to reality.

I don’t see what the benefits of founding an entire religion are. The only thing I can think of would be winning fame after one’s death. But I doubt that Paul would have valued this higher than his eternal fate. Moreover, according to Philippians 3,3-10, before his conversion Paul was a well-respected member of the Jewish community, so he didn’t have to become a Christian to win fame.


Patrick (Christian) January 4, 2011 at 2:53 pm

According to 1 Corinthians 9,3-18 and 1 Thessalonians 2,9 the apostle Paul was not looking for financial advantage. Therefore, such a motive for his activities can be ruled out.


Patrick (Christian) January 4, 2011 at 2:59 pm

According to Bayes’ theorem, the probability of an event grows exponentially with the number of independent witnesses. Richard Carrier expresses this as follows (http://commonsenseatheism.com/?p=9593):

“And the reason having independent witnesses is important is that their probabilities multiply, hence if the probability of error for a witness is x, then having two independent witnesses gives a P(error) = x^2; three gives x^3; etc. Geometric progression. Thus multiple independent witnesses produces massive increases in reliability very quickly. Whether that’s enough will depend on their independent reliabilities, and the improbability of the thing being claimed.”

In a paper entitled “A Bayesian Analysis of the Cumulative Effects of Independent Eyewitness Testimony for the Resurrection of Jesus Christ” (http://www.johndepoe.com/Resurrection.pdf) John M. DePoe expresses a similar point of view, as can be seen from the following quote:

“The effects of multiple, independent testimony on the posterior probability of an event are striking. No matter how much more probable it is that an event does not occur than that it does, given a sufficient number of moderately reliable independent witnesses testifying that the event occurred, the posterior probability of the event will go up exponentially as n increases and will, in the limit, become arbitrarily close to certainty.”

DePoe applies this insight to the Resurrection and consequently to a supernatural event, and I don’t see why it shouldn’t also apply to such events.
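Both quoted claims can be checked numerically. The sketch below is a hypothetical illustration only (the prior, the 70% reliability figure, and the witness counts are invented, not taken from Carrier or DePoe): in odds form, each independent witness multiplies the odds by the same likelihood ratio, so the posterior climbs toward 1 as n grows.

```python
def posterior_given_witnesses(prior, reliability, n):
    """Posterior probability of an event attested by n independent
    witnesses, each testifying truly with probability `reliability`."""
    # Each independent witness contributes one likelihood-ratio factor
    # r / (1 - r); n of them give (r / (1 - r)) ** n -- the geometric
    # progression Carrier describes above in terms of P(error) = x**n.
    likelihood_ratio = (reliability / (1 - reliability)) ** n
    odds = prior / (1 - prior) * likelihood_ratio
    return odds / (1 + odds)

# Even a one-in-a-million prior is overwhelmed by enough moderately
# reliable (70%) independent witnesses:
for n in (1, 10, 30):
    print(n, posterior_given_witnesses(1e-6, 0.7, n))
```

Everything turns on the independence assumption: witnesses whose testimony shares a common source do not each contribute a fresh likelihood-ratio factor.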


Patrick, the other one January 4, 2011 at 3:19 pm

There are so many problems with this that I don’t know where to start.

Let’s just go with two.

1. If I accept DePoe’s theory I’d have to become a follower of Indian mysticism, because we have access to the testimony of WAY more witnesses to alleged miracles of Indian mystics than we have for Jesus, who has a few hundred if you don’t understand math or history, or zero if you are biblically literate enough to know that the gospels weren’t actually written by the apostles, and that Paul never claimed to have met Jesus while he was alive.

2. Many alleged miracles come from incompatible religious traditions. This provides us with information that can allow us to calibrate how reliable we believe testimony of the miraculous to be. It also creates a ceiling: take the maximum number of miraculous events that can be true at once, and divide by the total. Whatever we set the reliability of witnesses to, it cannot allow us to verify a greater percentage of miracles than that number.
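The ceiling in point 2 is simple arithmetic; here is a toy example with invented numbers (nothing below comes from the comment itself):

```python
# Hypothetical: 1000 miracle claims attested across mutually
# incompatible traditions, of which at most 50 could be true at once.
total_claims = 1000
max_simultaneously_true = 50

# No setting of witness reliability can let us verify a larger
# fraction of the claims than this ceiling:
ceiling = max_simultaneously_true / total_claims
print(ceiling)  # 0.05, i.e. at most 5% of the claims
```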


einniv January 4, 2011 at 4:08 pm

Yair,

It took me a minute to understand what was being said re: “Anyone that says that under atheism the probability of fine tuning is 100%, for example, has stepped outside of the scientific definition of “fine tuning” and demonstrated a profound lack of understanding.”

I would suggest you might be the one misunderstanding here. First, I would note that it appears Richard is saying that was not just his statement but Collins’ also. Fine tuning in the scientific sense means that the constants and laws of physics are within a (some argue narrow) range that would allow for long star life and thus higher atomic number elements, which are required for life as we know it. If there is no god or gods directing the processes of the universe (or creating man/observers ex nihilo) then, when examined by observers, the universe must be found to have the properly fine tuned properties (this is essentially the weak anthropic principle). Finding anything else strongly suggests there is something else causing the observers to be here. For example, if we found that the constants and laws of physics showed that long star life and therefore the higher elements were not possible, it would strongly suggest that stars and the higher elements did not happen by natural processes. If we assume there is no god then we better find fine tuned constants or else we have discovered a contradiction. By contrast, if we assume there is a god then we don’t need to find fine tuned constants since god can just make observers happen anyway.

Hope this helps.


Eric January 4, 2011 at 5:34 pm

I think both Patricks need to be a bit more distinct with their names… Suggestion: Patrick C (Christian) and Patrick A (Atheist)

Patrick C-
In a paper entitled “A Bayesian Analysis of the Cumulative Effects of Independent Eyewitness Testimony for the Resurrection of Jesus Christ” (http://www.johndepoe.com/Resurrection.pdf) John M. DePoe expresses a similar point of view…

I actually responded to this in our lengthy discussion from “Naturalism of the Gaps.”
I assume, since you are posting John DePoe’s article, that you have a response to this….
Personally I have rethought this a bit and have made the purpose of this argument to show the problems with SPECIFIC supernatural explanations. You may possibly be able to tell that a resurrection happened, but you may be unsure if the explanation is actually of Christian descent. Although I still have issues “confirming the supernatural,” I now believe it is possible to rationally believe in the supernatural, due to epistemological concerns with “absolute certainty.” However, because natural laws are descriptions of reality (meaning reality just about always conforms to these descriptions), people can be easily fooled into accepting a miracle when none has actually happened, and because there is no a priori reason to accept that a specific miracle will happen at a given time, the prior probability of a specific miracle actually happening is INCREDIBLY LOW. And, as Carrier said, “You could take a 1% chance and make it 10% and you have multiplied the probability 10 times which is a huge increase in probability but it is still 90% chance you’re wrong.”


Yair January 5, 2011 at 12:36 am

einnev:

Fine tuning in the scientific sense means that the constants and laws of physics are within a (some argue narrow) range that would allow for long star life and thus higher atomic number elements which are required for life as we know it.

Fine tuning is precisely the narrowness of the range. The fact that the universe must (given naturalism and our own existence) have laws of nature that can produce and sustain life is the weak anthropic principle. This is not the physical finding of fine-tuning. If it were, then it wouldn’t be much of a finding, now would it? The finding is, specifically, that the constants in the laws of nature are such that even small deviations from the real values would imply a universe where no life could exist.

Carrier (and you and Collins apparently) is not taking care to separate the anthropic from the fine-tuning argument. Under atheism, “we better find [anthropic] constants”, not “fine tuned” constants. The laws of nature have to be such that they will produce life naturally, but why must the constants in them be fine-tuned?

Carrier’s discussion therefore seems to completely misunderstand the nature of the question. The fine-tuning argument is an evidential argument, and Carrier misunderstands the nature of the evidence at hand. I therefore suspect the following will be true of his treatment of the fine-tuning argument:

* Carrier will fail to defend the fine-tuning phenomena under naturalism. He won’t produce an explanation for why fine-tuning would be observed in a naturalistic universe. I think arguments for this may be possible, for example it sounds at least initially plausible to me that the scientific requirement of constructing the simplest model to describe a complex phenomena might generically lead to sensitivity in the model’s constants. I can’t make this into a rigorous argument, however, and I would have been interested in seeing it (or some other good argument) laid out. I fear Carrier does not inspire confidence that he would be able to supply such an argument, as he seems to misunderstand what fine-tuning is.

* Carrier will fail to attack theism for not supplying a satisfactory explanation for fine-tuning, which topples the validity of the entire “fine-tuning” argument. As far as I know, theism cannot provide any real explanation beyond “this is the aesthetic choice of god”, which is an ad hoc assumption. The immediate theistic explanation that comes to mind, that fine-tuning is put there by god so he could create life, contradicts god’s omnipotence (he could have created other life-supporting laws of nature, ones without fine-tuning). The second immediate explanation, that fine-tuning is there to signify His existence, is assuming the consequent (why would it signify God’s existence?).

* Carrier will fail to adequately defend the atheistic weak anthropic principle, misunderstanding the question the theist raises. Carrier will insist that we know that humans exist and this should be taken into account, thus producing WAP, instead of addressing whether atheism by itself implies life. He will insist that under atheism our existence is background knowledge, and refuse to consider the question that is really at the heart of these lines of arguments – why is reality the way it is? Treating our existence as background knowledge doesn’t allow you to consider this question, which is the question the theist is trying to get at. You can say that Carrier is not applying the principle of charity in this regard.

* Carrier will fail to acknowledge the problem of past knowledge in Bayesianism in general and here in particular. This is hinted at by his insistence on the importance of putting the datum in the Background Knowledge compartment rather than as Evidence; under a reasonable epistemic approach, the order in which we acquire evidence should not matter.

If my predictions are wrong, I’ll be glad to know of it. If anyone has read Carrier more extensively on this and I am mistaken, do tell! As it is, the short thoughts provided in the above interview don’t exactly inspire confidence.


Eric January 5, 2011 at 2:05 am

Yair -
The finding is, specifically, that the constants in the laws of nature are such that even small deviations from the real values would imply a universe where no life.

The laws of nature have to be such that they will produce life naturally, but why must the constants in them be fine-tuned?

He won’t produce an explanation for why fine-tuning would be observed in a naturalistic universe.

I guess I’m failing to understand. Given a naturalistic interpretation of the universe, we understand that specific chemical/physical reactions produce specific results. If we were to vary the chemicals in the reaction, or the properties of these chemicals, by the slightest bit we would achieve different results. In a naturalistic universe, life is the result of various chemical/physical reactions. So therefore, we would expect that if we varied the properties of chemicals (ex: by adjusting the universal constants) by even the slightest bit, we would expect a completely different reaction than life. So under naturalism, we would therefore expect that life would have an exceedingly small range of values for these universal constants.


snafu January 5, 2011 at 3:19 am

There seems to be a bit of debate about this point:

RICHARD: Again, that’s completely useless information. [laughs] We could go from 1% to 10%, that’s 10 times more probable. Yeah, that makes it more probable. It’s still 90% chance it’s false.

My take on the McGrew resurrection article is that it’s arguing for the resurrection specifically (rather than theism in general). RC is correct in saying that they only look at the increase in probability that the evidence gives: see the ubiquitous use of ‘Odds notation’ and Bayes factors in the source.

However, the suggestion is that the evidence is so good that it could overcome literally any prior improbability:

[p40] Sheer multiplication through gives a Bayes factor of 10^44, a weight of evidence that would be sufficient to overcome a prior probability (or rather improbability) of 10^-40 for R and leave us with a posterior probability in excess of .9999.
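The arithmetic in that quotation does follow on its own terms: in odds form, posterior odds = Bayes factor × prior odds. (The numbers are the McGrews’ as quoted; the contested step is how the 10^44 factor was obtained, not this multiplication.)

```python
# Reproduce the quoted calculation: a Bayes factor of 10^44 against
# a stipulated prior improbability of 10^-40 for the resurrection R.
bayes_factor = 1e44
prior_odds = 1e-40                           # tiny prior odds for R
posterior_odds = bayes_factor * prior_odds   # = 1e4
posterior = posterior_odds / (1 + posterior_odds)
print(posterior)                             # just over 0.9999
```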

The elephant here is one they recognise: independence of random variables. They take their Bayes factor of 10^squillions, then pile on arguments that, in fact, this underestimates the true factor, because (I generalise pp 41,42) the variables are anticorrelated.

So: we have 11 disciples. Disciple 1 was martyred, which makes disciple 2 less likely to report a resurrection appearance. And so on down the chain; the conclusion is that the assumption of independence underestimates the Bayes factor, so the resurrection is even more likely.

This, I claim, has now left the world of rational analysis. The old line of “The disciples died for their faith you know!” has been dealt with many times before, and all they’re doing is attempting a sliver of respectability by attaching it to a dodgy probability calculation that assumes independence, then using apologetics to argue that it makes the resurrection more likely after all.

There are lots of other things along the same lines, eg p41

The women’s testimony is essentially independent of that of the thirteen male witnesses.

Well, it is if you ignore the fact that both testimonies were channelled through the gospellers. And they’re completely trustworthy in regard to secular reported history aren’t they? Yeah, that genealogy of Jesus back to Adam is completely reliable. And there are no difficulties with the reliability of the infancy narratives, are there? etc.

I think I’ve said enough: my main point is that I don’t think Carrier’s “1% to 10%” line reports the paper well (hey, that’s allowed in an interview). But that doesn’t mean it’s a good paper.

By the way, Carrier is spot on about Ikeda, Jeffreys and conditioning on the fact that sentient observers exist inside the background evidence. But that’s a separate debate.


Yair January 5, 2011 at 4:47 am

Eric,

In a naturalistic universe, life is the result of various chemical/physical reactions. So therefore, we would expect that if we varied the properties of chemicals (ex: by adjusting the universal constants) by even the slightest bit, we would expect a completely different reaction than life.

Well, that is an argument which I suspect Carrier won’t put forward, but is precisely the kind of argument that needs to be put forward.

However – why would slightly changing the constants in the basic laws of nature lead to such drastically different chemistry? Remember that these laws are far below the chemical level, they’re at the quark level, or else revolve around gravity, which is again at a different level. And in “any” naturalistic universe, they could be even lower and further from life. Even if they were at the level of chemistry, why must the change be qualitative? Isn’t it more plausible that it will be a small quantitative change? It seems at least plausible that the chemistry won’t qualitatively change unless we change fundamental constants by a lot. Indeed, this was the physicist’s intuition, which is why fine-tuning is an interesting finding to begin with – it marks a departure from the physicists’ prior assumptions. The natural thing to expect is that when you change the gravitational constant a little bit, say, it will lead to slightly larger solar systems, slightly change the average size of stars, and so on – not that it will lead to a universe with no stars!

Now, I suspect you are basically right – life is a highly complex phenomena that isn’t “designed” [by evolution] to be robust to changes in the laws of nature, so we should expect it to be sensitive to changes in the laws of nature. However, as a matter of historical fact this was not the scientist’s expectations, and as a matter of apologetics this needs to be proved systematically (perhaps using complexity theory) if it is to be considered seriously. Vague intuitions are not enough. Perhaps we should expect high-level emergent phenomena to be in general resistant to low-level changes? I really don’t know, and this needs to be put in much more accurate terms for it to become a substantial argument. Again, my suspicion is that it depends on circumstances but in general the requirement for a simple descriptive low-level model of complex phenomena is key to the conclusion that the high-level description will be sensitive to low-level changes.


Yair January 5, 2011 at 5:16 am

snafu,

Carrier is spot on about Ikeda, Jeffreys and conditioning on the fact that sentient observers exist inside the background evidence. But that’s a separate debate.

Why is he spot on about “conditioning the fact that sentient observers exist inside the background evidence”? To put it another way – doesn’t it seem odd that it even matters whether we put this datum in the past knowledge or current evidence piles? Shouldn’t the rational conclusion not depend on when we picked up this piece of evidence? I think it appears to matter only because the two approaches are answering different questions.

To put things in a Bayesian framework:

By putting things in the background knowledge pile, Carrier is only considering the theories (T) that include humans (TH). He (mistakenly, but that’s another matter) notices that within this probability space 100% of the atheistic theories and less than 100% of the theistic theories include fine tuning, and therefore concludes that fine-tuning is more probable under atheism (again mistakenly, since he ignores the issue of the probability measure).

The theist, on the other hand, is considering the wider probability space T, which includes theories including humans (TH) but also ones not including humans (its complement, T~H). The theist notices (or rather, wrongly reasons) that theist theories within T frequently include fine-tuning whereas atheist ones don’t, so concludes that fine-tuning is more probable under theism.

The irony is that neither are right. Under both theism and atheism we are in a state of ignorance about how likely is fine-tuning (or life, or humans), and ignorance cannot be represented as a probability measure and therefore a Bayesian analysis is inappropriate. But I think this is a good example of the difficulties and problems of using Bayesian analysis, so at least it’s useful for something!


Yair January 5, 2011 at 5:24 am

(I’d add to the above that one should notice the theist is still wrong under his own Bayesian analysis, since he needs to further restrict the probability space with the evidence E that humans do exist in our universe, so he will be forced to conclude that our universe (one including both fine-tuning and humans) is more probable under atheism. So, the theist is guilty of the fallacy of suppressed evidence. However, this does not depend on whether he puts the datum in the background information or not, as it shouldn’t.)


Eric January 5, 2011 at 5:42 am

Yair –
However – why would slightly changing the constants in the basic laws of nature lead to such drastically different chemistry? Remember that these laws are far below the chemical level, they’re at the quark level, or else revolve around gravity, which is again at a different level. And in “any” naturalistic universe, they could be even lower and further from life. Even if they were at the level of chemistry, why must the change be qualitative? Isn’t it more plausible that it will be a small quantitative change? It seems at least plausible that the chemistry won’t qualitatively change unless we change fundamental constants by a lot. Indeed, this was the physicist’s intuition, which is why fine-tuning is an interesting finding to begin with – it marks a departure from the physicists’ prior assumptions. The natural thing to expect is that when you change the gravitational constant a little bit, say, it will lead to slightly larger solar systems, slightly change the average size of stars, and so on – not that it will lead to a universe with no stars!
Now, I suspect you are basically right – life is a highly complex phenomena that isn’t “designed” [by evolution] to be robust to changes in the laws of nature, so we should expect it to be sensitive to changes in the laws of nature. However, as a matter of historical fact this was not the scientist’s expectations, and as a matter of apologetics this needs to be proved systematically (perhaps using complexity theory) if it is to be considered seriously. Vague intuitions are not enough. Perhaps we should expect high-level emergent phenomena to be in general resistant to low-level changes? I really don’t know, and this needs to be put in much more accurate terms for it to become a substantial argument. Again, my suspicion is that it depends on circumstances but in general the requirement for a simple descriptive low-level model of complex phenomena is key to the conclusion that the high-level description will be sensitive to low-level changes.

If you take into account that these small changes at these basic levels will affect EVERY element and even the most simple of properties of these elements, and that life, for example, needs many of these properties to be the same, then no, you would expect a near exponential rate of change as complexity goes to infinity. I would see no reason to expect high level phenomena to be resistant to low level changes. This would seem to be the exception to the rule, where the changes must “balance out.”
We also need to be careful of our use of the term “slight variations.” They are only slight based on the level of measurement we use, which is almost always standardized based on what was useful at the time the standard of measurement was discovered. For example, the Farad is, by today’s standards, extremely large and the vast majority of capacitors in use range in the micro to nano Farads. Making the change from a 1 nF to 1 microF capacitor will make all the difference in a circuit. When we talk about changes in the physical constants, maybe changes that are measured as very small are actually very big. When we are talking about life ceasing to exist or stars ceasing to exist as a result of these changes, we are changing our system of measurement from, say meters^3 kg^-1 s^-1, to at best, the amount of times a phenomenon appears. So it may be useful for the person thinking these changes are relatively big to justify why these changes are BIG in comparison, when no system of comparable measurement exists. It is possible the early physicist didn’t realize the changes were not THAT small, only because the physicist chose a system of measurement that better applied to larger phenomena.
So I fail once again to see why we would expect, even intuitively, that fine tuning would be unlikely under naturalism. It is at least as much a burden of the theist to show that fine tuning is improbable under naturalism as it is for the naturalist to say it is probable. It seems as though the more we understand about the universe, the more the probability of naturalism being true given an arbitrary fact increases.


Patrick (C) January 5, 2011 at 7:08 am

Patrick,

the fact that many alleged miracle claims come from incompatible religious traditions is no problem from a Christian point of view. According to the Bible miraculous claims or even genuine miracles do not prove the truth of the religious claims the (alleged) miracle workers support (see e.g. Exodus 7,10-13 or 2 Thessalonians 2,9). So for a Christian it is possible to accept supernatural claims of other religions without having to accept the doctrines of these religions. So, even if there are more testimonies of miraculous events in another religious tradition, this doesn’t mean that you have to adopt the other religious tradition.

With respect to the question whether or not there are reliable miracle accounts in the Bible we can look at the Pauline epistles that are generally regarded as genuine. In passages such as Romans 15,18-19, 1 Corinthians 12,9-10, 2 Corinthians 12,12 or Galatians 3,5 we find first hand indications of the performance of miracles in the Christian churches Paul was addressing.


Patrick (C) January 5, 2011 at 7:42 am

In my view there is a contradiction between the high regard for multiple, independent testimony, as expressed by Carrier and DePoe, and the disregard for testimonial evidence, as expressed by many atheists (www.skepdic.com/testimon.html).


Patrick who is not Patrick January 5, 2011 at 8:18 am

I’m going to bow out of this conversation now that I know that you believe that demons and supernatural beasties are constantly performing miracles in the present day.

Sorry.

(Independent in this context probably doesn’t mean what you think it does, but whatever.)


snafu January 5, 2011 at 9:08 am

Ach…drawn into the other debate…

Why is he spot on about “conditioning the fact that sentient observers exist inside the background evidence”? To put it another way – doesn’t it seem odd that it even matters whether we put this datum in the past knowledge or current evidence piles? Shouldn’t the rational conclusion not depend on when we picked up this piece of evidence?

In what I write below, I’ll try and make a very clear distinction between evidence (E) and background knowledge (K). The letters refer to the standard notation used when explaining Bayes’ Rule: P(H|EK) = P(E|HK)P(H|K)/P(E|K).

I agree that the order in which you gather evidence (E) doesn’t matter. If my prior probability that Jones did the murder is 10%, then it doesn’t matter which order I consider the DNA evidence (E1) and the witness testimony (E2). Applying Bayes twice will get me to (say) 60% in either case.
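That order-invariance is easy to verify with made-up numbers (the likelihood ratios below are invented purely for illustration; any values behave the same way):

```python
def bayes_update(prior, likelihood_ratio):
    """A single Bayesian update in odds form."""
    odds = prior / (1 - prior) * likelihood_ratio
    return odds / (1 + odds)

P_GUILT = 0.10                  # prior that Jones did the murder
LR_DNA, LR_WITNESS = 3.0, 4.5   # hypothetical strengths of E1 and E2

dna_first = bayes_update(bayes_update(P_GUILT, LR_DNA), LR_WITNESS)
witness_first = bayes_update(bayes_update(P_GUILT, LR_WITNESS), LR_DNA)
print(dna_first, witness_first)  # the same posterior either way
```

With these particular numbers both orders land on 60%, matching the example in the text; the point is only that the two orderings always agree.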

The issue here is whether we place L = “sentient life exists in the universe” as an E or a K. Carrier’s/Sober’s/etc point is that you can’t put it anywhere but inside K (background knowledge). The mere fact that you’re a rational being sitting here contemplating the universe means that L must be true. It’s the paradigmatical example of background knowledge K, because we literally cannot observe it not being true.

It’s a fairly simple matter to crunch the algebra and show that if P(E|K)=1 (E is observed for sure), then E cannot support hypothesis H. It’s a given, and has to be part of K.
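Spelled out, using the notation above and assuming a non-degenerate prior, 0 < P(H|K) < 1:

```latex
P(H \mid EK) = \frac{P(E \mid HK)\,P(H \mid K)}{P(E \mid K)},
\qquad
P(E \mid K) = P(E \mid HK)\,P(H \mid K) + P(E \mid \neg H\,K)\,P(\neg H \mid K).
```

If P(E|K) = 1, the right-hand side is a weighted average of two quantities each at most 1, so both likelihoods must equal 1; substituting back gives P(H|EK) = P(H|K). Observing E cannot move the probability of H.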

(Yes, the problem of prior probabilities is raising its head: we have background knowledge that we have to condition on, but Bayes can’t bootstrap itself into existence to provide the answer as to how to set them. We have to find another way. Digression over.)

Jeffreys goes even further and argues that if you fail to condition on everything you know to be true (ie you selectively miss bits out of K that you know to be true really), then you’re being inconsistent with your inferences somewhere.

Note also that I’ve phrased L=life exists very carefully, and I didn’t use the words “fine tuned”. In my mind the latter is a probability claim that says you know something about the probability distribution that you’re picking constants out from. But, I’m afraid we don’t…and I’m pretty squeamish about positively assigning probabilities based on ignorance. Jeffreys does assert an equivalence between the two (search for ‘a form of F’ in the paper). In my opinion it’s probably the point that’s most open to attack, but we’re talking philosophy/epistemology here. Not probability/statistics, where Jeffreys knows his stuff backwards.


Yair January 5, 2011 at 10:46 am

Eric,

It is at least as much a burden of the theist to show that fine tuning is improbable under naturalism as it is for the naturalist to say it is probable.

On this, at least, we’re in full agreement. I don’t find the “burden of proof” idea too useful, and think both sides need to argue for their position as much as possible regardless of it.

We also need to be careful of our use of the term “slight variations.” They are only slight based on the level of measurement we use, which is almost always standardized based on what was useful at the time the standard of measurement was discovered.

While there are fine-tuning examples of this sort, most are focused on examples involving dimensionless parameters, whose value does not depend on the choice of units.

If you take into account these small changes at these basic levels will affect EVERY element and even the most simple of properties of these elements, and that life, for example, needs many of these properties to be the same, then no, you would expect a near exponential rate of change, as complexity goes to infinity. I would see no reason to expect high level phenomena to be resistant to low level changes. This would seem to be the exception to the rule, where the changes must “balance out.”

Well, this makes sense if you consider life precisely as we know it. However, the arguments are generally that no life could exist, period. The idea is, basically, that no complex chemistry and/or appreciable time for evolution could occur. I don’t see why small changes to the underlying constants should necessitate the large qualitative change of having no complex chemistry whatsoever. I would think it will just create, well, different universes, perhaps only slightly different from our own, but even if not, then not necessarily less complex.

These are, at least, my intuitions. I’m not arguing they are correct; all I’m saying is that it’s far from obvious to me, and I would have liked to see a thorough explanation why fine-tuning is indeed to be expected under atheism (or, for that matter, theism). My concern was merely that Carrier did not seem to even appreciate the problem.


einniv January 5, 2011 at 10:49 am

Yair,

You make a good point about the semantics of “fine tuned” vs “anthropic” but I don’t think it matters as much as you think. The reason I said “some argue narrow” in my comment was that the narrowness itself is disputed. Victor Stenger, for one, has done some simulations that show that long lived stars and higher elements occur with a much wider range of constants than is sometimes suggested. This is not even considering the possibility of other laws of nature but just the ones we know. Does that still qualify as “fine” tuned or is it “somewhat coarsely” tuned now?

Still, if you want to be a bit more systematic about it, I would suggest reading the Ikeda & Jefferys page linked above that Richard referred to. I think you will find they address all the relevant bits you brought up, including the question of why the existence of life must be in the background information. Rather than assuming specific probabilities for a life-friendly, life-inhabited universe under naturalism and under non-naturalism, they look at what happens when one probability is much larger than the other, or when the two are roughly the same. As they show, it is the ratio of these probabilities that matters, not their absolute values.
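That last point is easy to check numerically. Here is a minimal sketch (my own toy numbers, nothing taken from the I&J page itself) showing that, with equal priors, the posterior probability of naturalism given F depends only on the ratio of the likelihoods, not on their absolute sizes:

```python
def posterior_N(p_F_given_N, p_F_given_notN, prior_N=0.5):
    """P(N|F) by Bayes' theorem over the two-hypothesis partition {N, ~N}."""
    prior_notN = 1.0 - prior_N
    num = p_F_given_N * prior_N
    return num / (num + p_F_given_notN * prior_notN)

# Very different absolute likelihoods, but the same 2:1 ratio:
a = posterior_N(0.8, 0.4)      # P(F|N)=0.8,   P(F|~N)=0.4
b = posterior_N(0.002, 0.001)  # P(F|N)=0.002, P(F|~N)=0.001
print(a, b)  # both come out to about 0.667
```

The absolute values cancel out of the calculation; only the 2:1 ratio survives into the posterior.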

  (Quote)

Yair January 5, 2011 at 10:55 am

snafu,

Ach…drawn into the other debate…

Sorry… :)

The issue here is whether we place L = “sentient life exists in the universe” as an E or a K.

But why is that an issue? What difference does it make?

Carrier’s/Sober’s/etc. point is that you can’t put it anywhere but inside K (background knowledge). The mere fact that you’re a rational being sitting here contemplating the universe means that L must be true. It’s the paradigmatic example of background knowledge K, because we literally cannot observe it not being true.

I don’t see why that means I have to include it in the prior probabilities instead of the evidence. You have a piece of reasoning showing you that L must be true; fine. Why does it matter whether I constrain the probability space before applying Bayes’ theorem to other propositions (E) or after? The structure of the probability space, and the part of it I end up considering, doesn’t change as a result of this choice, so why does it matter?
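To illustrate with a toy example (my own made-up joint distribution, purely hypothetical): whether L is folded into the background first or conditioned on together with E, the final posterior on H is the same.

```python
from itertools import product

# Hypothetical joint distribution over (H, E, L); any normalized table works.
outcomes = list(product([True, False], repeat=3))
probs = [0.10, 0.05, 0.15, 0.10, 0.20, 0.05, 0.25, 0.10]
joint = dict(zip(outcomes, probs))

def cond(num_pred, den_pred):
    """P(num-event | den-event), read off the joint table."""
    num = sum(p for k, p in joint.items() if num_pred(k) and den_pred(k))
    den = sum(p for k, p in joint.items() if den_pred(k))
    return num / den

# Condition on E and L together:
direct = cond(lambda k: k[0], lambda k: k[1] and k[2])

# Condition on L first (treating it as background knowledge K), then update on E:
p_H_given_L  = cond(lambda k: k[0], lambda k: k[2])
p_E_given_HL = cond(lambda k: k[1], lambda k: k[0] and k[2])
p_E_given_L  = cond(lambda k: k[1], lambda k: k[2])
sequential = p_E_given_HL * p_H_given_L / p_E_given_L

print(direct, sequential)  # equal, up to float rounding
```

Both routes land on the same number, which is the formal content of the claim that the placement of L is a bookkeeping choice.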

The whole sharp demarcation between “background knowledge” and “evidence” seems fishy to me. We need to consider all evidence in light of our priors; it doesn’t matter whether some of that evidence is later put into the background knowledge, or put in as evidence on top of a leaner background knowledge, or whatever.

  (Quote)

Reginald Selkirk January 5, 2011 at 1:46 pm

Do you have the capability to correct the manuscripts?

Origin has these commentaries on various gospels.

Should be Origen.

  (Quote)

Reginald Selkirk January 5, 2011 at 2:15 pm

Patrick, to Patrick: I don’t see what the benefits of founding an entire religion are…

I’ll remember that the next time you argue against Mormonism, or Scientology, or any of the neo-Christian cults (Jim Jones, David Koresh, Mary Baker Eddy, Charles Taze Russell, Ellen White,…), or any of the Buddhist – heck, any of the non-Christian sects, or the UFO cults (Heaven’s Gate, etc.). Since you don’t see any benefit, it is a wonder that so many other people have actually founded religions which you consider to be false.

Why does this argument apply to Paul, but not to any other religious founders?

  (Quote)

Reginald Selkirk January 5, 2011 at 2:20 pm

Eric: He won’t produce an explanation for why fine-tuning would be observed in a naturalistic universe.

I guess I’m failing to understand. … So under naturalism, we would therefore expect that life would have an exceedingly small range of values for these universal constants.

Does that help? Assuming naturalism, what is the probability that we would observe one of those other lifeless universes?

  (Quote)

Eric January 5, 2011 at 9:59 pm

Yair –
While there are fine-tuning examples of this sort, most are focused on examples involving dimensionless parameters, whose value does not depend on the choice of units.

When talking about the six ratios, I would like to hear an argument that the effects of these changes are “large” compared to the “small” changes in the ratios themselves.
We also need to take note of an old physical principle. Take a particle in static or dynamic equilibrium. All the forces acting on it must “cancel out” perfectly for the object to stay in equilibrium. If one force were to change even slightly, all else being equal, the object would accelerate until it reached a speed so fast that relativity would become more than negligible and limit further acceleration.

Yair –
Well, this makes sense if you consider life precisely as we know it. However, the arguments are generally that no life could exist, period.

It depends on whether or not you find life, which is the result of INCREDIBLY complex chemistry, to be resilient to changes that affect its entire chemistry. If you apply a voltage just slightly too high to an incredibly complex circuit, it could easily fry the entire circuit. I know this analogy is imperfect, but it shows that neither the “resilient life” principle nor the “fragile life” principle necessarily has a higher prior probability before the discovery that life is fragile. So how can either possibility be less expected given a naturalistic universe (a universe containing only matter, energy, space, time, and their properties)? It may be a challenge for theism (given that a creator could have created the universe in many ways, yadda yadda…), but neither seems to be a challenge to naturalism (of course, this may be the root of the whole problem with the Bayesian argument for theism over naturalism, given the observation of fine-tuning).

Yair –
The idea is, basically, that no complex chemistry and/or appreciable time for evolution could occur. I don’t see why small changes to the underlying constants should necessitate the large qualitative change of having no complex chemistry whatsoever. I would think it will just create, well, different universes, perhaps only slightly different from our own’s but even if not then not necessarily less complex.

Once again, these are small changes which affect EVERY particle in the universe and their properties. So I’m not sure how it would be unexpected to see sweeping changes like the loss of complex chemistry. Saying the effect is qualitatively large compared to the change is not convincing, and it goes back to the point I made earlier: there is no common standard for the size of a cause versus the size of an effect. Is one micrometer small compared to 1,000 kilograms? They are measured on two different scales. And if one of them is dimensionless, justifying the “small” change, “large” effect idea is more challenging still.

It seems as though fine tuning versus coarse tuning is, at WORST, 50/50 under naturalism, and exceedingly improbable under theism. I would like to hear Carrier’s explanation of why it is probable given naturalism.

  (Quote)

Eric January 5, 2011 at 10:00 pm

@Reginald Selkirk
I’m confused as to what you just said… lol

  (Quote)

Yair January 5, 2011 at 11:54 pm

einniv,

Yair, You make a good point about the semantics of “fine tuned” vs “anthropic” but I don’t think it matters as much as you think.

Perhaps. It seems to be indicative of failing to understand the issue.

The reason I said “some argue narrow” in my comment was that the narrowness itself is disputed.

Certainly. But for the sake of the argument, I suggest proceeding as if the finding is valid.

Still if you want to be a bit more systematic about it then I would suggest reading the Ikeda & Jefferys page linked above that Richard referred to.

That was certainly an interesting read, thanks for the recommendation. However, even they fail to treat fine-tuning adequately. They see it as “only a very small fraction of possible universes can be life-friendly”, or a small P(F|N). However, under their own Bayesian epistemology the probabilities are Your subjective beliefs – a small P(F|N) merely says that under Your subjective and unjustified probability space, naturalistic universes are rarely life-friendly. The finding of fine-tuning, however, is evidence. It does not depend on your subjective assessments of probabilities. As the authors keep pointing out, it is important to consider all evidence, and this is how fine-tuning should be considered – as evidence.

In their terms, the theist claims that P(~N|FT&F&L)>P(N|FT&F&L), essentially because P(FT|~N) is large whereas P(FT|N) is small. While this is formally a fallacy, I don’t think it would take much to make it a valid argument. What he is saying is that within the P(F&L) subspace, most naturalistic universes are not fine-tuned whereas most theistic (as in “my brand of theism”) ones are. What is therefore needed is an atheistic argument for why fine-tuning is or isn’t to be expected under naturalism or theism, and in particular in life-friendly and life-containing universes; and conversely, arguments against those positions. I don’t see such arguments.

  (Quote)

Yair January 6, 2011 at 12:06 am

Eric,

It seems as though fine tuning versus coarse tuning seems to be, at WORST, 50/50 under naturalism and exceedingly improbable under theism. I would like to hear Carrier’s explanation of why it is probable given naturalism.

We can keep exchanging intuitions about what’s likely, but I think we’re just highlighting the problem – it’s only intuitions, we don’t have a good sound argument to convince us either way. A more rigorous explanation of why fine-tuning is (or isn’t) probable given naturalism (or theism) is indeed needed.

I would mention that it is the intuition of most physicists (me included) that making small changes in a model’s (dimensionless) constants does not lead to qualitative changes except at critical points (including entire lines of phase transitions and so on). This is borne out by years of fiddling around with various models. Changing the values in the Standard/Cosmological Model often does make qualitative changes, apparently, which is why it is seen as surprising. Perhaps this is just an ethnocentrism (often the changes are not significant by other qualitative measures), or maybe something else is afoot. Regardless, a more rigorous treatment of these problems is needed than mere intuitions.

  (Quote)

Eric January 6, 2011 at 9:22 am

@Yair
I think the biggest problem is that fine tuning seems to pose no challenge to the naturalist model. Nothing in the naturalist model seems to suggest that fine tuning would ever be unexpected or expected. If it is merely the prior intuition of physicists, formed from previous models, that small changes would only affect qualitative properties at critical points, then observed fine tuning under the Standard Model should be a challenge to those assumptions about those previous models, not to naturalism. Other fine tuning arguments do seem to be a challenge to naturalism (but they seem arbitrary, like asking the probability that the winner of a lottery played fairly or cheated); this one seems to pose no challenge at all. Any given finding, be it fine tuning or coarse tuning, seems equally expected given naturalism.
Or how about this: how could NATURALISM possibly lead someone to the prior expectation that life requires coarsely tuned constants? (Think about whether relativity was a challenge to naturalism.)

  (Quote)

Reginald Selkirk January 6, 2011 at 9:29 am

Eric: We also need to take note of an old physical principle. Take a particle in static or dynamic equilibrium. It takes all forces acting on it to “cancel out” perfectly in order for the object to stay in equilibrium. If one force were to change just barely, all else being equal, the object would accelerate til it reaches an incredibly fast speed where relativity would become more than negligible and prevent acceleration.

Are you comparing like forces on an object from different directions to the existence of different forces such as electromagnetism, gravity, strong nuclear force & weak nuclear force? WTF?

  (Quote)

Reginald Selkirk January 6, 2011 at 9:34 am

Eric: I’m confused as to what you just said… lol

Assuming naturalism:
If a universe does not produce life, no one will be around to observe it. Therefore, the prior probability that an observed universe has conditions suitable for the production of life is 1.0.
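That conditional can be illustrated with a toy simulation (my own sketch; the rate 0.001 is arbitrary): no matter how rare life-friendly universes are in the full population, every universe that contains an observer to do the observing is life-friendly.

```python
import random

random.seed(0)

# Hypothetical toy model: each "universe" is life-friendly with small chance p.
p = 0.001
universes = [random.random() < p for _ in range(100_000)]

# Observers only arise in life-friendly universes, so the observed
# sample is just the life-friendly subset.
observed = [u for u in universes if u]

frac_friendly_overall = sum(universes) / len(universes)  # close to p
frac_friendly_observed = sum(observed) / len(observed)   # exactly 1.0
print(frac_friendly_overall, frac_friendly_observed)
```

However small p is made, the second number stays pinned at 1.0, which is the selection effect in miniature.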

  (Quote)

Eric January 6, 2011 at 3:14 pm

Reginald Selkirk -
Are you comparing like forces on an object from different directions to the existence of different forces such as electromagnetism, gravity, strong nuclear force & weak nuclear force? WTF?

It was basically an analogy to establish the basis of the intuition that seemingly small changes can cause seemingly large effects. It also points out how fragile the state of a particle can be, where the smallest change in force COMPLETELY moves the object out of equilibrium. I was suggesting a similar level of fragility when it comes to the phenomenon of life. I am wondering if you have read any of my other posts, because the point of the analogy seemed quite obvious.

  (Quote)

Patrick January 6, 2011 at 4:08 pm

In my opinion it would be very interesting to apply the Bayesian analysis to paranormal events witnessed by several people such as the events surrounding Gottliebin Dittus or the “Rosenheim Poltergeist” (http://en.wikipedia.org/wiki/Rosenheim_Poltergeist). As for the former, an analysis of the events can be read in the chapter “The Events Surrounding Gottliebin Dittus” in the biography of Johann Christoph Blumhardt mentioned in my first post.

  (Quote)

The Atheist Missionary January 7, 2011 at 8:13 am

This is great stuff.

I just found the following comment (apparently written by Tim McGrew) at this blog: http://dangerousidea.blogspot.com/2011/01/richard-carrier-on-bayes-theorem.html

In very simple, non-math-expert terms: We are showing, piece by piece — and we are by no means done yet — that Richard Carrier is completely out of his depth with respect to the mathematics of elementary probability. He garbles the explanation of elementary concepts, and he fumbles the computation of his own chosen examples.

This is not a matter of scholarly disagreement over the interpretation of some bit of evidence or the relative merits of two competing hypotheses; it is not a wrangle over the preferability of two possible translations of a bit of Greek; it is not a clash of worldviews. It is much simpler than that. Carrier has not crossed the pons asinorum of elementary probability.

Now, there is no shame in this as such. Many fine people, including many good historians, have not mastered the basic mathematics of probability. But for someone who poses as an expert and describes as “crappy,” in its use of mathematics, a peer-reviewed article by people who actually do understand what they are doing mathematically, a demonstrable failure to understand extremely basic aspects of probabilistic reasoning must be — how to put this? — an inconvenience.

That’s all.

If Carrier and the McGrews ever go head to head, I would love to see it.

  (Quote)

bram January 7, 2011 at 2:03 pm

I wanted to comment about how Richard Carrier misrepresents the McGrew article about the resurrection and why I think that the argument doesn’t work. But snafu already did it probably better than I could have done.

So thank you snafu.

  (Quote)

Patrick January 7, 2011 at 3:03 pm

Reginald Selkirk,

it seems doubtful to me that founding a religion in itself provides any benefit for the respective religious leaders. In my opinion they either really believe what they promote or else they might expect advantages such as financial gain or fame.

I apply to Paul the same standard as to other religious leaders. As I pointed out in previous posts it’s just that I don’t see any earthly advantages Paul might have expected.

Besides, it seems doubtful to me that Paul regarded himself as the founder of a new religion. From 1 Corinthians 15:1-11 and Galatians 2:1-10 one can draw the conclusion that Paul thought of himself as having received a religious tradition that already existed.

  (Quote)

einniv January 8, 2011 at 5:54 am

Yair,

As you’ve elaborated, I have gotten a better understanding of what you are saying, but I still think you have taken a wrong turn in wanting to use fine tuning as evidence. The I&J article is saying (I think much as you are):

Life-friendly (F) is something we can observe and consider as evidence. We can look at the fundamental constants, run them through the laws of physics, and say: oh look, long-lived stars, etc.

On the other hand, fine tuning is inherently a statement about a probability distribution. That the constants are 1, 2 & 3 doesn’t tell us anything unless evaluated against the entire set of possibilities.

—-
Assuming naturalism, there is a set of possible universes. We don’t know what that set is, but it exists. We can drill down into each one and decide if it is life-friendly. From within each individual universe we can look at the constants and laws and see if life is possible (naturalistically). On the other hand, we cannot drill down into each universe and, from within that universe, decide if it is fine-tuned (since you can’t observe all universes from within a single universe).

Because of this it is simply not possible for fine-tuning to be evidence, since evidence is something you observe. That is why I&J say “the probability that a randomly-selected universe would be “life-friendly” is very small, or in mathematical terms, P(F|N)<<1. Notice that this condition is not a predicate like L, N and F; Rather, it is a statement about the probability distribution P(F|N), considered as it applies to all possible universes. For this reason, it is not possible to express the “fine-tuning” condition in terms of one of the arguments A or B of a probability function P(A|B)”

Because of this, the only proper response to a claim that fine-tuning has been observed (i.e. is evidence) is to say “bullshit, not unless you are god sitting outside of all possible universes” (exactly what Carrier said in the interview). You can’t simply grant this claim, as it is impossible for the distribution of all universes to have been observed from within our single universe. Sure, if we grant them the ability to do the impossible we might get some different result in our analysis, but so what?

Then (I think) you are saying (paraphrased), “well, if we could come up with a completely elaborated theory of how many of each type of universe comes from naturalism, then we could treat it, in conjunction with an actually observed F, as evidence.” But it is not WE who have to come up with that; it is THEY who do, if we are to grant them the claim that fine tuning has been observed (as you want to do).

So, in the end it seems your complaint is that Carrier and I&J have done Collins a grave disservice by not granting him the right to claim the impossible (observing all universes from within our one universe), or by not granting that he has done something he hasn’t (provided a fully elaborated account of how many of each type of universe occurs under naturalism).

What you seem to have missed is that I&J have demonstrated that observing F (WAP) (which we can do and have done) can never undermine naturalism (which Collins claims can be done). So “with an actually observed F” is all we really need to know. You always have to observe F to conclude FT, so it is perfectly valid to look at the results of F alone. Since it can’t undermine N, there is really no need to go any further. That you are tripping up here can be seen when you write P(N|FT&F&L). What is F doing there? F is a prerequisite for FT, right? So it isn’t adding anything, and it seems to betray an incorrect idea that FT is something you can observe directly within a single universe.

  (Quote)

Tony Hoffman January 8, 2011 at 7:30 am

Einniv, your comment seems exactly right to me, and I think it’s the best explication of why I think the fine tuning problem doesn’t exist.

On the other hand, I have never heard an argument for why fine tuning must be explained that makes any sense to me — the survival of the firing squad analogy, etc., all seem to suffer from the same misapprehension you’ve explicated here.

I remember Luke interviewed that (English?) physicist who thought fine tuning needed an explanation, but I thought he basically asserted that it needed an explanation along the same lines that einniv has shown to be fallacious. Do any non-theists out there think that einniv’s analysis is wrong-headed, and if so, why?

  (Quote)

Eric January 8, 2011 at 8:38 am

Tony Hoffman –
I remember Luke interviewed that (English?) physicist who thought fine tuning needed an explanation, but I thought he basically asserted that it needed an explanation along the same line that einniv has shown to be fallacious.

He was committing more fallacies than that. His analogies all suffered from the same problem: practically all of them described situations where there was “intrinsic value” to the results achieved. Without “intrinsic value to the universe,” life just seems to be the winning phenomenon in the “universe lottery” (given that naturalism means these parameters came about by chance and are of low probability). And if someone wins the lottery, chance is likely the best explanation. The typical response of suggesting an “independently specified target” shows nothing special about the phenomenon of life as opposed to any other possible phenomenon that could exist in any other possible universe. Life has a predetermined “independently specified target” for given physical constants, but so does any person who wins a lottery; they too each have an “independently specified target” in the form of a lottery number. What makes the case of fine tuning more like his analogy of the person picking the winning lottery ball, and less like the analogy of the “universe lottery”? The winning ball has “intrinsic value” to the game because the game is played with the purpose of winning. So once again we ask: what is the intrinsic value of life to the universe?

Note: although the article did not explicitly respond to that specific argument, it responded to an argument of Victor Stenger’s that made basically the same point. Also, the fact that a lottery was used in both analogies may be a bit confusing.

I’m wondering if this issue can be stated in a Bayesian context.

  (Quote)

einniv January 8, 2011 at 8:47 am

I’m also wondering if someone can answer something else for me.

The whole notion of using a probability distribution derived from a theory, checking it against evidence to come to a conclusion, and then plugging that conclusion back in as evidence to evaluate the original theory seemed rather fishy to me. Is that even OK? I was going to think about it or find out, but then I realized that he was definitely wrong in saying you could directly observe FT from within a universe. And since you always must observe F to conclude FT, his objection to using just F was definitely wrong too. So I didn’t bother to find out.

  (Quote)

einniv January 8, 2011 at 9:17 am

Wow. That hurts my brain to think about. There is something wrong with it, but is it basically what I already said? Using F twice? Just circular logic, since we’ve assumed N? I give up on that bit; glad there was an easier way out.

  (Quote)

Yair January 8, 2011 at 11:56 am

Eric,

I think the biggest problem is that Fine Tuning seems to hold no challenge to the Naturalist model. Nothing in the naturalist model seems to suggest that Fine Tuning would ever be unexpected/expected. If it is merely the prior intuition of physicists from previous models that small changes would only affect qualitative properties at critical points, then observed fine tuning under the standard model should be a challenge to those assumptions under those previous models, not to Naturalism.

Well, under Bayesianism the situation you describe (i.e. “Fine Tuning would ever be unexpected/expected”) translates into a middling probability. Therefore, if one could demonstrate that fine-tuning is probable under theism then fine-tuning would indeed pose a challenge to naturalism as it would lower its relative probability.

However, I think the Bayesian analysis is deeply mistaken. Not knowing does not mean the probabilities are equal, it means that there is no probability. The ability to construe or find theories that fit prior data is not impressive, too – there is a difference between being likely under a theory and explaining the datum.

As for the physicist’s intuition – the problem here is that “naturalism” is ill-defined. Under certain construals of what “naturalism” means these are the kinds of models that are at least the most likely naturalistic models, so this intuition carries some weight in this manner. Regardless, a more thorough analysis is certainly needed.

  (Quote)

Yair January 8, 2011 at 1:12 pm

einniv:

fine tuning is inherently a statement about a probability distribution. That the constants are 1, 2 & 3 don’t tell us anything unless evaluated against the entire set of possibilities. … Assuming naturalism there is a set of possible universes. … we cannot drill down in to each universe and, from within that universe, decide if it is fine-tuned (since you can’t observe all universes from within a single universe).
Because of this it is simply not possible for fine-tuned to be evidence since evidence is something you observe

No, no, no.

Think about it – fine-tuning isn’t something theologians made up. This is a real (if contested) physical finding, found by experiments done in our own universe. Not any other universe. It hence has to be something we can point to from our universe, right?

Fine-tuning is the find that small perturbations in the constants of the laws of nature result in universes unsuited for life. It is enough to know constants 1,2 &3 to make this observation. You don’t need to know the probability distributions. Indeed, we don’t know the distributions, and even if we did it wouldn’t change the finding. (I doubt there even are distributions to be known, but that’s another matter.)

The theist wants to say that fine-tuning implies that life-friendly (F) universes are rare amongst naturalistic universes. That requires a whole suite of further assumptions, which is part of the problems of the fine-tuning argument. But even if you allow these further assumptions, it is important not to get confused –

(a) Rarity (P(F|N)) is not equivalent to fine-tuning. It is possible for life-friendly universes (F) to be common yet for fine-tuned ones (FT) to be rare.
(b) Anthropic (F) laws are not equivalent to fine-tuning. A naturalistic universe with life (L) is necessarily life-friendly (F), but it is not necessarily fine-tuned (FT).

I’m also wondering if someone can answer something else for me.The whole notion of using a probability distribution derived from a theory, checked against an evidence to come to a conclusion, and then plugging that conclusion back in as evidence to evaluate the original theory seemed rather fishy to me. Is that even OK?

Well, that isn’t how Bayesianism works.

As I&J constantly emphasize, the probability distribution is Your subjective assessment of how likely a theory is. It isn’t derived from a theory; it is derived from Your subjective opinions (priors). What is derived from the theory is the probability, under that theory, of a certain piece of evidence. So the Bayesian recipe is to:

1) Start with a prior probability distribution about the likelihood of theories derived from your subjective opinion (perhaps filtered through past Bayesian steps); P(H1), P(H2)…

2) Check how much a given new piece of evidence, E, is likely under each of the relevant theories; P(E|H1), P(E|H2),…

3) Plug this likelihood P(E|Hn) to re-evaluate the probabilities of the various theories; P(H1) -> P(H1|E), P(H2)->P(H2|E),…
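The three steps above can be sketched in a few lines (a minimal illustration; the priors and likelihoods are made-up numbers, not anyone’s actual assessments):

```python
def bayes_update(priors, likelihoods):
    """Step 3: turn priors P(Hn) and likelihoods P(E|Hn) into posteriors P(Hn|E)."""
    unnorm = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(unnorm.values())
    return {h: v / total for h, v in unnorm.items()}

# Step 1: subjective priors over two rival theories.
priors = {"H1": 0.5, "H2": 0.5}
# Step 2: how likely the new evidence E is under each theory.
likelihoods = {"H1": 0.9, "H2": 0.3}

posteriors = bayes_update(priors, likelihoods)
print(posteriors)  # H1 rises to about 0.75, H2 falls to about 0.25
```

The output of one update can be fed back in as the priors for the next piece of evidence, which is the “perhaps filtered through past Bayesian steps” part of step 1.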

Now, is this process OK? I’d say that in general, no, but in practice it may be. There are many problems with Bayesianism. The most fundamental, I think, is that it isn’t suited to analyzing situations where probabilities aren’t the right tool, such as ignorance or abnormal hypothesis-relation structures. The most salient, I think, is that it doesn’t take into account other properties of the theories in question, such as being ad hoc, their simplicity, and so on.

  (Quote)

Yair January 8, 2011 at 1:23 pm

Tony Hoffman,

Einniv, your comment seems exactly right to me … … Do any non-theists out there think that einniv’s analysis is wrong-headed, and if so, why?

Well, I’m certainly a non-theist. Quite a vehement one, really.

With all due respect to einniv, I hope my above post convinced you he’s mistaken about this. Fine-tuning is a fact observed from our very own universe, unrelated to any others. People who say that fine-tuning can’t be observed, or that it is 100% likely to be true in a naturalistic universe, simply don’t understand what fine-tuning is.

I have never heard an argument for why fine tuning must be explained that makes any sense to me

Well, it’s nice to have explanations for any and all things, and in particular for why the fundamental laws of nature have the features that they have. In this sense a finding of “coarse tuning” would carry the same demand for an explanation, but it’s still a good argument for why fine-tuning must be explained, IMHO.

However, more specifically, the collective experience of physicists indicates that highly uniform models, like the fundamental field theories discussed here, have the same qualitative behavior when you make small adjustments to the constants (away from critical points, phase transition lines, and so on). This contrasts with the qualitatively different behavior in this one important aspect (being life-friendly, L) for the particular theories being discussed. It seems strange that this particular qualitative property is so sensitive, and I do believe it calls for an explanation.

  (Quote)

Yair January 8, 2011 at 1:37 pm

einniv,

You always have to observe F to conclude FT so it is perfectly valid to look at the results of F alone. Since it can’t undermine N there is really no need to go any further.

No.

Consider, for the sake of example, that all naturalistic L universes are F (WAP) but that NONE are FT; and furthermore, that SOME theistic universes are FT (and therefore F, as you note) and L. As I&J note, taking F as evidence will lead to an increase (or at least not a decrease) in the probability of naturalism. However, noting FT will lead to the probability of naturalism dropping to 0 and that of ~N rising to 1. FT is not the same as F, and the results of factoring it in depend on Your judgments of how likely FT is under naturalism or supernaturalism; it is definitely not all right to stop at F.
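Putting hypothetical numbers on that scenario (the 0.8 and 0.5 below are my own arbitrary choices for the non-naturalistic likelihoods, not anything Yair or I&J specify) shows the difference between conditioning on F and conditioning on FT:

```python
def posterior_N(prior_N, lik_N, lik_notN):
    """P(N | evidence) for the two-hypothesis partition {N, ~N}."""
    num = prior_N * lik_N
    return num / (num + (1.0 - prior_N) * lik_notN)

prior_N = 0.5

# Observing F: in the stipulated scenario all naturalistic L-universes
# are F, so P(F|N&L) = 1; suppose P(F|~N&L) = 0.8.
after_F = posterior_N(prior_N, 1.0, 0.8)   # naturalism goes UP

# Observing FT: NO naturalistic universe is FT, so P(FT|N&L) = 0;
# suppose P(FT|~N&L) = 0.5.
after_FT = posterior_N(prior_N, 0.0, 0.5)  # naturalism drops to zero

print(after_F, after_FT)  # roughly 0.556, and exactly 0.0
```

Stopping at F would report evidence favoring naturalism; carrying on to FT reverses the verdict entirely, which is the point of the example.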

  (Quote)

snafu January 9, 2011 at 3:58 am

I’m still not buying the Yair side of this debate.

Fine-tuning is the find that small perturbations in the constants of the laws of nature result in universes unsuited for life. It is enough to know constants 1,2 &3 to make this observation. You don’t need to know the probability distributions. Indeed, we don’t know the distributions, and even if we did it wouldn’t change the finding. (I doubt there even are distributions to be known, but that’s another matter.)

How can we make any meaningful statements about probabilities if we don’t know anything about the distribution we’re drawing from? (even that the distribution exists???)

If you say “fine-tuning has occurred”, this is a statement that’s inherently about probabilities, and you must be assuming some distribution function, perhaps one of the form

f(x) = uniform[-inf , +inf].

And hence, the probability that (say) constant C = 1.23456789 is very small.

(Yes, I’m perfectly aware that f isn’t a valid density function, but it’s the kind of thing people informally carry around in their minds).

The problem is an epistemic one: can we really assign probabilities based on pure ignorance? My gut feel is to be very cautious of this. Google ‘principle of indifference’ for more.

This is where Jefferys comes in: there is one appropriately-conditioned probability we can say something about. Under naturalism, the probability of observing life-permitting constants is exactly one.

——–

Note that someone mentioned the firing-squad analogy above as well. It’s beloved of Swinburne, Leslie and Luke Barnes and it’s relevant to point out the disanalogy here.

We have background knowledge of firing squads: their aims, effectiveness, training, efficiency and the likelihood of dying if you end up in front of one. Translated: we have a probability distribution to draw from!

We have no knowledge about constants. They’re constants – just that.

  (Quote)

snafu January 9, 2011 at 4:17 am

—-

On the other hand, I have never heard an argument for why fine tuning must be explained that makes any sense to me — the survival of the firing squad analogy, etc., all seem to suffer from the same misapprehension you’ve explicated here.

I remember Luke interviewed that (English?) physicist who thought fine tuning needed an explanation

—–

Oh…go on…one more comment about firing squads, as it never seems to go away. This is from Sober (can’t remember the exact reference, sorry).

The firing squad is a disanalogy for the reasons outlined in the comment above. A better analogy is this:

Imagine you wake up in a room with 5 other people. You’re survivors of some generic “execution mechanism” about which you have no details: its reasons, background or effectiveness. You know there have been at least 5 attempted executions, but the total number could be literally unlimited; say 10^40* if we want to pick an arbitrarily large number.

Note that this is just like the constants. No knowledge about where they come from!

Do you now have evidence that your survival was rigged? I think most people would say no.

Now imagine 4 people. Still no.
Now 3.
So why is it any different for one?

————-

* 10^40 See the McGrews on the resurrection. Bad joke over.

  (Quote)

Tony Hoffman January 9, 2011 at 8:48 am

Snafu, yeah, I agreed that the firing squad analogy is a deception — that’s why I said it didn’t make sense to me.

I believe, though, that there are some non-theists who believe that fine tuning needs an explanation. What I was most curious about was whether any non-theists could present an argument (that wasn’t obviously flawed, like the firing squad) for why it is that fine tuning is a problem for naturalists. Right now, it just seems like a non-sequitur, like saying: “You know what’s a problem for Naturalism? Snow.” I just don’t get it.

  (Quote)

Yair January 9, 2011 at 12:27 pm

snafu,

I’m still not buying the Yair side of this debate.

La sigh. I don’t get it – it seems so… simple, yet I apparently can’t get you folks to agree on it.

Let me try again. You are definitely correct that

We have no knowledge about constants. They’re constants – just that

We have no information about the probability distribution of the constants, if such a thing can be defined. We do have information about the values of the constants in our theories. We also have information about the structure of these theories; where the constants fit in.

What is the fine-tuning find? What is the actual, empirical, finding? It is that if you take the same theories but use slightly different values for the constants, you don’t get life, and often indeed get virtually nothing complex at all. That’s the actual fact of the matter about what is found.

Note that there is no concern here about what values are probable or possible. This is purely a mathematical exercise that doesn’t take such considerations into account.

Now, this is the empirical finding. This is not how it’s necessarily interpreted; people can misunderstand it. For example,

How can we make any meaningful statements about probabilities if we don’t know anything about the distribution we’re drawing from?

Statements about the probabilities of the constants, or of universes with them? You can’t. You need to make a further assumption about their probability distributions to talk about how likely these values are. And you’d better explain what these probability distributions are supposed to represent, too – if they’re just your subjective opinion about what is possible under naturalism, then it doesn’t really seem like they’re very meaningful.

But fine-tuning does not, by itself, require this further assumption. It is an empirical find about our own current physical theories, one that is correct regardless of what other universes exist or can, in any sense, exist.

If you say “fine-tuning has occurred”, this is a statement that’s inherently about probabilities and you must be assuming some distribution function,

Perhaps this is a linguistic problem?

Saying “fine tuning has occurred” implies that “fine tuning” is an action, something someone has done. This is not the scientific finding, despite its name. The actual find is that the theory is sensitive to small changes. It is a passive statement about the fact that the theory is unstable. Fine-tuning has not occurred, it was observed that the theory is fine-tuned.

perhaps one of the form f(x) = uniform[-inf , +inf]. And hence, the probability that (say) constant C = 1.23456789 is very small.

Something like this is indeed apparently what the theist assumes. But it’s important to understand that this isn’t really the scientific find. At all.

there is one appropriately-conditioned probability we can say something about. Under naturalism, the probability of observing life-permitting constants is exactly one.

Yes. But this says nothing about the probability of finding that life would not be possible under the same physical theory with slightly different constants. Nothing at all.

  (Quote)

Eric January 9, 2011 at 5:01 pm

@Yair
It sounds more like you are saying that Fine Tuning is rare among cosmological models. But it is certain under the standard model. So this may be a relevant question: Does every possible naturalistic universe have to conform to the same cosmological model?

  (Quote)

Yair January 10, 2011 at 5:03 am

Eric,

It sounds more like you are saying that Fine Tuning is rare among cosmological models. But it is certain under the standard model. So this may be a relevant question: Does every possible naturalistic universe have to conform to the same cosmological model?

That’s an interesting way of putting it.

I don’t know if fine-tuning for life is rare among cosmological models, but I think our experience with other (mostly toy-) models does seem to indicate that at least complexity is. As for your question, I believe the answer is clearly “no” – one can imagine naturalistic universes that don’t conform to the standard cosmological model, and indeed ones without universal laws of nature at all.

  (Quote)

Eric January 10, 2011 at 4:30 pm

Yair –
I believe the answer is clearly “no” – one can imagine naturalistic universes that don’t conform to the standard cosmological model, and indeed ones without universal laws of nature at all.

Then I believe I have an answer to your earlier question:


He won’t produce an explanation for why fine-tuning would be observed in a naturalistic universe.

1. All naturalistic universes must conform to the Standard Model of Cosmology.
2. Under the standard model of cosmology, a universe is life friendly if and only if it is also fine tuned.
3. If a universe is not life friendly, no one will be around to observe it.
4. From 1, 2 and 3: If a naturalist universe is not fine-tuned, no one will be around to observe it.
5. We observe a universe
6. From 4 and 5: Because we observe a universe, it must therefore be fine-tuned

The point is, as soon as we stipulate that all naturalistic universes must conform to the Standard Model, any discovery that follows necessarily from the standard model must have a prior probability of 1 for naturalistic universes. Maybe I am misunderstanding some part of Bayes Theorem but it seems as obvious to me as saying:
“If a reality exists that follows mathematical axioms then the prior probability that 2 + 2 = 4 is 1.”
Now this is a purposely oversimplified example meant to show that any discoveries that MUST follow from a certain model or set of axioms must have prior probability of one, since they MUST follow.
Now I know this is not what Richard Carrier is saying, but it may follow from what he’s saying.

  (Quote)

einniv January 15, 2011 at 9:01 pm

(I’m not sure why the preview is butchering the formatting and showing some lines as small. Hopefully it is still readable)
Where we have differences seems clear, but I want to see where we agree. I want to see if we all agree with what I&J call their main theorem, as I lay it out below. In other words, forget all the other stuff at the I&J link and forget fine-tuning for a minute and look just at their main theorem. I may have more to say on the other stuff but I want to start here. Does anyone disagree with any part of what follows? Are any of the definitions, assumptions, logical arguments or conclusions in the theorem incorrect or objectionable?

————
I&J Main Theorem
Given the existence of life, and the weak anthropic principle (WAP), the discovery that the universe is life friendly can never undermine naturalism.

Definitions & Assumption
Life Friendly (F): A universe is life friendly if and only if the conditions in that universe (laws of physics, values of constants if relevant, etc) are such that life can arise through the operation of naturalistic laws

(L): The universe exists and contains life

(N): The universe is governed solely by naturalistic law

WAP: L cannot be true in a universe that is N unless that universe is F
or N&L ==> F (‘==>’ means logical implication)
or P(F|N&L) = 1

0 < P(F|L) < 1: an assumption that the probability of F after observing L is greater than 0

Theorem
P(N|F&L) = P(F|N&L) P(N|L) / P(F|L) (Bayes’ Theorem)
= P(N|L)/P(F|L) (from definition of WAP where P(F|N&L)=1)
>= P(N|L) (from the assumption that 0 < P(F|L) < 1)

Conclusions
P(N|F&L)>=P(N|L) shows that observing F can never undermine N and can possibly support it.

Corollary: Since P(~N|F&L)=1-P(N|F&L) and similarly for P(~N|L), it follows that P(~N|F&L)<=P(~N|L). In other words, the observation F does not support supernaturalism (~N), and may well undermine it.
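The inequality in the Conclusions can be checked with a few lines of arithmetic. A minimal sketch (the prior P(N|L)=0.4 and the P(F|L) values are arbitrary illustrative choices, not anything taken from I&J):

```python
# Numeric check of the I&J main theorem: with P(F|N&L) = 1 (the WAP)
# and 0 < P(F|L) <= 1, Bayes' theorem forces P(N|F&L) >= P(N|L).
def posterior_N(p_F_given_NL, p_N_given_L, p_F_given_L):
    """P(N|F&L) via Bayes' theorem, conditioning everything on L."""
    return p_F_given_NL * p_N_given_L / p_F_given_L

p_N_given_L = 0.4                      # arbitrary prior for N given L
for p_F_given_L in (0.5, 0.8, 1.0):    # must be >= 0.4 here, since P(F|N&L)=1
    p = posterior_N(1.0, p_N_given_L, p_F_given_L)
    assert p >= p_N_given_L            # observing F never undermines N
    print(p_F_given_L, round(p, 3))
```

Dividing the prior by a number no greater than 1 can only hold it steady or raise it, which is the whole theorem in one line.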

  (Quote)

einniv January 15, 2011 at 10:42 pm

Fine Tuning (FT) and Life Friendly (F)

Assumptions
Fine Tuning (FT): for the sake of argument only, let’s say FT is observable without being specific about how that might be done, or even what it means to observe it. Just that it is observable and possibly (but not definitely) independent of F

Life Friendly (F): A universe is life friendly if and only if the conditions in that universe (laws of physics, values of constants if relevant, etc) are such that life can arise through the operation of naturalistic laws

By subset I mean the standard meaning of “wholly contained within”.

Anything that is true for all members of a set is true for all members of any subset

Assume FT universes are a subset of F universes
This is what I was arguing. That FT is in fact not independent of F. Not surprisingly, if you reject my definition of fine-tuning, you might be able to disagree with this somehow but, let’s assume it is true.

1)Universes where FT is observed are a subset of universes where F is observed (assumption)
2)observing F can never decrease the probability of N (given L & WAP) (i.e. in every universe where F, L & WAP are observed, the probability of N will not have been decreased) (by I&J’s Main Theorem)
3)observing FT can never decrease the probability of N (Anything that is true for all members of a set is true for all members of any subset)

Conclusion
If you are arguing that observing FT can change the outcome of I&J’s main theorem then, you are arguing that universes where FT is observed are NOT a subset of universes where F is observed (or that I&J’s main theorem is wrong somehow).

Comments
I agree with you that there can be F universes that aren’t FT. This is true whichever definition of FT you accept (mine & I&J’s or yours). But it has no relevance. My argument wasn’t based on that not being true. So, unless you disagree with my conclusion above, what would a universe where FT is observed but F isn’t look like? How can there be such a thing? Certainly not in your definition of FT. You say:

Fine-tuning is the find that small perturbations in the constants of the laws of nature result in universes unsuited for life.

Small perturbations from what? Perturbations from values where the universe is suited for life (i.e. from where the universe would be Life Friendly (F)). You could still find that you are in a universe that is neither F nor FT, or a universe where F is true but not FT; but, by your own definition, you could never find a universe that is FT but not F! To say it another way, if a universe is not F then there is nothing to have small perturbations from (be sure to re-read the definition of F if this isn’t making sense).

Your concerns about what is being “ignored” do not seem well founded.

  (Quote)

einniv January 15, 2011 at 10:53 pm

Small error:

0 < P(F|L) < 1 should instead be 0 < P(F|L) <= 1

  (Quote)

Yair January 16, 2011 at 1:23 am

einniv,

I have no quarrel with your first part, although I’d note that treating all non-naturalistic (or, for that matter, naturalistic) theories as a single group is unreasonable. But I’m willing to work with that. The point where I think you’re in error is point (3) in your second post; I don’t see how it follows.

I want to offer a counter-example. Remember that Bayes Theorem is a theorem in probability theory. So it’s really best to have a concrete example before us. Let us then, for the sake of the counter-example, consider the sample space as consisting of 100 theories on reality, 50 of them naturalistic (N) and 50 of them not (~N). Let’s just imagine that these cover all the theoretical possibilities we can think of. Furthermore, let’s make the simplifying (and arbitrary) assumption that we a priori consider all equally probable, i.e. the prior probability of each is 0.01. In this case we begin indifferent to the main question, P(N)=0.5.

Now, let’s say that half these universes, from both groups equally, have life (L). So there are 25 naturalistic universes with life and 25 non-naturalistic universes with life. Clearly, P(N|L)=0.5.

Now let us further consider life-friendly universes, F. By the WAP, P(F|N&L)=1, i.e. all 25 life-bearing naturalistic universes are also F. Let us assume, for the sake of example, that only 10 [less than half] of the L non-naturalistic universes are life-friendly. Now we have a naturalism-to-non-naturalism ratio of 25:10 within life-friendly and life-bearing universes, or in other words upon learning that F&L we can conclude that P(N|F&L)=0.71>P(N|L). This is in accordance with I&J’s main theorem; indeed I’ve chosen the numbers such that the probability for naturalism is substantially increased upon learning that F.

Now let us consider fine-tuning, FT. I grant you your assumption (1), i.e. that fine-tuning is only observed in universes that are also life-friendly. Note that we’re already only considering universes that are F&L, so are in particular F, so this assumption is not relevant. All it means is that there won’t be FT except within the sample we’re already considering (except perhaps some F universes that aren’t L), but we don’t really care about those universes for the rest of the analysis.

However, let us now for the sake of the counter-example make the following assumptions about how FT is spread amongst our F&L universes:
a) There is only one naturalistic universe that is FT amongst the 25 considered ones.
b) Almost all (9 out of the 10 considered) non-naturalistic universes are FT.

Under these assumptions, there are only 10 FT&F&L universes, each with equal probability. Only one of these is naturalistic, so the probability of naturalism is only 1 in 10, i.e. P(N|FT&F&L)=0.1. This is a sharp decline from the P(N|F&L)=0.71 we had before. So clearly observing FT can lower the probability of naturalism, even though FT universes are a subset of F universes.
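The counts in this counter-example can be replayed mechanically. A minimal sketch using exact fractions (all the numbers are the illustrative ones assumed above, not real estimates):

```python
from fractions import Fraction

# Replay the counter-example by direct counting over 100 equally
# probable theories: 50 naturalistic (N) and 50 not (~N).
N_with_L, notN_with_L = 25, 25   # life-bearing universes in each group
N_FL, notN_FL = 25, 10           # of those, life-friendly (WAP: every N&L is F)
N_FT, notN_FT = 1, 9             # of the F&L ones, fine-tuned

p_N_given_L    = Fraction(N_with_L, N_with_L + notN_with_L)   # 1/2
p_N_given_FL   = Fraction(N_FL, N_FL + notN_FL)               # 25/35 = 5/7
p_N_given_FTFL = Fraction(N_FT, N_FT + notN_FT)               # 1/10

print(p_N_given_L, p_N_given_FL, p_N_given_FTFL)
```

Learning F raises P(N) from 1/2 to 5/7 (about 0.71), but learning FT then drops it to 1/10, even though the FT universes are a subset of the F ones: the subset relation alone does not fix the direction of the update.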

Of course, the reason we got a lowered probability for naturalism is that we made the pair of assumptions we did about how FT is spread amongst the relevant naturalistic and non-naturalistic theories. We could get opposite results with other assumptions. Hence my emphasis on the importance of arguing for the likelihood of fine-tuning under both theism and atheism (the two relevant theories that are really being considered here; no one is seriously considering all “non-naturalist” theories!).

There are also the concerns that the entire Bayesian analysis is inapplicable here, or that a more sophisticated one is called for. But within this level of analysis at least, considering fine-tuning requires making assumptions about how likely FT universes are within F&L universes under scientific-naturalism and under monotheistic-theism.

  (Quote)

einniv January 16, 2011 at 8:41 am

Hopefully one thing we can all agree on is that the I&J webpage is a mess. I get the impression it is essentially a USENET archive. More structure would make it much easier to discuss.

The point where I think you’re in error is point (3) in your second post; I don’t see how it follows.

That’s fine. I think it is wrong too. I was going somewhere with it, but you went in another direction, which is fine. The proper 3 is simply “when FT is observed the prior and necessary observance of F (given L & WAP) cannot have undermined N”.

Let us then, for the sake of the counter-example, consider the sample space as consisting of 100 theories on reality, 50 of them naturalistic (N) and 50 of them not (~N)

and

Now, let’s say that half these universes, from both groups equally, have life (L).

You backed up too far again. We know L is true. You are trying to assume prior probabilities for the situation where we don’t know what we actually do know. The only way you can do that is if what we do know logically implies (retrodicts, to use I&J’s term) something about those probability values. So, regarding N vs ~N, you can’t do that. From I&J, the first instance that says ‘added 010612′:

Please remember that if You are a sentient observer, You must already know that L is true, even before You learn anything about F or P(F|N). Thus it is legitimate, appropriate, and indeed required, for You to elicit Your prior on N versus ~N conditioned on L and use that as Your starting point. If You then retrodict that P(~N)<<1 as a consequence, all You are doing is eliciting the prior that You would have had in the absence of Your knowledge that You existed as a sentient observer. This is the only legitimate way to infer Your value of P(~N) unconditioned on L.

So you can’t do the calculations you are doing, since they rely on an invalid way of arriving at P(N)=0.5 and P(~N)=0.5. All is not lost. I think you can make your argument without doing that. I’ll add more later but I need to sign off for now.

(more later)

  (Quote)

Yair January 17, 2011 at 12:48 pm

An interesting argument against fine-tuning has appeared in the physics preprint archives,

http://arxiv.org/pdf/1101.2444v1

It essentially argues that the cosmological constant is not fine-tuned to maximize life, because other values will produce more life; specifically, a small negative value is desirable. There appear to be some holes in the argument, but it’s a direction of thought I haven’t considered before.

Interestingly, the writer is a theist and writes that while this is evidence against fine-tuning for life it is, for him, evidence for a god that wants to create multiverses… Truly, any and all evidence can point to some kind of god…

  (Quote)

Luke Muehlhauser January 17, 2011 at 5:52 pm

Yair,

Great link!

  (Quote)

einniv January 18, 2011 at 7:43 am

There was a more subtle error in my comment on “Fine Tuning (FT) and Life Friendly (F)”. Unfortunately a blunder in step 3 obscured it (in fact that whole proof was unnecessary and I don’t remember why I thought otherwise). The more subtle problem was that I wasn’t using F in the way I defined it in the assumptions. Under that definition F is the laws and actual observed values of constants. It is not merely that the constants could take on appropriate values. It is that they do take on appropriate values. Because of that, the assumption that FT is a subset of F is not warranted. I was hoping that in working through why, you would realize that, despite protestations to the contrary, FT is indeed about a probability distribution, even under your definition.

The answer to my question (how can there be FT without F?) is that you could discover that the laws of physics only allow a small range of values that will produce life, but that the universe, despite having life, doesn’t actually take on those values.

So lets get slightly more formal about what you are saying FT is:

Within a universe there are laws of physics (Y)
Within each possible Y there are fundamental constants that can vary unrestricted
Given Y a set of possible universes can be generated by varying the constants
In either some, all, or none of the universes in the set it will be possible for life to arise naturalistically
If the number of universes where life can arise naturalistically is small compared to the total number of universes in the set, we say that Y is fine tuned (FT)
Or, in other words, FT means P(Life|Y) is greater than zero but much smaller than 1 (I can’t write this as an inequality because it is freezing up the comment preview).

I think this is what you have been saying Yair. Is that a fair representation?
One obvious problem with that is the question of just how much smaller than 1 we are talking about. You are wanting to treat FT as a binary variable in your discussion of how FT is distributed through the set of naturalistic universes. How can we do that? If P(L|Y1) = 0.05 and P(L|Y2) = 0.10, how can we turn around and treat them as the same thing when discussing the distribution of FT? You can’t. If you choose, say, 0.10 as your cut-off for FT, then what about all the Yx where P(Life|Y) = 0.11? Hopefully it is obvious that this is arbitrary and unworkable.
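The threshold worry can be made concrete with a toy sketch (the 0.10 cutoff and the P(Life|Y) values are the hypothetical numbers from the paragraph above, not anything measured):

```python
# Toy illustration of the threshold objection: calling a theory
# "fine-tuned" whenever P(Life|Y) falls at or below some cutoff turns
# a matter of degree into a binary label.
def is_fine_tuned(p_life_given_Y, cutoff=0.10):
    """Binary FT label under an arbitrary cutoff on P(Life|Y)."""
    return 0 < p_life_given_Y <= cutoff

for p in (0.05, 0.10, 0.11):
    print(p, is_fine_tuned(p))
```

Theories with P(Life|Y) of 0.10 and 0.11 are nearly identical, yet they land on opposite sides of the label, which is the arbitrariness being objected to.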

Now, I’m not sure if that refutes or supports what you are saying ;-) . I think it is definitely contrary to some of what you have said during the thread. Specifically, that FT is something that can be observed as a binary variable. On the other hand it seems to support the assertion that the fine tuning argument, as you define it, is not amenable to Bayesian analysis. But I&J were responding to a specific argument, so that hardly seems a fair complaint.

What’s more, in the simplified case they say they are examining, where only our particular laws of physics are considered (and thus naturalism is assumed to only give rise to those laws), P(F|N) much smaller than 1 (given the existence of life as background knowledge) is equivalent to P(Life|Y) much smaller than 1. Because in that case Y and N are basically synonymous and, given life, the probability of life friendliness should be the same as the probability of life under those laws. (I’m not so sure about this last bit but it seems right to me)

  (Quote)

Yair January 19, 2011 at 1:13 am

einniv,

I’m afraid it seems we haven’t made much progress. You’re still not getting my main point – what fine-tuning is. I’ll dispense with some minor quibbles to get to that point.

So lets get slightly more formal about what you are saying FT is:
Within a universe there are laws of physics (Y)
Within each possible Y there are fundamental constants that can vary unrestricted
Given Y a set of possible universes can be generated by varying the constants
In either some, all, or none of the universes in the set it will be possible for life to arise naturalistically
If the number of universes where life can arise naturalistically is small compared to the total number of universes in the set, we say that Y is fine tuned (FT)
Or, in other words, FT means P(Life|Y) is greater than zero but much smaller than 1 (I can’t write this as an inequality because it is freezing up the comment preview).I think this is what you have been saying Yair. Is that a fair representation?

No, I’m afraid not.

Within each possible universe (i) there are certain laws of nature (Yi), which include constants (Ci). In order for them to have objective physical meaning, these constants should be phrased in dimensionless units, so let’s assume they are.

It is possible to consider an alternative universe (j) with the same laws of nature (Yi) but constants that differ from the universe’s constants in a very specific way: all the constants are the same, except for one, which is slightly altered. We’ll mark such changes as (Cij).

Now consider a universe (i) that contains life-bearing laws of nature and also life. It has the physical laws (Yi) and constants (Ci). Fine tuning is the find that (Cij) is not life-bearing. That’s all.

This is why fine-tuning is a binary property of the universe – it is true for that universe (i) that small perturbations of one constant will lead to non-life-bearing universes.

So…

“Within a universe there are laws of physics (Y)”
Yes. Universe i has laws Yi and constants Ci.

“Within each possible Y there are fundamental constants that can vary unrestricted”
We can consider imaginary universes with the same laws Yi but different constants Ci. Whether these exist or not is another question. The constants certainly cannot vary within the universe; they only “vary” in our imagination, and we can restrict the variance of the constants – or not – as we wish in our imagination.

“Given Y a set of possible universes can be generated by varying the constants”
Yep. Possible in the “we can model it” sense. Note that you are talking here about any arbitrary change in the constants, which we can mark Cik. Above I talked about the more specific set of changes: a small change to one constant, Cij.

“In either some, all, or none of the universes in the set it will be possible for life to arise naturalistically”
Yep.

“If the number of universes where life can arise naturalistically is small compared to the total number of universes in the set, we say that Y is fine tuned (FT)”
Nope. Well, some may say so, but they don’t understand the scientific finding.

Yi is fine-tuned if small perturbations of one of its constants (i.e. Cij) lead to imagined universes that are not life-bearing. It is possible that indeed the domain of life-bearing is small, i.e. that only few choices of constants Cik will result in life-bearing and life-friendly universes. However, this is not the actual find; we’re unable to conduct such calculations at the moment. Perhaps one day what you’re saying here will be established, but so far it hasn’t been.

“Or, in other words, FT means P(Life|Y) is greater than zero but much smaller than 1″
No again. And for three reasons. First, and this is the main point, the actual find is about how stable Ci is to small perturbations, not about how common life is in Cik. It’s a totally different claim than the one established.

Secondly, as I said directly above, we just don’t know if the distribution of Cik has only a few places which are life bearing. We just don’t know yet, and any claims to the contrary are premature.

But also, the distribution of life under Cik is NOT P(Life|Yi). This is because you need to take another step to reach a probability estimate: you need to establish your “probability measure” as well as your “probability space”. You need to say which values are probable a priori and which are not. The fact that you can imagine a universe with a certain set of values doesn’t mean it is really possible or merits consideration.

Finally, even if we did agree on a probability measure and did conclude that P(Life|Yi) is low, this would still be a property of the laws of nature Yi, not of our probability estimates regarding theism or atheism. You would still need to explain why under naturalism (or theism) we would expect to find ourselves with laws with this feature, in order to apply Bayes theorem. The rules of nature we find in our own universe and their properties are evidence; in order to apply Bayes theorem, you have to determine the likelihood of this evidence under the relevant hypothesis (in this case – scientific naturalism and monotheistic theism).

Yair

  (Quote)

einniv January 19, 2011 at 5:14 am

Yair,

Please understand that I do get many of the points you’ve made. I was just hoping that maybe we could all agree that the I&J stuff is valid, at least for what they were trying to do. It seems to me that in this post, however, you are just trying to do some sleight of hand. How stable Ci is to small perturbations is just another way of saying that when you vary Ci only a small percentage of the results end up being life capable. “Small perturbations” and “slightly altered” are totally ambiguous statements. If we are saying there is one and only one value for Ci that makes for a life compatible universe then, yeah, fine, maybe you could treat that as a binary variable, but that isn’t the case and that certainly hasn’t been observed or even claimed by anyone. The only way to make this claim concrete is to frame it the way I laid it out, as a probability. Otherwise it is just an empty assertion.
Anyway, I’ve enjoyed the discussion, learned some things, and generally enjoyed browsing around a new site. Thanks for the link to the physics paper.

  (Quote)

Yair January 19, 2011 at 10:54 am

I guess we’re departing with a disagreement, then, and that’s unfortunate. But I enjoyed the exchange.

I would leave a last word, however, because of that “sleight of hand” accusation. I won’t bother with a detailed reply, but I would say this – there is none.

Cheers, and have fun and wisdom.

Yair

  (Quote)

einniv January 19, 2011 at 2:39 pm

Yeah, I guess we will have to part that way. What I find frustrating is that you took the time, which I didn’t, to lay out exactly what I had in mind and still don’t see it. I guess I’ll try one more time just so you see where I am coming from.
I’ll bold the sleight of hand parts.

Within each possible universe (i) there are certain laws of nature (Yi), which include constants (Ci). In order for them to have objective physical meaning, these constants should be phrased in dimensionless units, so let’s assume they are.

It is possible to consider an alternative universe (j) with the same laws of nature (Yi) but constants that differ from the universe’s constants in a very specific way: all the constants are the same, except for one, which is slightly altered. We’ll mark such changes as (Cij).

Now consider a universe (i) that contains life-bearing laws of nature and also life. It has the physical laws (Yi) and constants (Ci). Fine tuning is the find that (Cij) is not life-bearing. That’s all.

This is why fine-tuning is a binary property of the universe – it is true for that universe (i) that small perturbations of one constant will lead to non-life-bearing universes.

If you told a physicist to look for “small” perturbations, surely the first question he is going to ask is what “small” means, right? In using “small perturbations” you are contrasting it with saying “large perturbations” or instead saying “any perturbation at all”. If you really mean to say “any perturbation at all” then I would just point out that no theist (or anyone else) is making that claim (and it certainly hasn’t been observed) and call it a day. Certainly “small” implies something like this: cii = life compatible, cij = life compatible, but cik, cil, cim, cin, cio, cip = not life compatible. Or think about if it were reversed; k-p are life compatible but cii & cij aren’t. If you start on cik and “slightly alter” it to cij then you get no life. Is that fine tuned? After all, most states will produce life, right? I’m sure you don’t mean to say that fine tuned just means that all states of ci have to be life compatible, right? So, for “small” to have any kind of objective meaning, you must be saying that the ratio (number of states where ci is life compatible)/(total number of states of ci) is small. In other words, a probability! Or P(ci is life compatible) < 1 (much less, damn comment preview). But this will never be binary; it will be a matter of degree. You might add cik to the life bearing ones in my first example but still call it fine tuned.

Ok. So there is that.
=================
When I was talking about only using our physics, my point was this. I&J did that because theists say that is what they want. They think considering a different law of physics is cheating. So they gave them their best case and looked at that. Once you do that, you have to be considering naturalism where only our current laws can exist. It is just part of giving them what they asked for. My point about P(F|N), then, is: given only one set of possible laws, and that life exists, P(F|N) is equivalent to the ratio I discussed above, (number of states where c_i is life compatible) / (total number of states of c_i), or P(c_i is life compatible) ≪ 1. This is because we don't have to worry about the cases where life didn't actually arise despite it being life compatible, or the fact that the ratios might be different if we could consider different sets of laws.
So all that is just to say: for what they are looking at, P(F|N) ≪ 1 is the proper way to model even your definition of fine tuned. Now, I doubt you'll agree, but I am going to leave it at that this time.

Thanks again for the discussion. Take care.


einniv January 19, 2011 at 3:36 pm

Since this thread is like crack for the mind, and I think you may answer again, I thought I would ask for a different approach. Imagine this fantastical hypothetical:

We both are very very rich and have a huge team of physicists at our disposal. And racks of computers faster than anyone can imagine right now.

You write me with some very exciting news. Your team has reduced all of physics down to one equation with exactly one constant that can vary from 0 to 1. They’ve also calculated the value of that constant for our universe. (It’s 0.51, but that’s not important ;-). And they’ve written a program to model that one law over billions of years.

You have some bad news though. You tell me, unfortunately for naturalists, we have discovered this law is fine tuned for life. You ask me to confirm your finding.

What specifically will I be looking for with respect to this one constant?
Keep in mind, if you tell me to look at what happens with “small perturbations” or to “slightly alter” it, I am going to ask you what that means exactly. What specifically am I looking for? I’m going to ask you to be precise about what “small” means; otherwise I can’t confirm your statement. And I’m going to ask you whether your definition of “small” is subjective and arbitrary, or whether it has some objective meaning.


Yair January 20, 2011 at 2:28 am

Hello einniv,

I was going to leave this thread, but since you implied you would like my response I will answer your question. In your imagined scenario I would, frankly, tell you to just say what I said to the physicists. As a (real-world) physicist, I assure you they would know what I mean. Physicists have been doing perturbation analysis for centuries, and I’m confident any physicist would understand what the relevant scale is and what I mean by “small”. There is really no need to be more specific. The only complication is that we’re dealing with a non-linear phenomenon, so standard linear response theory won’t work; but this too will be perfectly understood by any physicist that you put on the problem.

We can get more formal and precise, if you want. In your simple example, the scale is set by the value of the one constant, X=0.51, which is on the scale of 1. “Small” implies two orders of magnitude smaller, so a perturbation on the scale of 0.01. Your scientists will therefore look to see whether such deviations from X lead to non-life. This is a qualitative test, so if they discover that, say, X>0.54 or X<0.46 rules out life, the statement “small perturbations lead to non-life-bearing universes” would still be correct; if they discover that even smaller perturbations still ruin life, the statement would also still be correct; if they discover that only large perturbations (say, on the scale of X itself, e.g. X<0.1 or X>0.9) ruin life, they’ll say my statement was in error; if they discover that very small perturbations (say, X within 0.51±0.001 but excluding X=0.51±0.0001) indeed ruin life but that beyond this area around the actual X all regions are supportive of life, they would triumphantly proclaim my analysis to have been insufficient – they’ll agree that relatively small perturbations can ruin life, but say that this is an unstable point in an otherwise life-bearing law of nature; if they discover that there is a fractal or non-analytic structure to life-bearing universes across X∈[0,1], they’ll likewise conclude that a perturbative treatment is inapplicable.
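The qualitative test Yair describes can be written out as a short sketch. The life-bearing predicate below is a hypothetical stand-in; only the logic of "do perturbations on the small scale ruin life?" is being illustrated.

```python
# Sketch of the perturbation test: scan deviations from X0 on the
# "small" scale (two orders of magnitude below X ~ 1) and report
# whether any of them lands in a non-life-bearing region.

def is_life_bearing(x):
    # Hypothetical predicate: life only for X within +/- 0.03 of 0.51.
    return abs(x - 0.51) < 0.03

def small_perturbations_ruin_life(x0=0.51, scale=0.01, steps=10):
    # Try perturbations of 1x, 2x, ... 10x the small scale, both ways.
    for k in range(1, steps + 1):
        delta = k * scale
        if not is_life_bearing(x0 + delta) or not is_life_bearing(x0 - delta):
            return True  # some small perturbation leads to non-life
    return False

print(small_perturbations_ruin_life())  # True for this toy predicate
```

Note this returns a verdict only for the scanned scale; the edge cases Yair lists (instability points, fractal structure) would need a finer or adaptive scan, which is exactly his point about perturbative treatments having limits.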

And this is just with the simplistic toy example….

In short, concepts like “small perturbations” convey particular meanings within the physicist’s discourse, which we are referring to here. They convey the idea that there is a relevant scale that is demonstrably set by the problem for any physicist considering it. They convey the idea that the quantity is continuous on the relevant scales, so that it can be treated as a real number. They convey the idea that it is useful to discuss small perturbations, so that the phenomenon too is “nice” enough – not some non-analytic beast that behaves differently at arbitrarily close points. And so on.

There is really no need to delve into all of that. It is enough to keep the particular problem in mind, and understand that the physicists say that surprisingly-small changes to the value of each constant seem to lead to non-life-bearing universes. It is important to note that they also say that (a) it is difficult to impossible to evaluate different combinations of changes in the constants, or other constants or laws of nature, (b) it is impossible to know what changes or combinations of changes make sense within a more fundamental theory and which ones are just mathematical models with no physical or rational validity, and (c) this sensitivity to small changes is a mathematical property of the laws and values that we established experimentally, irrespective of the above two points.

Point (b) is important to understand why the distribution doesn’t matter. Suppose we discover the Theory of Everything (TOE), which has no free parameters whatsoever and which shows that the values of the constants we now have in our effective theory are necessary – it doesn’t make sense to talk about other constants; they are physically impossible. I don’t believe it will work out that way, but suppose. Then according to your interpretation it would imply the scientists were wrong to find fine-tuning. But that’s false. Point (c) would still hold: the effective rules we do have now would still be fine-tuned. The TOE might be able to furnish an explanation, by mathematical derivation, for why the effective laws of nature at our current levels exhibit this mathematical property, or it might not – but it won’t erase this mathematical property.

To return to your toy example, suppose you come up with an alternative theory without any free parameters at all. And you show me this result, and how it leads to my theory with X=0.51 as an effective theory in some limit. I can still ask – yeah… but why does life only exist around X=0.51 in my theory? I can still look to other values of X even when I know they are not physically admissible under your theory. Perhaps, at that point, this question will not really interest you, since you’d be dealing with universes that are not physically possible. I want to say two things about this: (1) the point is that these are two separate questions, and therefore the distribution of possible values is irrelevant to fine-tuning, not whether the question will remain interesting; (2) considering values for the constants other than what they are is, to the best of our current knowledge, just as physically impossible.

I would like to end on a somewhat general note – you want to get formal, so let me be a little more formal to demonstrate the futility of assigning probabilities to these issues. What is the probability of life under naturalistic universes? Well, we can consider each universe to consist of a Lagrangian detailing its laws of motion, and boundary conditions describing the actuality. Suppose we could decompose each Lagrangian in a standardized way into a series of constants. A Lagrangian (a) will hence have a series of N_a constants in it. Let us now consider all a-class Lagrangians L_a. Clearly a is a dense index, as it covers a space larger than function space. Each universe has boundary conditions as well as laws of nature, so we actually need to consider an infinity of boundary conditions b that go with these laws; our universes thus have indexes “ab”, with b a continuous dense parameter that somehow covers all possible boundary conditions on a-type universes. Within each such ab combination we want to consider different constants, so we need a third dense parameter c that will cover all possible constants within an ab-type universe. Furthermore, within each universe abc there are many parts, and our “world” in the many-worlds interpretation, or the chance freezing of constants since the big bang in a Copenhagen interpretation, will also need to be stated, as will our “part” in multiverse theories; so it appears yet another dense infinite index is needed, d. So the range of all possible naturalistic universes is (at least) a four-fold infinity – a naturalistic universe is characterized by the number abcd: the laws of nature it has (a), the boundary conditions (b), the constants (c), and the part (d).

Now – what is the probability for life or fine-tuning or whatever to arise naturally within this set of universes? We are dealing with multiple infinite dense indexes, and those can’t be counted; attempting to apply the principle of indifference will result in contradictions. So we have no objective way of coming up with prior probabilities. We also have no feasible way to calculate what some given universe in this incomprehensibly-big set of possible universes implies; we can barely calculate the mass of a proton in our own universe from the fundamental laws, let alone derive the emergence of life in some arbitrary fiendishly-complex physics!
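The failure of the principle of indifference that Yair invokes can be seen even in one dimension: a "uniform" prior over a continuous constant depends on how you parameterize it. This Monte Carlo sketch is purely illustrative.

```python
# Two equally "indifferent" priors for the same constant X on [0, 1]:
# uniform over X, and uniform over Y = X^2 (so X = sqrt(Y)).
# They disagree about something as simple as P(X < 0.5).
import random

random.seed(0)
N = 200_000

# Prior 1: X drawn uniformly on [0, 1].
p1 = sum(1 for _ in range(N) if random.random() < 0.5) / N

# Prior 2: Y uniform on [0, 1], X = sqrt(Y).
p2 = sum(1 for _ in range(N) if random.random() ** 0.5 < 0.5) / N

# p1 comes out near 0.5, p2 near 0.25: two indifference priors,
# two different answers, with no principled way to choose between them.
```

This is the one-dimensional version of the problem; with the four-fold infinity of dense indexes described above, the reparameterization ambiguity only gets worse.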

The set of possibilities is just too large, too complex, and too dense for us to make sense of it. There is no way to meaningfully assert “fine-tuning” as a property of the distribution of naturalistic universes, since we know essentially nothing about this distribution.


einniv January 20, 2011 at 6:47 am

I get that the real world is much more complex than the hypothetical, but if we can’t even define “fine tuned” in an objective manner for the simple case, then granting someone the claim in the more complex one seems rather silly. Remember, it was you who wanted to treat fine tuning as evidence, not me. I was trying to demonstrate why that is wrong, and I think we’ve done that at this point.

In the simple example, fine tuned is arbitrary, e.g. 2 orders of magnitude. Your discussion of “small” within physicist discourse reminds me of the “I know it when I see it” definition of obscenity.

It is also not binary (and thus not usable as evidence). The condition can be modeled as a probability, however. Saying 2 orders of magnitude is the same thing as saying P(x being life capable) <= 0.01.

RE: not binary, consider this scenario within our hypothetical:

My team discovers a programming error in your team's simulation software. They used the wrong type of variable and got a rounding error (your programmers agree they goofed). It turns out life is supported within +/- 0.011 of the actual value. Do we really want to say it is now NOT fine tuned? “Fine tuned” as you are using it is relative. We would really want to say it is less fine tuned than your original estimate.

If instead we discovered a bigger programming error and found the range was more like +/- 0.0001, then it is silly to say this is the same finding. By your arbitrary definition we would still say it is fine tuned, but it is much, much more fine tuned than you had supposed. “More fine tuned than” and “less fine tuned than” do not indicate we are dealing with a binary variable.
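The "matter of degree" point in the programming-error scenario can be made concrete by treating fine-tuning as the life-compatible fraction of the constant's range [0, 1]. The two half-widths below are the hypothetical findings from the scenario; the log scale is just one illustrative way to express the degree.

```python
# Degree of fine-tuning as orders of magnitude: smaller life-compatible
# fraction => more "fine tuned". Both half-widths are the hypothetical
# values from the scenario above (interval assumed fully inside [0, 1]).
import math

def degree_of_fine_tuning(half_width):
    fraction = 2 * half_width          # fraction of [0, 1] supporting life
    return -math.log10(fraction)       # orders of magnitude of tuning

print(degree_of_fine_tuning(0.011))    # ~1.66 orders of magnitude
print(degree_of_fine_tuning(0.0001))   # ~3.70: much more fine tuned
```

On this view the two findings differ by two orders of magnitude of tuning, which is hard to square with treating "fine tuned" as a single binary fact.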

Hopefully it is clear now why we can't use "fine tuned" as you've defined it as evidence in a P(A|B) analysis.


Yair January 20, 2011 at 7:14 am

einniv,

I get that the real world is much more complex than the hypothetical but, if we can’t even define fine tuned in a objective manner for the simple case then granting someone the claim in the more complex one seems rather silly. Remember it was you that wanted to treat fine tuning as evidence not me. I was trying to demonstrate why that is wrong and I think we’ve done that at this point.

But we can demonstrate it in the simple example just as we do in the real world – by treating it as a property of the universe, rather than the distribution of “possible” universes. What we can’t demonstrate is what you insist is fine-tuning, not what I’m saying it is.

In the simple example, fine tuned is arbitrary, e.g. 2 orders of magnitude. Your discussion of “small” within physicist discourse reminds me of the “I know it when I see it” definition of obscenity.

Come on. It’s a qualitative technical expression that any physicist would agree to. Why are you picking on such minor details? Why does it matter what the precise meaning of “small” is when everyone agrees that this is a good word to use to describe the relevant considered changes of the values of the constants? This is not where the dog is buried. It’s just a minor language problem, get over it.

It is also not binary (and thus usable as evidence). The condition can be modeled as a probability however. Saying 2 orders of magnitude is the same thing as saying P(x being life capable) <= 0.01

RE: not binary, consider this scenario within our hypothetical:

My team discovers a programming error in your team’s simulation software. They used the wrong type of variable and got a rounding error (your programmers agree they goofed). It turns out life is supported in +/- .011 of possible values. Do we really want to say it is now NOT fine tuned? Fine tuned as you are using it is relative. We would really want to say it is less fine tuned than your original estimate.

If instead we discovered a bigger programming error and found the range was more like +/- 0.0001 then it is silly to say this is the same finding. By your arbitrary definition we would still say it is fine tuned but, it is much much more fine tuned than you had supposed. “More fine tuned than” and “less fine tuned than” does not indicate we are dealing with a binary variable.

Is my cat not fat because he isn’t enormously fat? Words can have vague meanings, you know, and still be descriptive.

In short, what we have here is me talking about fat cats and you complaining that I can’t do that because “cat” is an arbitrary word and “fat” is not a binary property. Yeah, right. Really, these are linguistic matters of no consequence.

Hopefully it is clear now why we can’t use “fine tuned” as you’ve defined it as evidence in a P(A|B) analysis.

No, I’m afraid it isn’t.


einniv January 20, 2011 at 7:22 am

I think it should be clear to anyone at this point that when you said fine tuning could be treated as evidence, i.e. as B in P(A|B), you were wrong, but you refuse to see it. So I give up. Thanks again. Take care.


einniv January 20, 2011 at 7:29 am

And, by the way, it is NOT a linguistic issue. We translated it into the language of math (e.g. +/- 0.01) and the issue still persists. Arrrrgggg. Anyway, I guess we’ll just have to agree that one of us is wrong, and I think it should be clear to anyone following the discussion which one of us that is.


Zeb January 20, 2011 at 8:55 am

I just want to thank Yair and einniv for this thorough and detailed discussion. FWIW I am convinced by Yair that fine-tuning is an empirical observation that does not depend on any kind of probability, and that it can be treated as evidence in Bayesian analyses. But Yair, am I correct in understanding that you are saying that because we lack justification for assigning probabilities for FT given naturalism vs. FT given theism, we can’t consider FT to be evidence for anything in particular at the moment? I take you as saying that FT is a fact that could potentially be used as evidence if we could come up with probabilities related to it, and einniv as saying that FT is not a fact and therefore not even potential evidence. If that is correct, I think Yair has proven his point pretty thoroughly.


Yair January 20, 2011 at 10:49 am

Einniv,

Anyway, I guess we’ll just have to agree that one of us is wrong

I’m afraid so. Thanks for the discussion anyways, and I hope to have more productive ones in the future :)

Zeb,

I just want to thank Yair and einniv for this thorough and detailed discussion.

Thanks for tuning in.

But Yair, I am correct in understanding that are you saying that because we lack justification for assigning probabilities for FT given naturalist vs FT given theism, we can’t consider FT to be evidence for anything in particular at the moment? I take you as saying that FT is a fact that could potentially be used as evidence if we could come up with probabilities related to it

That is indeed precisely my position. Well, within the Bayesian framework, at least.

