CPBD 051: Neil Manson – The Design Argument

by Luke Muehlhauser on June 30, 2010 in Design Argument, Podcast

(Listen to other episodes of Conversations from the Pale Blue Dot here.)

Today I interview philosopher Neil Manson. Among other things, we discuss:

  • The history of the design argument
  • Current versions of the design argument, and their strengths and weaknesses

Also, Dr. Manson unwittingly gave me a legitimate reason to mention Raptor Jesus!

Download CPBD episode 051 with Neil Manson. Total time is 1:04:32.

Neil Manson links:

Links for things we discussed:

Note: in addition to the regular blog feed, there is also a podcast-only feed. You can also subscribe on iTunes.

{ 39 comments… read them below or add one }

Martin June 30, 2010 at 7:05 am

After perusing an admittedly small sample of publications on his website, I can’t for the life of me tell which side of the God Question he falls on.

I love it!

Justin Martyr June 30, 2010 at 9:01 am

In God and Design, Manson rejected fine-tuning based on the McGrews’ argument. But I don’t buy the McGrews’ objection, for a few reasons. (1) this is a running issue in Bayesian statistics and the standard solution is to simply ignore the problem. Our tools aren’t as good as we’d like, but that doesn’t mean that they don’t have value. (2) we’re talking about epistemic probability, not metaphysical probability. Thus we are implicitly applying the principle of indifference. (3) you can choose different axioms of probability (I don’t know much about this, but I’ve heard of mathematicians doing it) for uncountably infinite sets. (4) Alexander Pruss’s response: simply choose a boundary point high enough to sustain fine-tuning, e.g. a boundary of 10^1000. That gives us two disjoint subsets, and the constant must be in one of these two subsets. If the actual constant is in the upper subset, then the probability of fine-tuning by chance is 0. If it is in the lower subset, then the probability of fine-tuning by chance is low enough to support the fine-tuning argument.

JS Allen June 30, 2010 at 9:04 am

@Martin – Isn’t it kind of pathetic that we always want to run people through our litmus tests?

@luke – How long do these sit in the queue before you publish them? Castingwords.com will give you a transcript within 6 days for about $100, which would enable a lot more people to enjoy the interviews. Audio is terribly inefficient; even if I play these back at 1.5x speed, it’s still way slower than reading. And it’s rare that I’m in an environment where I can attend to audio anyway, so I’ve only been able to listen to a couple of these.

Haukur June 30, 2010 at 9:28 am

Audio is terribly inefficient; even if I play these back at 1.5x speed, it’s still way slower than reading.

Yes, but it’s efficient in some other ways. I can listen to an interview while playing with my daughter – I can’t do that while reading.

Martin June 30, 2010 at 9:41 am

Isn’t it kind of pathetic that we always want to run people through our litmus tests?

True. But I always find it refreshing when I encounter someone who seems to be examining the arguments from a neutral third-party position.

lukeprog June 30, 2010 at 10:35 am

Yeah, I like to listen to podcasts while driving.

Alex June 30, 2010 at 11:18 am

@JSAllen: It’s terribly pathetic, but the first thing I always find myself looking at (after the name of the interviewee) is whether “philosopher” or “Christian philosopher” is in the first sentence.

@luke: With so many other people talking about fine-tuning, you now *really* have to try to get Nick Bostrom on the show.

Matthew D. Johnston June 30, 2010 at 1:06 pm

you can choose different axioms of probability (I don’t know much about this, but I’ve heard of mathematicians doing it) for uncountable infinite

You mean statisticians? (Yeah, okay, so the resident mathematician takes offense.) :p

Dealing with uncountable infinities does not entail a change to the axioms. Statisticians deal with continuous random variables (even continuous unbounded random variables) all of the time. The only change is that they deal with intervals or regions of the probability space (via integration) rather than with individual points.

To use the example presented in the article, the probability of hitting any particular point on a dartboard is vanishingly small – for all intents and purposes, it is zero. However, the probability of hitting the bullseye (a well-defined area) is clearly non-zero (if you have any skill!). If every spot on the board were as likely as any other to be hit (principle of indifference), it would just be the bullseye area divided by the area of the whole board.

The problem is that you cannot satisfy the axioms of probability if you assume the principle of indifference with an unbounded domain. Imagine I told you that I had picked an integer between 1 and infinity with no preference whatsoever between them. How do you make sense of that? The odds of any particular choice are zero, but it gets worse than that. What are the odds of me picking a number lower than a million, or a billion, or a trillion? They’re all zero, so I must have picked a number larger than that. But then you would have to conclude I picked no number at all! (The same argument can be applied to continuous models as well by integrating over regions and weighing them relative to the unbounded region.)
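To put numbers on both of those cases (the bounded dartboard and the unbounded domain), here is a minimal Python sketch; the radii and the 10^6 cutoff are arbitrary choices, purely for illustration:

```python
from fractions import Fraction

# Bounded case: indifference over a dartboard is unproblematic.
# P(bullseye) = bullseye area / board area, both finite.
bullseye_radius, board_radius = 1.0, 10.0
print("P(bullseye):", (bullseye_radius / board_radius) ** 2)  # 0.01

# Unbounded case: a uniform pick from {1, ..., N} as N grows.
# For any fixed cutoff (10**6 here), P(n <= cutoff) -> 0, so a truly
# uniform pick over ALL positive integers would have to give every
# bounded region probability 0, and those zeros cannot sum to 1.
cutoff = 10**6
for N in (10**6, 10**9, 10**12, 10**15):
    print(f"N = {N:.0e}: P(n <= 10^6) = {float(Fraction(cutoff, N)):.0e}")
```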

They point out, correctly I think, that the problem is the principle of indifference itself – it is, after all, based only on our own ignorance. In statistics, you would look for a probability distribution to smooth out the edges, but as the article points out, that can be a problem. Any statistical model would have to be based on some empirical data, and the only data we have is that our universe exists and it is life-supporting. You can therefore insert whatever bias you want into the model because there’s nothing to defeat (or support) it.

exapologist June 30, 2010 at 1:59 pm

Another great interview, Luke!

Btw, did you two discuss the multiverse hypothesis? If so, I missed it.

urbster1 June 30, 2010 at 2:11 pm

Hey, I noticed you upped the sample rate and bitrate of the MP3, and it sounds much better! Thanks for this, and for the great interview, as always!

Hermes June 30, 2010 at 2:51 pm

lukeprog: Yeah, I like to listen to podcasts while driving.

Driving, playing with kids, making a baby sandwich. The simple things in life.

Martin June 30, 2010 at 2:52 pm

making a baby sandwich

Whaaaaaa….??!!

Hermes June 30, 2010 at 2:54 pm

What? What’s wrong with having a calorie-reduced diet? Besides, I can whip up a dozen sandwiches and not get stuffed! :-}

lukeprog June 30, 2010 at 4:34 pm

exapologist,

We did not discuss it, really, no.

I somehow had not realized that all versions of the design argument depend on intrinsic value (what Neil calls ‘objective’ value). Do you agree? If so, this is really damning, because there is no evidence whatsoever for intrinsic value.

jebediah June 30, 2010 at 4:39 pm

I haven’t listened to the interview yet. Is this guy an ID proponent?

exapologist June 30, 2010 at 5:19 pm

Luke,

I’m not sure if I agree with that. Suppose some subjectivism — or even nihilism — about moral facts were true. Still, couldn’t the existence of our universe be explained in terms of “because God really digs universes like this one”?

Also, couldn’t one infer cosmic design even if one hadn’t the slightest idea what interests or motives or purposes God has? So, for example, suppose there’s some feature of some object in the universe that we know to come from intelligent agency. Suppose further that we knew it was impossible to get that feature without intelligent agency — nature just can’t do it (at least given the actual laws of nature). Then couldn’t we legitimately infer design? (Del Ratzsch calls this kind of design indicator “counterflow”).

In any case, I could be wrong, but those are some initial concerns about Manson’s point (one asserted by Draper as well, I believe). What do you think?

Hermes June 30, 2010 at 5:30 pm

Luke: I somehow had not realized that all versions of the design argument depend on intrinsic value (what Neil calls ‘objective’ value). Do you agree? If so, this is really damning, because there is no evidence whatsoever for intrinsic value.

As valid as Platonic forms, eh?

Lee A.P. June 30, 2010 at 5:39 pm

I thought this was a baby sandwich: http://www.killsometime.com/pictures/files/1463.jpg

Hermes June 30, 2010 at 5:49 pm

Lee A.P., well, yes, but not every day! Pshaw!

lukeprog June 30, 2010 at 6:48 pm

exapologist,

Intuitively, I agree with you and disagree with Manson on those two points. (However, Manson didn’t say that intrinsic moral value was required for design arguments, but merely intrinsic value.) But I haven’t thought it through or read the relevant literature. And as I’m sure you know, I don’t trust my intuitions, so I’m completely agnostic about these two issues.

lukeprog June 30, 2010 at 6:49 pm

Atheists! After they legalize marijuana they will try to legalize baby sandwiches.

Hermes June 30, 2010 at 7:15 pm

So, you’ve heard about the new school lunch program? It has bipartisan support.

Main dish: Baked baby with half-baked bacon? It’s a mandatory meal, you know. Devil Weed is conducive to making babies and thus to providing a resource for the school lunch programs. After all, condoms are evil, yet abstinence really works [Palin: You betcha! wink! wink!] Plus, with the low cost due to a glut of babies, a larger tax cut can be provided while the lunches remain revenue neutral. Win-Win!

Mark June 30, 2010 at 8:54 pm

(1) this is a running issue in Bayesian statistics and the standard solution is to simply ignore the problem. Our tools aren’t as good as we’d like, but that doesn’t mean that they don’t have value.

That doesn’t inspire much confidence.

(2) we’re talking about epistemic probability, not metaphysical probability. Thus we are implicitly applying the principle of indifference

Not all versions of the principle of indifference are plausible. Some notoriously lead to contradiction. There are formulations of restricted versions of the principle, e.g. by John Norton, according to which fine-tuning would provide no evidence for anything. I think Jill North has a recent paper arguing for restricting the principle to cases of physical symmetry, or something like that (I haven’t read it). It all depends on some much larger debates about the adequacy of “orthodox” Bayesianism: how and when do we model belief states as probabilities?

(3) you can choose different axioms of probability (I don’t know much about this, but I’ve heard of mathematicians doing it) for uncountably infinite sets.

Obviously you can have probability measures on uncountably infinite sets. The uniform distribution on the interval [0, 1] is a trivial example. The issue is normalizability, not uncountability.

(4) Alexander Pruss’s response: simply choose a boundary point high enough to sustain fine-tuning, e.g. a boundary of 10^1000. That gives us two disjoint subsets, and the constant must be in one of these two subsets. If the actual constant is in the upper subset, then the probability of fine-tuning by chance is 0. If it is in the lower subset, then the probability of fine-tuning by chance is low enough to support the fine-tuning argument.

Let F = fine tuning, A = the constants landing in the lower subset and B = the constants landing in the higher subset. Then P(F) = P(F|A)P(A) + P(F|B)P(B). Obviously P(F|B) = 0 and P(F|A) is low if you choose a large enough lower subset and attach a uniform distribution to it. This would make P(F) low given those suppositions. But you run into the problem of choosing the bounds of the lower subset in a non-arbitrary way. If you make it low enough that P(F|A) is high, then you have to look at the size of P(A), which takes you back to the whole renormalization problem.
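A minimal Python sketch of how much the arbitrary boundary drives that number, assuming (purely for illustration) a life-permitting window of width 1 and a uniform distribution on the lower subset:

```python
# Purely for illustration: a life-permitting window of width 1 sitting
# inside the lower subset A = [0, boundary], with a uniform distribution
# on A.  Then P(F|A) = window / boundary, so how "improbable" fine-tuning
# comes out is fixed entirely by where the arbitrary boundary is placed
# (Pruss's 10^1000 just pushes the same pattern further out).
window_width = 1.0
for boundary in (1e3, 1e6, 1e9, 1e12):
    print(f"boundary = {boundary:.0e}: P(F|A) = {window_width / boundary:.0e}")
```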

Mark June 30, 2010 at 8:59 pm

After perusing an admittedly small sample of publications on his website, I can’t for the life of me tell which side of the God Question he falls on.

I love it!

I’m pretty sure he’s a Christian, but I may be recalling incorrectly?

Alex July 1, 2010 at 3:19 am

@luke: Even if you suppose that intrinsic value is required to make design arguments work (I haven’t been persuaded by this, but am currently reading Manson’s paper arguing for this), I don’t think it changes the epistemic import of the argument (if the argument is Bayesian).

Take V to denote the proposition that intrinsic value exists, and G the proposition that God exists. If one thinks (as seems inevitable on conceptual grounds) that G entails V (and therefore P(G) = P(G&V)), then your opinion about intrinsic value doesn’t make any difference in a Bayesian design argument (although it could decrease your *prior* probability of G if you think V is implausible and G implies V).
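A toy numerical illustration of that point, with all of the probabilities below made up purely for illustration: since G entails V, doubts about intrinsic value only squeeze the prior P(G); the Bayesian update on the design evidence itself is untouched.

```python
# Toy numbers, made up purely for illustration.
p_V = 0.1                # suppose you find intrinsic value implausible
p_G_given_V = 0.5        # plausibility of God, granting intrinsic value
p_G = p_V * p_G_given_V  # = P(G & V), since G entails V

# A design-style update on some evidence E then proceeds exactly as
# usual; skepticism about V has already been priced into the prior p_G.
p_E_given_G, p_E_given_notG = 0.9, 0.1
p_E = p_E_given_G * p_G + p_E_given_notG * (1 - p_G)
print(f"prior P(G) = {p_G:.3f}, posterior P(G|E) = {p_E_given_G * p_G / p_E:.3f}")
```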

Justin Martyr July 1, 2010 at 8:51 am

Hi Mark,

Thank you for the thoughtful response! I appreciate it :)

I’ll check out John Norton’s paper. I had to Google around, but I found it on his website.

1. The point about applying Bayesian statistics to infinities is that you do not normalize to one. Then you can apply a measure using infinities. Alexander Pruss has a sketch of how this works here.

2. Regarding Pruss’s defense (not using infinities): the boundary is arbitrary, but there is no need to choose a non-arbitrary boundary. Any boundary that is above the value of the physical constants will sustain fine-tuning, and any value that is too low won’t work because we’ll be in the upper region.

Justin Martyr July 1, 2010 at 11:02 am

I have only briefly perused John Norton’s paper, but the first impression is not good. He rests his case against fine-tuning on the well-known paradoxes that arise when you apply the principle of indifference to a continuous property.

Well, suppose we were drawing sticks from an urn. Suppose I guessed that the stick was between 9 inches and 9.01 inches long. Sure enough, I draw a stick and its length falls in that range. According to Norton, we would be committing a logical fallacy to be surprised.

J Wahler July 1, 2010 at 4:16 pm

Luke,

About halfway through the interview Professor Manson is talking about the theological implications of the doctrine of the incarnation in a universe with multiple worlds…you then comment with something to the effect that “yeah, that’s a 20th century theological question to be asking.” I think you’re mistaken here: Thomas Aquinas mulls over many-worlds incarnations in his Summa Theologica in part 3, and Thomas Paine even dabbles with it in his Age of Reason in parts XV and XVI. Interestingly, they both argue that multiple or even infinite worlds exist, but they go in different directions concerning the incarnation. Aquinas argues that there were obviously multiple incarnations, and Paine uses multiple worlds to argue that the incarnation is therefore absurd. FYI. Keep up the great work, thanks.

lukeprog July 1, 2010 at 5:52 pm

J Wahler,

I had no idea, thanks!

Matthew D. Johnston July 2, 2010 at 11:00 am

The point about applying Bayesian statistics to infinities is that you do not normalize to one. Then you can apply a measure using infinities. Alexander Pruss has a sketch of how this works here.

But the problem isn’t in principle one of measure (as it is when dealing with Dirac delta functions, irrational numbers, etc.). Traditional statistical methods are quite capable of handling infinity (both the countable and uncountable varieties).

It is the conjunction of the principle of indifference with an unbounded probability domain that produces the problem, so it seems to me that we should argue against one of these assumptions before giving any discussion to abolishing the standard axioms of probability.

Well, suppose we were drawing sticks from an urn. Suppose I guessed that the stick was between 9 inches and 9.01 inches long. Sure enough, I draw a stick and its length falls in that range. According to Norton, we would be committing a logical fallacy to be surprised.

I do not think anybody would model this situation using the principle of indifference over an unbounded domain.

Mark July 2, 2010 at 2:14 pm

1. The point about applying Bayesian statistics to infinities is that you do not normalize to one. Then you can apply a measure using infinities. Alexander Pruss has a sketch of how this works here.

That’s one idea; but AFAIK the standard move is to reject the requirement of countable additivity on the probability measure rather than rejecting normalization to one.

2. Regarding Pruss’s defense (not using infinities): the boundary is arbitrary, but there is no need to choose a non-arbitrary boundary. Any boundary that is above the value of the physical constants will sustain fine-tuning, and any value that is too low won’t work because we’ll be in the upper region.

Suppose the life permitting range is the interval [2, 3]. Let F = the event of landing in [2, 3]. Suppose we choose the boundary to be at 1, so that the lower set is A = [0, 1] and the upper set is B = (1, infinity). Then P(F) = P(F|A)P(A) + P(F|B)P(B). Obviously P(F|A) = 0. Probably you think P(B) = 1. Now, what’s your justification for the claim that P(F|B) = 0? Is it that F is finite, B is infinite and the principle of indifference therefore entails that ~F is almost sure? But in that case, you could’ve just declared P(F) = 0 from the outset because it’s finite. So either 1. the argument arbitrarily depends on choosing the boundary to be greater than the life-permitting range, or 2. the argument carries no extra force.

I have only briefly perused John Norton’s paper, but the first impression is not good. He rests his case against fine-tuning on the well-known paradoxes that arise when you apply the principle of indifference to a continuous property.

Well, suppose we were drawing sticks from an urn. Suppose I guessed that the stick was between 9 inches and 9.01 inches long. Sure enough, I draw a stick and its length falls in that range. According to Norton, we would be committing a logical fallacy to be surprised.

I’m not exactly sure what you think that example rebuts; he’s not claiming that the principle of indifference in your sense is always inapplicable. His case comes out of a larger discussion of the inadequacy (or, more accurately, the incompleteness) of Bayesian notions of evidence. Basically, he thinks there is a relation between beliefs he calls “neutral support,” and in cases where we have neutral support for a proposition, we are in a doxastic state which he names “Ignorance” and which cannot be modeled by probabilities. Norton thinks we are in a state of Ignorance vis-à-vis the value of the physical constants. He wouldn’t say we’re in that state when it comes to your urn example, since we have non-neutral support for the probability distribution of human beings guessing numbers at random.

Justin Martyr July 2, 2010 at 4:59 pm

Hi Mark,

That’s one idea; but AFAIK the standard move is to reject the requirement of countable additivity on the probability measure rather than rejecting normalization to one.

That’s true, but either approach should work. The point being, the so-called renormalization problem isn’t a problem with different tools.

Suppose the life permitting range is the interval [2, 3]. Let F = the event of landing in [2, 3]. Suppose we choose the boundary to be at 1, so that the lower set is A = [0, 1] and the upper set is B = (1, infinity). Then P(F) = P(F|A)P(A) + P(F|B)P(B). Obviously P(F|A) = 0. Probably you think P(B) = 1. Now, what’s your justification for the claim that P(F|B) = 0?

There is none in that case. The renormalization problem strikes if you choose a boundary lower than the life-permitting range. A necessary condition for avoiding the renormalization problem is to pick a boundary above the life-permitting range.

I’m not exactly sure what you think that example rebuts; he’s not claiming that the principle of indifference in your sense is always inapplicable. His case comes out of a larger discussion of the inadequacy (or, more accurately, the incompleteness) of Bayesian notions of evidence. Basically, he thinks there is a relation between beliefs he calls “neutral support,”

If you read the paper, the way he gets to “neutral support” is by producing paradoxes using the principle of indifference. You can square the value of the constants and get an entirely new set of equal-probability sub-intervals. Cube them and get another. Raise them to the 2.01 power and get a third set, etc. Eventually you are left with the fact that any interval is equal in probability to any other interval. Thus the life-permitting range has the same probability as any other range you could suggest. Thus, any application of the principle of indifference based on epistemic probability would have “neutral support”.

Mark July 2, 2010 at 5:16 pm

That’s true, but either approach should work. The point being, the so-called renormalization problem isn’t a problem with different tools.

Actually, neither approach is free of problems; Pruss listed some of the problems his route runs up against in decision theory. Though, truth be told, I don’t think the renormalization objection is all that strong for those bold enough to use alternative probability axioms.

If you read the paper, the way he gets to “neutral support” is by producing paradoxes using the principle of indifference. You can square the value of the constants and get an entirely new set of equal-probability sub-intervals. Cube them and get another. Raise them to the 2.01 power and get a third set, etc. Eventually you are left with the fact that any interval is equal in probability to any other interval. Thus the life-permitting range has the same probability as any other range you could suggest. Thus, any application of the principle of indifference based on epistemic probability would have “neutral support”.

Yes, but he doesn’t say that. Crucially, the first sentences of the paragraph you reference are: “Take the multiverse example of Section 2. The evidence is completely neutral over different values of h.” Neutral support is the criterion for the paradox to have its bite; it is not that the paradox always applies and thus that the principle of indifference always provides neutral support.

Jake de Backer July 3, 2010 at 11:19 pm

Was that Manson photo copied from one of his hip hop album covers?

J.

Justin Martyr July 5, 2010 at 6:31 pm

Hi Mark,

Yes, but neutral support happens in the case of the multiverse because, after the paradoxes, any interval has the same probability as any other interval. You don’t need an infinite range to do that. If you draw sticks from an urn, then the interval (1,2) has the same probability as (2,3) based on length. But based on length squared, the interval (1,2) has the same probability as (2, 7^(1/2)). Keep doing that and you reach the same outcome – one interval has the same probability as any other interval. Neutral support.
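A quick Python check of that interval arithmetic: by plain length, (1,2) and (2,3) match; by length squared, (1,2) instead matches (2, 7^(1/2)), which is exactly the reshuffling that generates the paradox.

```python
import math

# Probability under indifference is proportional to interval width,
# so compare widths under the two parameterizations.
def width_by_length(a, b):
    return b - a

def width_by_length_squared(a, b):
    return b**2 - a**2

print(width_by_length(1, 2), width_by_length(2, 3))        # 1 1: equal by length
print(width_by_length_squared(1, 2),
      width_by_length_squared(2, math.sqrt(7)))            # 3 and ~3: equal by length^2
print(width_by_length(2, math.sqrt(7)))                    # ~0.646: not equal by length
```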

I think that shows that the problem is with our tools, not with the underlying problem itself.

Moreover, Norton himself suggests he agrees with working with variables in their simplest form. That is exactly what Collins does in his fine-tuning argument in The Blackwell Companion to Natural Theology.

Mark July 6, 2010 at 7:17 am

Yes, but neutral support happens in the case of the multiverse because, after the paradoxes, any interval has the same probability as any other interval. You don’t need an infinite range to do that.

I agree that you don’t need an infinite range for the principle of indifference to yield paradox, and I never meant to suggest otherwise. But Norton agrees that “any interval has the same probability as any other interval” only because he thinks our background scientific beliefs neutrally support fine-tuning. Moreover, he thinks that the infinite range is at the very least suggestive of the need to model the case using ignorance rather than some alternate probability scheme. I.e., not all cases of neutral support involve indifference over infinite ranges, but indifference over infinite ranges is generally redolent of neutral support.

If you draw sticks from an urn, then the interval (1,2) has the same probability as (2,3) based on length. But based on length squared, the interval (1,2) has the same probability as (2, 7^(1/2)). Keep doing that and you reach the same outcome – one interval has the same probability as any other interval. Neutral support.

Probably he’d agree that our evidence is partially neutral on the length of sticks – partial, in the sense that it’s totally neutral on where precisely in the interval (1, 2) their footage lies, even while non-neutral on the fact that their footage lies somewhere in (1, 2). I don’t see any problem with that diagnosis. But this doesn’t have the consequence that correctly guessing one of the stick’s length is unsurprising. For your probability of guessing a predetermined number in (1, 2) to within >3 decimals, given that your guess is independent of the selection of the number, is quite low. (Since human beings are reasonable, if imperfect, randomizers.) On the other hand, Norton probably would say that guessing a random number x, and then picking a stick and finding some physical property of the stick with a scalar value of x, is not surprising. This should be fairly obvious: no matter what x is, we can define a new unit of length on which the stick is precisely x units long!

Moreover, Norton himself suggests he agrees with working with variables in their simplest form.

Hmm. Where exactly does he talk about this?

Justin Martyr July 8, 2010 at 10:01 am

Hi Mark,

Your argument ultimately means that we cannot use epistemic probability on continuous domains in the absence of background information that gives us a “randomizer” or probability distribution. But I would take the principle of indifference not as a claim to objective probability, but as a useful heuristic that models our ignorance. That seems perfectly reasonable.

In any case, in The Blackwell Companion to Natural Theology Robin Collins makes an argument for applying the principle of indifference using natural variables. That means we should work with variables in their simplest forms. In some cases there is no easy choice. If we are drawing cubes out of an urn, should we use length or volume? In these cases Collins suggests using the range of probabilities given by the allowable choices. But in physics, this is not an issue, since the constants are already expressed in their simplest forms.

Collins then points out that physicists rely on these natural variables when they talk about predictive power. For example, if we were drawing cubes from an urn, a prediction is more accurate if we go on length than on volume. E.g. suppose we know the length is 10 inches +- 1 inch. Then our precision is to 1/10th. But by volume we’d have (11^3 – 10^3) / 10^3 ~ 1/3. So, for example, one of the most successful predictions in physics is quantum electrodynamics. It predicts that the gyromagnetic ratio (g-factor) of the electron differs from 2 by a small amount. In fact, QED predicted this to 20 decimal places and physicists used this as powerful confirmation of their intuitions. But we could transform the equations so that the prediction became much less impressive. We could even make it so that the prediction is not statistically significant. Science advanced because of the probabilistic intuitions of physicists working with natural variables – variables in their simplest form.
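A quick check of that precision comparison, for a cube whose side is known to be 10 +/- 1 inches:

```python
# A cube whose side is known to be 10 +/- 1 inches.
side, err = 10, 1

precision_by_length = err / side                             # 1/10
precision_by_volume = ((side + err)**3 - side**3) / side**3  # 331/1000, roughly 1/3

print(f"relative precision by length: {precision_by_length:.3f}")  # 0.100
print(f"relative precision by volume: {precision_by_volume:.3f}")  # 0.331
```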

Pedro Amaral Couto August 8, 2010 at 10:53 am

Luke, maybe you’d like to watch God is not God from TrenchantAtheist‘s YouTube channel.
