The Goal of Philosophy Should Be to Kill Itself

by Luke Muehlhauser on January 17, 2011 in Science

After giving a talk on computers at Princeton in 1948, John von Neumann was met with an audience member who insisted that a “mere machine” could never really think. Von Neumann’s immortal reply was:

You insist that there is something a machine cannot do. If you will tell me precisely what it is that a machine cannot do, then I can always make a machine which will do just that!1

The problem with most philosophy is that it is imprecise, and this leads to centuries of confusion. Do numbers exist? Depends what you mean by “exist.” Is the God hypothesis simple? Depends what you mean by “simple.” Can we choose our own actions? Depends what you mean by “can” and “choose.”

Many philosophers try to be precise about such things, but they rarely reach mathematical precision. On the other hand, artificial intelligence (AI) researchers and other computer scientists have to figure out how to teach these concepts to a computer, so they must be 100% precise.

What does it mean, precisely, to say that one hypothesis is simpler than another? The answer (lower Kolmogorov complexity) came not from philosophy, but from computer science. What does it mean, precisely, to proportion one’s beliefs to the evidence? The answer (Bayes’ Rule) came not from philosophy but from mathematics, and especially from implementations of Bayes’ Rule in AI (Bayesian networks). What does it mean, precisely, to say that one thing causes another? Once again, the answer (Pearl’s counterfactual account) came from computer science.
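To make the Bayes' Rule example concrete, here is a minimal sketch of a single Bayesian update (the numbers are my own toy inputs, not drawn from any of the works mentioned):

```python
# Bayes' Rule: P(H|E) = P(E|H) * P(H) / P(E)
# A toy single-step update with made-up numbers.

def bayes_update(prior, likelihood, likelihood_given_not_h):
    """Return the posterior P(H|E) from P(H), P(E|H), and P(E|~H)."""
    # Total probability of the evidence: P(E) = P(E|H)P(H) + P(E|~H)P(~H)
    evidence = likelihood * prior + likelihood_given_not_h * (1 - prior)
    return likelihood * prior / evidence

# Example: a test that is 90% sensitive with a 10% false-positive rate,
# applied to a hypothesis we initially give 1% credence.
posterior = bayes_update(prior=0.01, likelihood=0.9, likelihood_given_not_h=0.1)
print(round(posterior, 4))  # 0.0833
```

This is exactly the computation a Bayesian network node performs when it absorbs evidence, just stripped down to a single hypothesis.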

For millennia, philosophy has been the study of questions for which we don’t really know how to get the answers. Once we know how to get answers about a set of questions, we start calling that set of questions a science. We stopped philosophizing about the heavens when we invented the telescope and started doing astronomy, and we stopped philosophizing about biology when Mendel and Darwin and others discovered how to rigorously test biological theories against experience.

So philosophy has retreated to corners of thought too abstract to be answered so directly. And now, philosophy may again shrink as computer scientists do a better job of precisely defining concepts than philosophers do.

Philosophy may soon become the study of questions that are (1) not close enough to factual data to be answered by the sciences and/or (2) not yet formalized mathematically by computer scientists.

Philosophers can talk all they want about ontology and meaning and truth and free will, but I offer them the following challenge:

Do you know, precisely, what you are talking about? If so, show me precisely what you mean by programming it into a computer. If you know precisely what you are talking about, you will be able to explain it to a computer.

I suspect most philosophers, met with the challenge, will just admit they don’t know precisely what they are talking about. I philosophize about ethics, and I certainly admit that I don’t know precisely what I’m talking about. To mathematically formalize a theory of ethics, I would not only have to be much smarter than I am and be a computer programmer, I would also need a completed cognitive science and the new mathematics required to build Friendly AI.

Much philosophy – naturalistic philosophy, anyway – has this flavor. Consider a question like “What is desire?” The landmark work of the last decade on this was philosopher Tim Schroeder’s Three Faces of Desire. Half the book is neuroscience. The other half tries to clear up the concepts involved and make them more precise, though the book leaves things far short of mathematical precision, something which Schroeder himself seems to lament.

If philosophers want facts about the world or even facts about values, the way forward is to figure out how to hand over such questions to the scientists. If philosophers want to make conceptual progress, the way forward is to figure out how to hand over those concepts to computer scientists so they can make them precise enough for a computer to understand them.2

Thus, the way forward for philosophy of mind is to hand over the field to cognitive scientists and AI researchers. The way forward for ethics is to hand it over first to linguists and psychologists of language to figure out what moral claims might mean, and then to other scientists (perhaps cognitive scientists) to figure out the nature of whatever it is in the natural world that moral claims refer to. (And if moral claims can’t refer to anything in the natural world, then moral talk should be recognized as fictional talk.) The way forward for aesthetics looks much the same as for ethics. The way forward for philosophy of language is to figure out how to hand it over to linguists and computer scientists. The way forward for epistemology is to hand it over, also, to cognitive scientists and AI researchers – and that is already happening.

The goal of philosophy should not be to continue to give vague and mysterious answers to difficult questions. The goal of philosophy should be to figure out how to hand over its factual questions to scientists, and its conceptual questions to computer programmers, so that these questions can be answered.

The goal of philosophy should be to kill itself.

Of course, even if everyone agreed with me about the goal of philosophy, I doubt that philosophy would ever kill itself. But by continually handing its questions off to the sciences, philosophy should approach its own death asymptotically.

  1. Quoted in Jaynes, Probability Theory, page 7.
  2. Of course, I reserve the right to change my mind in response to charitable criticism!



Yair January 17, 2011 at 5:13 am

Well, I agree with the gist of it. I see philosophy as being principally about clarifying thought; once it has been clarified what a field is and how to proceed to gain knowledge and apply it within that field, then science and engineering take over.

However, two concerns come to mind.

First – what is Pearl’s counterfactual account of causation? The Wikipedia link was uninformative.

Second – well, the fact that something is a useful definition of a concept under a certain paradigm, such as programming it, does not imply it is good in other senses. Specifically, concepts in our heads are complex patterns of neural activity. Clarification of concepts involves carefully aligning the meaning of words to these patterns while raising their distinctiveness and separability (broadly speaking). It is therefore not something that can be done a priori, for some blank slate computer, as it must refer to our own already-existing concepts and brains. For example, you can’t really clarify what “red” means without referring to the way our brain interprets and constructs colors.

Similarly, there isn’t necessarily one “correct” meaning of “simple”. It depends on the use to which you want to put it. For the purposes of a Bayesian analysis, and under models that assume certain types of uncertainty and relations between free parameters and prediction accuracy, it is indeed correct that Kolmogorov complexity or message length can serve the role of “simplicity” within an extended Occam’s razor. However, when these conditions of uncertainty don’t prevail, this is no longer the case. Worse, these considerations ignore computational constraints, which are again model-dependent and may result in a “complex” theory being more applicable and hence, in some sense at least, preferable. And I have not even gotten to critiques of the Bayesian approach itself (since we’ve been there plenty enough).
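(A quick illustration of the message-length idea: Kolmogorov complexity itself is uncomputable, but compressed length gives a crude, computable upper bound. A sketch with made-up strings:)

```python
# Compressed length as a rough, computable proxy for Kolmogorov complexity.
# A patterned string should compress far better than a noisy string of the
# same length over the same alphabet.
import random
import zlib

def compressed_length(s: str) -> int:
    return len(zlib.compress(s.encode("utf-8"), level=9))

repetitive = "ab" * 500                                    # highly patterned
random.seed(0)
noisy = "".join(random.choice("ab") for _ in range(1000))  # ~1 bit/char entropy

print(compressed_length(repetitive) < compressed_length(noisy))  # True
```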

In short, I think the computer scientists are jumping the gun a bit, not realizing their own shortfalls. They make valuable contributions, but are not aware that their formulations often run into old philosophical problems or raise new ones. Philosophy should aim to kill itself, not to gouge its own eyes.


Thomas January 17, 2011 at 5:15 am

Thus, the way forward for philosophy of mind is to hand over the field to cognitive scientists and AI researchers.

And thus, like a true eliminativist, we have “solved” the hard problem of phenomenal qualia just by ignoring it. This is because finding neural correlates or even causes of consciousness will never bridge the explanatory gap.

And this is where we need philosophy, I think: interpreting the empirical evidence and avoiding unjustified reductionism and scientism.


Zeb January 17, 2011 at 5:45 am

So if a computer cannot think it, it ain’t worth thinkin’. Sounds like a recipe for dystopia to me.


Garren January 17, 2011 at 6:17 am

The way forward for ethics is to hand it over first to linguists and psychologists of language to figure out what moral claims might mean, and then to other scientists (perhaps cognitive scientists) to figure out the nature of whatever it is in the natural world that moral claims refer to. (And if moral claims can’t refer to anything in the natural world, then moral talk should be recognized as fictional talk.)

How serendipitous! I was just trying to figure out how to reply to someone who is asking me whether my linguistic approach to ethics would — taken all the way — eliminate talk of ethics in favor of talking about the things moral language is about. I was already leaning toward, “Yes, and that’s the only way out of arguing over these incoherent, overloaded labels” when I checked this blog. Feeling affirmed here.


DaVead January 17, 2011 at 6:36 am

Please note that, as for the precise meanings of the above-mentioned philosophical concepts, there is by no means any professional consensus about Kolmogorov complexity, Bayes’ Rule, or Pearl’s counterfactual account, although they are often touted in such ways on lesswrong.com.

I agree with Thomas. And, the naturalizing project of scientism ends up having to eliminate a lot more than qualia. Take Luke’s criterion for meaning:

“If so, show me precisely what you mean by programming it into a computer. If you know precisely what you are talking about, you will be able to explain it to a computer.”

This is very bizarre. What about brute facts or first principles, like concepts, properties, relations, or entities that might be taken as unanalyzable? Like, resemblance, force, or existence? Or what about the meaningfulness of certain experiences? The question of being? Will to power? Participation in a community? The meaning of literature and art? The phenomenology of beauty and love? The list goes on and on.


Aaron January 17, 2011 at 7:19 am

On the other hand, artificial intelligence (AI) researchers and other computer scientists have to figure out how to teach these concepts to a computer, so they must be 100% precise.

Hmm… I’m not sure I agree here. If humans are, in some sense, similar to AI, we almost always transmit concepts without 100% precision. I fail to see why a computer must then require this level of precision unless we are demanding of it something we do not demand of ourselves.


PDH January 17, 2011 at 7:21 am

Well, I basically agree with Luke. Every now and then it’s worth taking a moment just to appreciate how far we’ve come. Whilst I concede Yair’s general point about not jumping the gun, if the goal of philosophy really is to kill itself, we should regard recent statements that ‘philosophy is dead’ as the greatest possible compliment a philosopher can receive, premature though they may be.


PDH January 17, 2011 at 7:26 am

DaVead wrote,

This is very bizarre. What about brute facts or first principles, like concepts, properties, relations, or entities that might be taken as unanalyzable? Like, resemblance, force, or existence? Or what about the meaningfulness of certain experiences? The question of being? Will to power? Participation in a community? The meaning of literature and art? The phenomenology of beauty and love? The list goes on and on.

You don’t think that that list constitutes a pretty comprehensive catalogue of concepts about which people have been consistently confused? If nothing else these are clearly areas in which more precision is welcome.


Daniel January 17, 2011 at 7:41 am

Can this be said of certain sciences as well?

Like cosmology and biology? Aren’t they there to understand our universe? To answer unanswered questions? Shouldn’t the goals of those sciences be to cover all ground, so we know everything there is to know?

Of course, it’s unlikely we’ll ever fully know something as infinite as our universe, but shouldn’t the carrot held in front of us be the ideal of finding out everything?

One of the few professions I can think of whose goal should NOT be to kill itself is art, whose goal is to always find something new.


Garren January 17, 2011 at 7:48 am

A naturalizing project isn’t a bad thing if what it eliminates wasn’t useful or true anyway.

For example, I think G.E. Moore’s notion of goodness as a “simple, undefinable, non-natural property” is both false (it doesn’t correspond to anything actual) and unhelpful (he set philosophers on a hopeless trail).

As for things like “the meaning of art,” perhaps those are interesting topics of conversation that we could still have even if we realize we’re doing something other than discovering an objective fact.


Thomas January 17, 2011 at 8:04 am

Da Vead: “I agree with Thomas.”

Not surprisingly, I agree with Da Vead. :)


Sabio Lantz January 17, 2011 at 8:12 am

Brilliant! But then, I ain’t brilliant, so I fall for this sort of thing easily.
But your own confession spells out a limit to this approach. Meanwhile there is floundering. Nonetheless, the direction you ask us to remember, as the ship just tries to stay afloat during the storm, is important.
Commenters above who remind us that life is nebulous at best are voices that also need to live in the dreamed-of AI.


ThePowerofMeow January 17, 2011 at 8:27 am

The goal of medicine should be to kill itself too. Defeat disease. The goal of psychology should be to kill itself. Heal the mind forever. Etc. Etc. Etc.

If philosophy is defined more broadly as the way a person thinks, then it is laughable to think it could kill itself. Perhaps the goal of consciousness should be to kill itself?


Reginald Selkirk January 17, 2011 at 8:39 am

Zeb: So if a computer cannot think it, it ain’t worth thinkin’. Sounds like a recipe for dystopia to me.

What we have here is a failure to understand. What is this thing that a computer cannot think? Name it please.

Von Neumann’s point is that the anti-AI crowd is hiding in obscurity. I have had this discussion several times over the years. Someone says “computers don’t actually think. It’s just transistors flipping, etc. That’s not really ‘thinking.’” My response: people don’t actually think either. It’s just neurotransmitters crossing synapses and action potentials working their way down axons. But that’s not really “thinking,” is it? For computers, we (meaning people actually competent in computer science, not the general public) understand the mechanism in detail. For humans, we don’t and so we attribute the ambiguity to magic.


Sharkey January 17, 2011 at 8:41 am

As a computer scientist, this mostly tracks with a realization I had not long ago: theoretical computer science could be considered a branch of philosophy. For instance, denotational semantics of programming languages is a method of asking, “how do we really know what a program means?”

In my earlier days, I would have scoffed at the question; run the program on a computer with a certain chip (Pentium, PowerPC, etc.) and that’s what it means. However, that places computer science on an imprecise physical foundation, reliant on the whims of computer manufacturers. Instead, denotational (or operational) semantics is a way of defining meaning in terms of math, so one can “really say” what a program does in mathematical language, the most precise foundation we have (even considering the theorems of Turing, Gödel, et al.). Existence or impossibility results help show the extent and limits of our knowledge.

Asking questions about why we believe certain things to be true is the essence of philosophy, and theoretical computer science embodies that idea completely. Computer science won’t kill philosophy, it will just rename it.
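A minimal sketch of the denotational idea (my own toy example, not drawn from any particular textbook): the meaning of an expression is a mathematical function from environments to values, built compositionally from the meanings of its parts, with no reference to any chip.

```python
# Toy denotational semantics for arithmetic expressions: each expression
# denotes a function from environments (variable -> number) to numbers,
# defined compositionally from the denotations of its subexpressions.

def num(n):            # [[n]] = the constant function env -> n
    return lambda env: n

def var(x):            # [[x]] = the function env -> env(x)
    return lambda env: env[x]

def add(e1, e2):       # [[e1 + e2]] = pointwise sum of the denotations
    return lambda env: e1(env) + e2(env)

def mul(e1, e2):       # [[e1 * e2]] = pointwise product of the denotations
    return lambda env: e1(env) * e2(env)

# The meaning of "x * 2 + 1" is a mathematical function, fixed forever:
meaning = add(mul(var("x"), num(2)), num(1))
print(meaning({"x": 20}))  # 41
```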


Garren January 17, 2011 at 8:50 am

Reginald,

I have had this discussion several times over the years. Someone says “computers don’t actually think. It’s just transistors flipping, etc. That’s not really ‘thinking.’”

I suspect they may be doubting whether computers can ever have the internal experience of consciousness.


Landon Hedrick January 17, 2011 at 9:04 am

Wait, “we stopped philosophizing about biology”? That will come as a surprise to all of the philosophers who specialize in the philosophy of biology! Maybe they don’t realize that that particular subfield of philosophy has already committed suicide?


Reginald Selkirk January 17, 2011 at 9:36 am

I suspect they may be doubting whether computers can ever have the internal experience of consciousness.

I doubt that you could specify an internal experience which computers could not have. You are using “consciousness” as a magical term of obscurity.


Kutta January 17, 2011 at 9:37 am

The goal of medicine should be to kill itself too. Defeat disease. The goal of psychology should be to kill itself. Heal the mind forever. Etc. Etc. Etc.

Both seem to be non sequiturs to me. Medicine defeats disease, psychology understands and models the psyche. Where is the self-defeat?


Adito January 17, 2011 at 9:41 am

This is one of the few posts you’ve made where I barely agree with anything you say. Kolmogorov complexity must have a firm philosophical foundation to measure complexity. Does it? Or is it still in dispute? I suspect it’s the latter. The same goes for the other subjects you’ve ejected from philosophy. In particular, you’re going to have to show me quite a bit of evidence to demonstrate that the problem of causation has been solved.

I also have strong doubts about whether everything that describes the human condition can be programmed into a computer. Certain questions easily lend themselves to this sort of analysis but why should we say that all questions will? That’s still an open question and we’ll need a lot more philosophy to demonstrate that we should or should not head in this direction. In this essay it seems like you’re just assuming that it already has been decided.


Steven R. January 17, 2011 at 9:53 am

The goal of medicine should be to kill itself too. Defeat disease. The goal of psychology should be to kill itself. Heal the mind forever. Etc. Etc. Etc. If philosophy is defined more broadly as the way a person thinks, then it is laughable to think it could kill itself. Perhaps the goal of consciousness should be to kill itself?

What’s wrong with saying medicine and psychology should kill themselves? I would *hope* that one day menial labor and diseases are things of the past. On the other hand, why should consciousness kill itself? It’s what would allow us to enjoy the end results.

—-

Personally, I can’t help but think of philosophy as interesting exercises of the mind that should not be taken any more seriously than the wild ruminations of a toddler. I ask my little sister how things work and she comes up with elaborate explanations of how the lights turn on or why rain falls. I think philosophy is the same: it’s human ingenuity applied to complex questions, but until we actually examine the circuits of lights or the dynamics of the weather, whatever conclusions we come to in philosophy are nothing but human imagination.


Gregorylent January 17, 2011 at 9:59 am

Mystics read this, smile quietly, go back to what they were doing.


Garren January 17, 2011 at 10:42 am

Reginald,

I doubt that you could specify an internal experience which computers could not have. You are using “consciousness” as a magical term of obscurity.

It is likely computers can duplicate any function of the human brain, but I don’t know of anything in current science which justifies a jump from computational function to internal experience.

My favorite explanation of the Hard Problem of Consciousness is Chalmers’ Facing Up to the Problem of Consciousness. Or is there some satisfying answer I just haven’t heard of? Please share!


thepowerofmeow January 17, 2011 at 11:01 am

If medicine defeats disease then it is no longer needed. If philosophy leads us to some perfect scientific conclusion, then it is no longer needed.

Since everything good in some way tries to fix something bad, saying philosophy should defeat itself doesn’t make any more sense than the medicine comment.

Surely philosophy is trying to think well. And any alleged outward reality is processed through our internal sensations and thoughts – so scientific pursuit must be processed through a philosophy in order to be understood in any way. But I am probably defining philosophy more broadly than the author intended here.


Steven R. January 17, 2011 at 11:04 am

Mystics read this, smile quietly, go back to what they were doing.  

AKA, being ignored by most of the world.

@ ThePowerofMeow:

I still don’t see the problem. If all of philosophy’s questions are answered, then there’s barely a point in continuing to study it, just like medicine shouldn’t seek to perpetuate diseases so that more medicine can be made.


Garren January 17, 2011 at 11:20 am

@Luke

It all depends on whether Philosophy is viewed as a set of problems or a practice. I understand your post title as meaning:

The Goal of Philosophy (practice) Should Be to Kill Philosophy (set of problems)

…which is not nearly as controversial as how some are taking it. Still, I agree it’s worth saying.


Garren January 17, 2011 at 11:22 am

Apparently the preview function supports subscript tags, but the actual posts do not.

Bummer.


thepowerofmeow January 17, 2011 at 11:26 am

Garren,

I’m sorry, I am sure I am not being clear at all. I guess I am just saying that philosophy is a good thing and it seems to disvalue it to claim it shouldn’t have to exist. Medicine and government shouldn’t have to exist, but they do and they are goods (in theory anyway!).

Houses shouldn’t have to exist, because the weather should be nice enough to live outdoors, etc, etc.

I guess I don’t disagree with the post, I am not sure that it helps to think this way, I suppose.


Thomas January 17, 2011 at 11:36 am

Reginald: “I doubt that you could specify an internal experience which computers could not have. You are using ‘consciousness’ as a magical term of obscurity.”

Garren: “It is likely computers can duplicate any function of the human brain, but I don’t know of anything in current science which justifies a jump from computational function to internal experience.”

Exactly. And there is nothing “magical” about internal conscious experience, or as Galen Strawson puts it, “experience, ‘consciousness’, conscious experience, ‘phenomenology’, experiential ‘what-it’s-likeness’, feeling, sensation, explicit conscious thought as we have it and know it at almost every waking moment . . .” This is the most familiar and certain thing in the world to us. And if you can’t make experience fit physicalism, then maybe we have here an argument against physicalism, not against experience.


Reginald Selkirk January 17, 2011 at 11:41 am

Garren: It is likely computers can duplicate any function of the human brain, but I don’t know of anything in current science which justifies a jump from computational function to internal experience.

Obscurantism. I keep asking you to specify your terms, and you keep dodging by switching to new, more obscure terms. Already we have gone from thinking to consciousness and now to “internal experience.”

BTW, your conscious mind is not aware of everything your brain does. Consider the well-known chess board optical illusion. The two squares are the same colour, although they do not appear to be. That’s because there is some shadow compensation filter being applied, but you don’t “feel” it being applied, it just happens, and the result is the only thing your conscious mind “sees.”

Translated into your obscurantist lingo, your “internal experience” is less complete than you would like to imagine. Qualia, my ass.


anon January 17, 2011 at 11:49 am

Reginald,

I don’t understand how the chess board optical illusion is relevant to your discussion with Garren. Can you say more about that? How is the fact that the mind isn’t aware of the processes that generate mental states relevant here?


Ex Hypothesi January 17, 2011 at 11:52 am

Luke,
it seems that you think we only philosophize when we don’t understand what we’re talking about, yet you think the only way to find out is to hand the questions over to the scientists. Is getting more “precise” about what we mean a philosopher’s job or the scientist’s?


Luke Muehlhauser January 17, 2011 at 12:10 pm

Ex Hypothesi,

Getting more precise is usually a collaboration between science and philosophy, though at the early stages this process is usually philosophical. Again, I’m not saying we can ever be rid of the meta-questions that fill philosophy. I’m actually making a rather mild (I think) extrapolation from the past history of philosophy. Centuries ago, almost everything was philosophy, but philosophy has successfully handed a huge number of questions over to various sciences. As we speak, philosophy of mind is being handed off to cognitive science, though philosophers (Fodor, Dennett) are still making valuable contributions and assisting in clarifying certain issues for scientists.


Luke Muehlhauser January 17, 2011 at 12:14 pm

Thomas,

If you want to place your bets on some form of dualism, go for it. But that’s a losing bet, I think. Dualism keeps losing.


Ex Hypothesi January 17, 2011 at 12:26 pm

“Again, I’m not saying we can ever be rid of the meta-questions that fill philosophy.”

Fair enough. That means philosophy has a goal (death) that it can never achieve. How backhandedly Platonic of you!


Reginald Selkirk January 17, 2011 at 12:32 pm

anon: How is the fact that the mind isn’t aware of the processes that generate mental states relevant here?

Because he brought up “internal experience.” I gave an example of the brain/mind at work with no available “internal experience.” Perhaps I am on the wrong track, but since he has not defined his term nor provided any examples, I did what I could.


Ex Hypothesi January 17, 2011 at 12:39 pm

Luke,

I’ve been waiting (for years now) to see your reaction to Kuhn. I still think he’s a serious threat to your fundamentalist scientism on stilts (to borrow a phrase from one of your guests). Have you read The Structure… yet? Perhaps you think your Bayesianism gets you out?


Garren January 17, 2011 at 12:46 pm

Reginald,

Because he brought up “internal experience.” I gave an example of the brain/mind at work with no available “internal experience.”

Demonstrating an unconscious function only shows that our minds operate in a way that can’t be fully explained by internal experience. Doesn’t everyone already agree with that?


Zeb January 17, 2011 at 12:58 pm

Reginald

What we have here is a failure to understand. What is this thing that a computer cannot think? Name it please.

The premise of the OP is that a computer cannot think things that are not precisely defined, and so those things that cannot be programmed into an AI are not worth thinking about. And by ‘precisely defined’ the OP means reduced to numbers and mathematical relationships. I fear that the biases toward answers that can work for AI, and against questions, concepts, and observations that cannot be translated to AI, are a destructive force. I doubt you need me to name any concepts that are too imprecisely defined for a computer to think them right now (but just for starters: free will, qualia, humor…). But there may be some things that are real and that cannot in principle be reduced to numbers, some that could be but that humans will never reduce to numbers, and some that simply won’t be for a very long time. The OP seems to suggest that we should have a bias toward the answers (and formulations of questions) that work for AI, and a bias against thoughts that we can’t make work in AI. That sounds to me like a dangerous prescription that will lead to unpleasant outcomes.


Zeb January 17, 2011 at 1:12 pm

Reginald, to address the second part of your response to my comment, I tentatively agree with you that human thinking, if that refers only to the processing aspect of what the mind does, is purely mechanical and perfectly analogous to computer processing. So I expect that potentially AI could truly think just like humans can. My doubts are about actual human ability to program AI to think like we do, because there may be aspects of reality that are not mechanical/mathematical, and some that we may never be able to define mathematically.


Thomas January 17, 2011 at 1:25 pm

Luke,

the bet is on…

And the claim that “dualism keeps losing” depends on your assumptions. If human agency isn’t reducible, then dualism certainly isn’t losing. And if one has some reasons to believe that there exists a God, then dualism is a pretty good bet.


DaVead January 17, 2011 at 1:37 pm

Ex Hypothesi, I agree with you about Kuhn, though I think he doesn’t go quite far enough. I’m waiting on a new, less-Realist, less classically materialistic scientific paradigm to emerge once we probe deep enough into the mysteries of the quantum and sub-quantum worlds and realize that no neat-and-tidy Scientism-science is possible.

Thomas, I agree with you some more! Also, I don’t think placing your bets on some form of dualism is a losing bet. Dualism is a category of a lot of different theories and isn’t just equivalent to Cartesian substance dualism. What’s key here is that dualism entails that consciousness is somehow fundamental, and its dependency on physical processes is weaker than many physicalists believe. Physicalism, or physicSalism as Strawson calls it, is absurd in that it rests on a Realist interpretation of fundamental and theoretical physics. I think physicalism has gained so much popularity because of its naturalistic aesthetic for the many philosophers who really like the idea of a desert-like atomistic universe.


Sharkey January 17, 2011 at 2:02 pm

DaVead: “I’m waiting on a new, less-Realist, less classically materialistic scientific paradigm to emerge once we probe deep enough into the mysteries of the quantum and sub-quantum worlds…”

Technically, you’ve already got it. Quantum physics is the definition of “less-classically materialistic”. Entanglement, uncertainty, wave-particle duality, the list goes on, all very much “non-classical”. However, from my small amount of studying the subject, I don’t think you’ll find what you’re looking for down there.

To read your post, you believe there is a physical explanation for consciousness, just not the one cognitive scientists are slowly agreeing upon (a heretofore-undiscovered property of quantum systems coupled to specific biological patterns, versus a chemical/electrical computational system). Along with Penrose and Hameroff, you’re in small company, and shoulder the burden of proof.


woodchuck64 January 17, 2011 at 2:15 pm

Zeb,

I doubt you need me to name any concepts that are too imprecisely defined for a computer to think them right now (but just for starters, free will, qualia, humor…)

Free will is an excellent example where precision is important. Are you libertarian or compatibilist? If the latter, I’m certain you got there by forcing “free will” to be precise and coherent. (If the former, my point escapes into the void)

  (Quote)

Tony Hoffman January 17, 2011 at 2:40 pm

“And thus, like a true eliminativist, we have “solved” the hard problem of phenomenal qualia just by ignoring it.”

I don’t think it’s the fault of anyone but philosophers that they can’t describe the “hard problem of phenomenal qualia” in a way that can be investigated. I believe that Dennett feels that the inability of philosophers to describe the problem of qualia in a meaningful way is an indictment of the “problem” (as in, maybe there isn’t one), and I tend to agree with him.

But also to the point, what has philosophy solved regarding qualia? And would you disagree that philosophy has in fact solved a countless number of other problems, namely by framing them in a manner that can actually be investigated?

  (Quote)

Tony Hoffman January 17, 2011 at 2:44 pm

“If human agency isn’t reducible, then dualism certainly isn’t losing.”

Isn’t this just an argument from ignorance?

Can you define dualism in a way that would allow it to be falsified? If not, what would alter your view that dualism is somehow viable?

  (Quote)

Reginald Selkirk January 17, 2011 at 3:28 pm

Zeb: I doubt you need me to name any concepts that are too imprecisely defined for a computer to think them right now (but just for starters, free will, qualia, humor…)

Isn’t the middle example a category error? After all, you don’t think a quale, you experience it.

Which leads me to: Do computers experience qualia too? After all, you might understand completely the instruction set of a computer and the architecture of the program running on it, but you will never experience what it’s like to have FF0000 in your visual input buffer.

Presuming your answer is yes, computers do experience qualia, I then want to turn to Thomas and ask whether computer qualia are also a threat to physicalism.

On a tangent: free will is so poorly defined that humans have no business believing they have it.

  (Quote)

Reginald Selkirk January 17, 2011 at 3:35 pm

Garren: Demonstrating an unconscious function only shows that our minds operate in a way that can’t be fully explained by internal experience. Doesn’t everyone already agree with that?

I guess I must have been on the wrong track. Tell me more about these internal experiences that are such a threat to AI and to physicalism. Are they a property of human minds only, or do other animals experience them? How about bacteria, for example? We may be able to understand at a scientific level, and experiment with, something like chemoreception, but after all we will never experience what it’s like to have malic acid trigger your chemoreceptors in exactly the same way that E. coli experience it.

What about plants? They don’t have minds in quite the same way we do, but certainly plants have experiences. We will never know exactly what it’s like to be taking in full sun on all leaf surfaces while photosynthetically churning away.

  (Quote)

Reginald Selkirk January 17, 2011 at 3:42 pm

Ex Hypothesi, I agree with you about Kuhn, though I think he doesn’t go quite far enough. I’m waiting on a new, less-Realist, less classically materialistic scientific paradigm to emerge once we probe deep enough into the mysteries of the quantum and sub-quantum worlds and realize that no neat-and-tidy Scientism-science is possible.

This reminds me of a biography of Max Delbruck I once read. He was trained as a physicist, but switched over to biology. He was a fairly big name in 20th century genetics, particularly phage genetics. He started his career expecting that he would be helping to characterize the “life force” that set living things apart from mundane chemistry and physics. It dawned on him very slowly over the course of his career that no such thing was being characterized at all.

  (Quote)

Reginald Selkirk January 17, 2011 at 3:48 pm

Thomas: And thus, like a true eliminativist, we have “solved” the hard problem of phenomenal qualia just by ignoring it.

Yes, in pretty nearly the same way we now ignore the ether, and phlogiston, and caloric, and epicycles. If science has a better way to proceed, the ideas of the past get shunted aside. This is entirely separate from proving the old ideas false or incoherent; they are simply unproductive. If we can demonstrate that the new ideas do an adequate job of explaining everything that is explainable, then Occam’s razor will be applied, and eventually you will find yourself needing to justify a place for the old concepts.

  (Quote)

Zeb January 17, 2011 at 3:54 pm

Reginald,

Isn’t the middle example a category error? After all, you don’t think a quale, you experience it.

I mean that at least at the moment and perhaps forever, we can’t program computers to think about qualia because we can’t define them (the qualia) in mathematical terms. However we can think about qualia because we know them experientially. I don’t think there could ever be a way to know if computers experience qualia, but I would not expect them to.

Your statement about free will seems to make my point – we obviously can and do think about free will, but it is not (and perhaps cannot be) defined in a quantitative way such that we could program computers to think about it.

  (Quote)

sqeecoo January 17, 2011 at 3:57 pm

Luke,

Well, I agree that philosophers often quibble endlessly about the meaning of words and issues related to their meaning. That is futile. We can agree that it’s impossible to say precisely what you mean – simply because every definition uses terms that are themselves in need of definition, creating an infinite regress.

However, this is not all philosophy does. Some very concrete questions can be discussed. The application of Bayes’ theorem to scientific method, for instance. As a critical rationalist (i.e. Popperian) I for one disagree with your position on its viability for assessing evidence (interestingly enough, giving evidence faces the same unsolvable problem of infinite regress as giving definitions).

Philosophy is where the method of science is discussed.

Even your claim that “If philosophers want facts about the world or even facts about values, the way forward is to figure out how to hand over such questions to the scientists” is really a philosophical position.

Of course, all this depends on what you mean by “philosophy” :P

If you use the word in a narrow enough sense, meaning “abstract speculation and defining terms”, and the word “science” in a wide enough sense to include discussing the nature and methods of rationality and morality, then of course I agree with you almost completely. We needn’t quibble about words, as long as we understand each other.

  (Quote)

Reginald Selkirk January 17, 2011 at 4:04 pm

Zeb: I mean that at least at the moment and perhaps forever, we can’t program computers to think about qualia because we can’t define them (the qualia) in mathematical terms. However we can think about qualia because we know them experientially. I don’t think there could ever be a way to know if computers experience qualia, but I would not expect them to.

Sorry, I still see a confusion on your part between thinking and experiencing. Computers certainly do not experience the same qualia we do (although see my question about whether they experience their own set of qualia), but I don’t see any reason why they couldn’t think about qualia.

Your statement about free will seems to make my point – we obviously can and do think about free will, but it is not (and perhaps cannot be) defined in a quantitative way such that we could program computers to think about it.

Our own brains have done such a poor job of it, I certainly hope we won’t want computers to mimic us precisely in this regard. Contra-causal free will really is a simple question: if you are a naturalist, the notion that there is some part of our minds which is outside of the natural world is absurd. The conclusion is clear; it is not a good advertisement for human thinking that so many people have trouble accepting that result. So what they do instead is redefine free will, because after all, they are sure that they have it, so it can’t be something like contra-causal free will, which is incoherent. Instead of properly rejecting it, they redefine it. I should hope an intelligent computer would see this for the cowardly dodge that it is.

  (Quote)

Zeb January 17, 2011 at 4:08 pm

Woodchuck, I frankly don’t understand why compatibilists use the term “free will”. Certainly I don’t understand the subtleties of compatibilism, but I don’t see how “free” or “will” are meaningful under determinism. I am a libertarian in some sense. Your vaguely disparaging dismissal suggests that you agree that libertarian free will cannot be defined mathematically, and so I would ask not how computers could have it, but how could they even think about it?

I challenge the assertion, made at least in the OP if not implied by several commentors, that a concept that is not (or cannot be) defined mathematically is therefore imprecise and ill-defined. What is lacking in the definition of free will? It is the ability of a person to make an undetermined choice.

  (Quote)

Reginald Selkirk January 17, 2011 at 4:31 pm

Zeb: … and so I would ask not how computers could have it, but how could they even think about it?

I ripped this out of context because it is relevant to a question I posed earlier. I cannot experience computer qualia (e.g. the experience of having FF0000 in my visual input buffer), and yet I can think about it and talk about it. I can even think about and talk about Zeb qualia, even though I cannot experience what it is to be Zeb seeing red. Qualia cannot be shared, so my presumption that Zeb seeing red is pretty much like me seeing red is not demonstrable.

  (Quote)

Zeb January 17, 2011 at 4:37 pm

Reginald, I’m not saying computers could not think about qualia, I’m saying that we certainly can’t now program them to think about it because we can’t define it mathematically, and it’s possible that we’ll never be able to do so. If there is a way for computers to think about these kinds of topics, I expect it would develop for the AIs in the same way it did for us: an undirected evolution in an ecosystem of overlapping processes. My whole point was to challenge the implication of Luke’s post that any concept that is not amenable to being handed over to computer programmers is not worth thinking about. He presents this dichotomy, which I think is false:

The goal of philosophy should not be to continue to give vague and mysterious answers to difficult questions. The goal of philosophy should be to figure out how to hand over its factual questions to scientists, and its conceptual questions to computer programmers, so that these questions can be answered.

I don’t want philosophers to give vague and mysterious answers, nor do I want to hand all thought over to scientists and computer programmers. Even if we can never hand off the questions of qualia to computer programmers; even if we can never answer them in a way that publicly closes the questions, I say we should still think about them. We can at least clarify the questions, catalog the potential answers, and present to each person an opportunity to choose which answers if any they will incorporate into their lives. And while trying to translate the questions to computer programming is surely a great exercise that might yield fruit, I don’t think it should be the ultimate goal of philosophy, and meeting that goal should not be seen as the end of the question.

  (Quote)

Zeb January 17, 2011 at 4:48 pm

I cannot experience computer qualia (e.g. the experience of having FF0000 in my visual input buffer), and yet I can think about it and talk about it.

What do you mean “think about it”? You put it into words, and knock those words around, but what are they referring to in your mind? I cannot think about computer qualia, and I agree that I cannot think about your qualia unless your qualia are the same as mine.

  (Quote)

Tony Hoffman January 17, 2011 at 5:20 pm

Zeb: Reginald, I’m not saying computers could not think about qualia, I’m saying that we certainly can’t now program them to think about it because we can’t define it mathematically, and it’s possible that we’ll never be able to do so. If there is a way for computers to think about these kinds of topics, I expect it would develop for the AIs in the same way it did for us: an undirected evolution in an ecosystem of overlapping processes

“It’s possible we’ll never be able to do so,” is a vapid expression. You need to explain how it is impossible; otherwise you just appear frightened that something might become so.

Why in the world would you expect AI’s to evolve (without direction) the answer to a problem when the whole point of designing them with intelligence is precisely to avoid that cumbersome process?

Zeb: “My whole point was to challenge the implication of Luke’s post that any concept that is not amenable to being handed over to computer programmers is not worth thinking about.”

Yeah, I think you maybe misunderstand the point, which is that philosophers, in order to be productive, should strive to adopt a rigor and precision that those who work with computers have learned to adopt. The hard questions, the ones we all wonder about, might well be better served if this were so.

I agree, though, that the implication could be that fuzzy questions are not worthy of contemplation. But an interesting benefit of this approach is to ask philosophers of things like qualia to try, once again, to explain exactly what the problem is. Their inability to find a resolvable question (to an AI or our own intelligence) might just as well reflect poorly on the endeavor as you seem to think it does on those of us who ask for more clarification.

  (Quote)

Luke Muehlhauser January 17, 2011 at 5:49 pm

Ex Hypothesi,

No, but it can approach its own death asymptotically. :)

  (Quote)

woodchuck64 January 17, 2011 at 5:50 pm

Zeb,

What is lacking in the [libertarian] definition of free will? It is the ability of a person to make an undetermined choice.

A person has desires and he chooses according to the strongest desires he has at the moment of decision. Desires determine choice. However, desires are ultimately beyond a person’s control to choose or change without a preexisting desire to do so. To claim that a person can choose in spite of his desires is to claim that a person can choose against his own being, which lacks coherency to me.

  (Quote)

Luke Muehlhauser January 17, 2011 at 5:51 pm

Ex Hypothesi,

I’ve read large sections of SSR, though modern textbooks on the subject are more useful.

Science is a social process, and highly flawed. Look at medical science these days, for Pete’s sake! But it remains the best tool we have. The fact that it works better than anything else we know is not threatened by a clearer understanding of how it gets done.

  (Quote)

Leon January 17, 2011 at 7:27 pm

You insist that there is something a machine cannot do. If you will tell me precisely what it is that a machine cannot do, then I can always make a machine which will do just that!

Write a program that, taking as its input any other program, outputs “HALT” if that program halts, or “LOOP” if it loops forever.
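[This is Turing's halting problem. The diagonal argument behind it can be sketched in a few lines of Python — a hypothetical illustration, where `make_counterexample` and the toy decider `always_no` are names invented here, and `halts` stands in for any proposed halting-decider:]

```python
# Suppose someone hands us halts(p): a decider claimed to return True
# if running p() eventually stops, and False if it loops forever.
def make_counterexample(halts):
    """Build a program the proposed decider must misjudge."""
    def g():
        if halts(g):
            while True:      # decider said "halts" -> loop forever
                pass
        return "halted"      # decider said "loops" -> halt at once
    return g

# A toy candidate decider that claims every program loops forever:
def always_no(prog):
    return False

g = make_counterexample(always_no)
print(g())  # "halted" -- the decider said g loops, yet g halts.
# A symmetric failure traps any decider that answers True: g would
# then loop forever, refuting the "halts" verdict. No halts() wins.
```

The same construction defeats every candidate decider, which is exactly why von Neumann's "tell me precisely what a machine cannot do" has at least one precise answer.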

Luke, I think you’re conflating two goals without being explicit about their differences: getting answers that work, which you argue is the domain of science; and being precise, which you claim is the domain of maths and theoretical computer science. The problem is that it’s often difficult or impossible to push in both directions at once.

Science is often extremely imprecise, yet “works”. Just consider pretty much any science other than physics before the 20th century. The use of statistics in e.g. medical science is a sign of imprecision that we’re trying to deal with as best we can, not an indication that all our distinctions are mathematically crisp.

Similarly, maths and computer science are often extremely precise and insightful, but don’t provide “working” answers. For example, these responses:

What does it mean, precisely, to say that one hypothesis is simpler than another? The answer (lower Kolmogorov complexity) came not from philosophy, but from computer science. What does it mean, precisely, to proportion one’s beliefs to the evidence? The answer (Bayes’ Rule) came not from philosophy but from mathematics, and especially from implementations of Bayes’ Rule in AI (Bayesian networks).

… describe ideal reasoning machines, rather than flesh-and-blood human beings. To get from these answers to heuristics useful for real people requires condensing them into e.g. vague quotable quotes (“rationality quotes of the month”) and warnings about cognitive bias that in many cases existed well before the maths was so comprehensively developed. We need to trade off precision for workability.
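[The "ideal reasoning machine" half of this trade-off is easy to make precise: a single Bayes' Rule update is a few lines of arithmetic. A minimal sketch, where the probabilities are made-up illustration values, not anything from the post:]

```python
# Proportioning belief to evidence with one application of Bayes' Rule.
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Return P(H|E) = P(E|H)P(H) / P(E)."""
    numerator = p_e_given_h * prior
    evidence = numerator + p_e_given_not_h * (1 - prior)
    return numerator / evidence

# A hypothesis with a 1% prior, and evidence that is 90% likely if the
# hypothesis is true but only 5% likely if it is false:
posterior = bayes_update(0.01, 0.90, 0.05)
print(round(posterior, 3))  # 0.154 -- stronger, but still unlikely
```

The math is crisp; the imprecision Leon points at is in getting honest numbers for the prior and likelihoods in the first place, which is where flesh-and-blood reasoners fall back on heuristics.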

Similarly, business, politics, ethics, and spirituality/religion — all areas touched on by traditional philosophy — are notoriously difficult to discuss precisely. But they are areas where people want or even need responses that “work”, even if they don’t work perfectly, or it’s very difficult to weigh up competing theories.

  (Quote)

Zeb January 17, 2011 at 7:47 pm

woodchuck,

To claim that a person can choose in spite of his desires is to claim that a person can choose against his own being, which lacks coherency to me. 

I’m not sure I agree with that in principle, but it does make sense and it fits with my experience and the version of libertarianism I hold. I do not hold a radical form of libertarianism, where a person can choose literally anything, utterly independently of any prior facts. I am only claiming that at the moment of decision I can choose freely from among a set of options present to me. These options may all accord with respective desires, and those desires may all be physical processes formed mechanically/deterministically, so that any choice I make is in accord with a deterministic desire. All I am saying is that I am free to choose from among a [constrained] set of options, be they desires or whatever else. Why do you think a person must always and only choose in accord with his strongest desire?

  (Quote)

JS Allen January 17, 2011 at 8:24 pm

I understand the criticisms, but if someone is already committed to naturalism, this is a pretty charitable attitude toward philosophy. At least it’s portraying philosophy as informing and motivating science, which is vastly better than the antagonistic “Philosophy has done nothing but fail; naturalism nothing but succeed” attitude one sometimes encounters.

Robin Hanson made a brilliant point about the necessity for academics (including philosophers) just the other day.

  (Quote)

MarkD January 17, 2011 at 11:07 pm

There is a class of procedures for which the output is not predictable without running the procedure itself. Many simple recursive programs demonstrate that property. Some of the more difficult aspects of human cognition like creativity and abduction may be more like those procedures than like the methods carved out in Good Old Fashioned AI (GOFAI). Likewise, it may be possible that there is no compact philosophical explanation or logical procedure for certain aspects of complex system functionality, in which case the argument becomes one of whether data-driven science or simulation will kill both theoretical science and philosophy.
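[A concrete instance of MarkD's class of procedures is the Collatz iteration — a sketch in Python; no known closed-form formula predicts the result, so the only way to get the answer is to run it:]

```python
def collatz_steps(n):
    """Number of Collatz steps from n down to 1. No known shortcut
    predicts this count in advance; you have to run the procedure."""
    count = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        count += 1
    return count

print(collatz_steps(6))   # 8
print(collatz_steps(27))  # 111 -- far longer than its neighbors
```

A three-line rule, yet the trajectory lengths jump around unpredictably, which is the sense in which running the procedure is the only available "explanation" of its output.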

  (Quote)

David January 18, 2011 at 3:56 am

Luke, logical positivism is dead. You have to understand that. Really!

  (Quote)

Jordan Peacock January 18, 2011 at 5:47 am

I think that determining questions, which are subsequently handed off to science, is one major function of philosophy; and this will continue to deplete itself.

However, there are other questions in philosophy, and they have somewhat longer half lives. The three that came to mind last night were cross-disciplinary synthesis, critiquing methodology and building coherence frameworks. Each of these has their counterparts elsewhere, and perhaps ultimately philosophy becomes a cultural and scientific appendage, but the need for a rigorous philosophy is unlikely to ever disappear entirely.

  (Quote)

Reginald Selkirk January 18, 2011 at 7:17 am

What is lacking in the [libertarian] definition of free will? It is the ability of a person to make an undetermined choice.

Libertarian free will means that our choices are free from the determination or constraints of human nature and free from any predetermination by God.

Is anyone familiar with the Theopedia site? Is it any good? Because if that is an accurate description of Libertarian free will, then Libertarian free will is a joke that I can’t take seriously. Anyone who considers themself to be free of the constraints of human nature is seriously deluded.

  (Quote)

PDH January 18, 2011 at 7:35 am

Isn’t there a distinction between Libertarian and Contra-causal free-will?

  (Quote)

Steven R. January 18, 2011 at 9:04 am

woodchuck,
I’m not sure I agree with that in principle, but it does make sense and it fits with my experience and the version of libertarianism I hold. I do not hold a radical form of libertarianism, where a person can choose literally anything, utterly independently of any prior facts. I am only claiming that at the moment of decision I can choose freely from among a set of options present to me. These options may all accord with respective desires, and those desires may all be physical processes formed mechanically/deterministically, so that any choice I make is in accord with a deterministic desire. All I am saying is that I am free to choose from among a [constrained] set of options, be they desires or whatever else. Why do you think a person must always and only choose in accord with his strongest desire?

That sounds like compatibilism to me.

Now, the reason people say “strongest desire” is because it is the option that you are most motivated to chose. After all, if it isn’t desires that guide your decision making process then what is? I really don’t understand this “Free Will” entity that makes a decision without using desires as a basis of establishing which decision may be best to take. I mean really, what is the basis of its decision making process? If not desire, then it seems it’s just random, but then, that isn’t very “free” is it? At any rate, I think that the idea of making decisions that aren’t based on your strongest desire involve an entity that defies your very own self–an entity without an identity (rhyme unintentional).

@ Reginald:

LOL, I’m 99.99% sure that’s NOT what Libertarian Free Will means.

  (Quote)

Kip January 18, 2011 at 9:11 am

Luke: I had written a partial opposing view to what you had written… but then I realized it depends on your definition of “philosophy”. I think “philosophy” also includes logic, the study and practice of using deductive and inductive reasoning, and epistemological justifications for science. So… those things we can’t kill.

  (Quote)

Reginald Selkirk January 18, 2011 at 9:14 am

OK, I had not run across Theopedia before.

Wikipedia sez:

Free will is the putative ability of agents to make choices free from certain kinds of constraints. Historically, the constraint of dominant concern has been the metaphysical constraint of determinism. The opposing positions within that debate are metaphysical libertarianism, the claim that determinism is false and thus that free will exists (or is at least possible); and hard determinism, the claim that determinism is true and thus that free will does not exist.

OK then, I reject metaphysical libertarianism. That was easy.

This sounds interesting:

Another view is that of hard incompatibilism, which states that free will is incompatible with both determinism and indeterminism. This view is defended by Derk Pereboom.

  (Quote)

Luke Muehlhauser January 18, 2011 at 9:23 am

David,

Oh, indeed. I’m not a logical positivist.

  (Quote)

woodchuck64 January 18, 2011 at 12:39 pm

Zeb,

I am only claiming that at the moment of decision I can choose freely from among a set of options present to me. These options may all accord with respective desires, and those desires may all be physical processes formed mechanically/deterministically, so that any choice I make is in accord with a deterministic desire.

I agree with Steven R. that up to there this sounds like compatibilism. The only free will worth having is the freedom to do what you really want to do; not being coerced by someone else into doing something you don’t want to do.

All I am saying is that I am free to choose from among a [constrained] set of options, be they desires or whatever else. Why do you think a person must always and only choose in accord with his strongest desire

I don’t know what it means to choose based on something other than desires. I would have to have a desire to choose based on non-desires first, wouldn’t I? And that would collapse the whole effort. Or, as Steven R. suggested, a free will decision would be like being randomly possessed by an alien force that temporarily overrides one’s desires.

(Also I don’t want to imply that we act according to any one, strong desire necessarily but rather that we have many desires in potential conflict at any given moment and our decisions will be “made” by the desires that get the upper hand, whether those desires are altruistic, selfish or neither. Also, we are those desires, they are not external forces controlling us like puppets; rather, they define us as human beings. A critical part of what makes me “me” is what I want.)

  (Quote)

Zeb January 18, 2011 at 7:55 pm

Steven,

After all, if it isn’t desires that guide your decision making process then what is?

Nothing? Why must there be anything that guides my choosing? And if you are really talking determinism, then what separates the things-that-guide from the decision-making-process? I think your position is that desires are more than guides for the process; they are the determining causes that lead to an effect, not a choice.

Just to be sure this conversation is not about mismatched terms; when I talk about desires I am referring to an experienced sensation of wanting. Do you mean something different?

Now, the reason people say “strongest desire” is because it is the option that you are most motivated to chose.

That seems like a post-hoc definition; I may claim after a particular act that I freely chose to fulfill desire #1 from among a set of desires, and you can just say that desire #1 by definition was my strongest desire. But if it was not the desire that felt the strongest at the moment of decision, then I will feel justified in claiming that I chose freely, exercising my faculty of will.

Let me see if I can paraphrase the end part of your post: The only meaning for personal choice you can see is that it is the effects caused by desires. If any effects are not caused by desires they are random, and random effects cannot be considered personal choices. Also, a person’s desires, and particularly a person’s strongest desire, are a necessary part of the person’s identity. Therefore, any effect that is not caused by a person’s strongest desire is not caused by the person, but by some other entity which does not share the person’s identity. Is that a fair paraphrase?

Honestly I’m having a hard time understanding what you guys are having a hard time understanding. Let me say that I am not trying to argue that free will exists; I am only trying first of all to find out why so many people say the concept is meaningless or incoherent, and maybe second to show that it is meaningful and coherent and thus could possibly exist. So, I understand that from a retrospective, statistical perspective, anything that is not determined is random. But from an in-the-moment, narrative perspective that is not the case. If someone fans out a deck of cards and tells me to pick one, once I have decided to follow my desire to pick a card (or my desire has caused me to commit to picking one, if you prefer), which desire will determine which of the cards I pick? Surely most people in that situation would report having no desire for any card in particular, yet they would report making a personal, intentional choice to pick card #34 or whichever. Perhaps their choice seems random, even to them, but it was a willful, intentional, personal choice nevertheless. Of course there could be all kinds of complex deterministic factors that caused it to be card #34 (though not desires, I’d say), and in that real-world case I’d bet there were such deterministic factors. But as an analogy I think it shows that the concept of a person choosing from among a set of options without desires to guide (force) them is a meaningful and coherent concept. Another analogy would be if I were stuck in a maze and I came to a Y, and down one lane my favorite food was being served, and down the other my favorite movie was about to start playing, and while I desired both, neither desire felt stronger. I could and would choose to go down one lane. After the fact there would be no appearance of randomness – I chose what I chose because I desired it. But why did I choose one desire and not another to pursue?
Again there could be complex unconscious determining factors, but it could also be that a person just has a faculty to choose freely from among a set of available options. In both cases, the cards and the maze, the set of options available were determined by the person’s past history, and the course they took would be explained during and after the choice by a teleology relating to the person’s future, so I don’t see what separates the freedom I propose from the person’s identity.

  (Quote)

Zeb January 18, 2011 at 8:11 pm

Woodchuck, I think most of my response to your last comment would be pretty much what I just wrote to Stephen, but I want to ask you about this:

Also, we are those desires, they are not external forces controlling us like puppets; rather, they define us as human beings. A critical part of what makes me “me” is what I want.

How do you come to that belief? My immediate experience is the opposite; desires to me seem external. I realize that “self” is an extremely contentious concept that is necessarily constructed, and you are free to construct it however you choose. But it seems your way would mean that if your desires change, you are no longer “you”. I do not observe it to be the case nor do I find reason to believe that when my desires change my identity does, and so I wonder if that’s what you mean, and if so why?

  (Quote)

Steven R. January 18, 2011 at 8:58 pm

Zeb: Nothing? Why must there be anything that guides my choosing?

Because if you have no reason to do anything, then you have no reason to choose A over B and thus choice is just absolutely random. Your choice isn’t a reflection of you, but of chance.

And if you are really talking determinism, then what separates the things-that-guide from the decision-making-process? I think your position is that desires are more than guides for the process; they are the determining causes that lead to an effect, not a choice.

To be honest, I don’t really have much of a position on the matter, mostly because I have just skimmed summaries on the Free Will-Determinism-Incompatiblism-Compatibilism Debate, but, if I had to say anything, it’s that human choices are guided by three things: experience, brain structure, and evolutionary goals. Each gives a person a unique set of goals but, as you can note, none are actually chosen; I argue that this is absolutely necessary.

This is because to argue that a choice can be made without some set of pre-existing conditions seems incoherent. My argument goes as follows:

The brain is what defines who someone is, as it is what analyzes and controls what the body does. Without the brain, the body can make no choice, and thus has no will (and thus, no free will). Thus, to say that free will exists outside of the boundaries set by the brain would create a paradox.

Next, consider the claim that Free Will must lie outside of experiences. Suppose we make a baby’s brain unable to see, touch, etc., and it loses all of its memories. What would that baby think of? Quite simply, nothing. It would not be able to imagine colors, as the mind must first experience them to know about them. It would not be able to imagine anything sentient, as its brain would not have the knowledge to do so (if one has never felt something soft, one cannot imagine what softness would feel like). In fact, the brain would be unable to imagine anything at all, not even shapes, sounds, etc.

This too creates a paradox: the mind, having no options, would be unable to make any choices, something fundamental to Free Will, and the baby subjected to this would be unable to ask to experience anything, as it has not experienced anything. In other words, the brain would be more or less “dead”.

Lastly, what if the brain chooses its own attributes? But then, the brain would have no preference between being a cold-hearted killer or a charitable saint. It’s these pre-existing conditions (the determinism) that, oddly enough, allow free will (here understood as “made or done freely or of one’s own accord”; experiences and other pre-existing factors are what let us know what our own accord is). Without any of these factors, we really don’t have much to do and nothing to define us. I hope that, although lengthy, this makes the point that I believe pre-existing conditions are a must if Free Will is to make sense :\

Just to be sure this conversation is not about mismatched terms; when I talk about desires I am referring to an experienced sensation of wanting. Do you mean something different?

Hm…I’d say desires are the accumulation of brain structure, experiences and evolutionary goals, although one could argue that these all come down to mere brain structure, which, I suppose, is what someone wants.

That seems like a post-hoc definition; I may claim after a particular act that I freely chose to fulfill desire #1 from among a set of desires, and you can just say that desire #1 by definition was my strongest desire. But if it was not the desire that felt the strongest at the moment of decision, then I will feel justified in claiming that I chose freely, exercising my faculty of will.

Yes, I do see your point and it’s a question that’s always bugged me. I suppose the only thing I can say here is:

1. If it wasn’t the thing you wanted to do most, why did you choose it? Chances are that you had some underlying motivations that you didn’t wish to admit to yourself or weren’t fully conscious of, hence why it seems it was “less desirable.”
2. Reason tells us that people will never choose anything they don’t want, and that, according to their philosophy (here meaning world-view), experiences, etc., they will always choose the most beneficial thing, even if in reality it isn’t.

I know this isn’t wholly satisfying and I’ve probably meandered more than anything else…

Let me see if I can paraphrase the end part of your post: The only meaning for personal choice you can see is that it is the effects caused by desires. If any effects are not caused by desires they are random, and random effects cannot be considered personal choices. Also, a person’s desires, and particularly a person’s strongest desire, are a necessary part of the person’s identity. Therefore, any effect that is not caused by a person’s strongest desire is not caused by the person, but by some other entity which does not share the person’s identity. Is that a fair paraphrase?

Yeah. I just added in why I think that it’s also necessary up above, in case you’re skimming my post rather than reading it (don’t blame you)

Honestly I’m having a hard time understanding what you guys are having a hard time understanding. Let me say that I am not trying to argue that free will exists; I am only trying, first of all, to find out why so many people say the concept is meaningless or incoherent, and maybe second to show that it is meaningful and coherent and thus could possibly exist. So, I understand that from a retrospective, statistical perspective, anything that is not determined is random. But from an in-the-moment, narrative perspective that is not the case. If someone fans out a deck of cards and tells me to pick one, once I have decided to follow my desire to pick a card (or my desire has caused me to commit to picking one, if you prefer), which desire will determine which of the cards I pick? Surely most people in that situation would report having no desire for any card in particular, yet they would report making a personal, intentional choice to pick card #34 or whichever. Perhaps their choice seems random, even to them, but it was a willful, intentional, personal choice nevertheless. Of course there could be all kinds of complex deterministic factors that caused it to be card #34 (though not desires, I’d say), and in that real world case I’d bet there were such deterministic factors. But as an analogy I think it shows that the concept of a person choosing from among a set of options without desires to guide (force) them is a meaningful and coherent concept.

No, I don’t think your analogy works. First, it just means that there isn’t much interest, but certainly not that there is no desire. People may not go for the cards at the very end of the deck because that wastes too much energy; that’s a desire right there, to conserve energy. They may also not want to pick a card from the very front, as they’ve played “Old Maid” and got the Joker by choosing easily picked cards, so the desire (even if irrational) to avoid that “nasty” situation may make them move more to the side. So on, so forth. Merely because a person isn’t directly aware of all their desires doesn’t mean that they aren’t there.

Another analogy would be if I were stuck in a maze and I came to a Y, and down one lane my favorite food was being served, and down the other my favorite movie was about to start playing, and while I desired both, neither desire felt stronger. I could and would choose to go down one lane. After the fact there would be no appearance of randomness; I chose what I chose because I desired it. But why did I choose one desire and not another to pursue? Again there could be complex unconscious determining factors, but it could also be that a person just has a faculty to choose freely from among a set of available options.

I think you’re associating “Free Will” with “free from any kind of thought, conscious or otherwise”. I’m not convinced that such random, capricious decisions even exist (indeed, what decision isn’t guided by the brain?!) or that they’re a reflection of free will. Again, it seems random because, if indeed you had no particular preference and your brain gave it no thought, it’s like saying “but suppose I had a coin that didn’t have more weight on one side than the other (in context, no desire is stronger than the other), and then I flip it and it isn’t affected by air resistance to favor one side, etc. Surely it isn’t chance that determines where the coin lands!”. That just doesn’t make much sense, now does it?

In both cases, the cards and the maze, the set of options available were determined by the person’s past history, and the course they took would be explained during and after the choice by a teleology relating to the person’s future, so I don’t see what separates the freedom I propose from the person’s identity.  

I don’t really understand how the choices, if indeed no desires or any thought guided them, can be understood as “free” since they merely reflect chance. Anyway, forgive my obscenely long post, but this is just an amateur’s perspective :P

cl January 18, 2011 at 10:45 pm

Garren,

I suspect they may be doubting whether computers can ever have the internal experience of consciousness.

Which, to me, is simply the “problem of other minds” repackaged, which means… philosophy must resurrect itself!

Reginald Selkirk January 19, 2011 at 6:42 am

woodchuck64: I don’t know what it means to choose based on something other than desires.

My desire is to fly, unassisted by technology. I will have to make my choice on this based not just on desire, but on reality.

Reginald Selkirk January 19, 2011 at 6:44 am

Garren: I suspect they may be doubting whether computers can ever have the internal experience of consciousness.

I doubt whether they could make an argument for that that wasn’t arbitrary, question-begging or circular.

Icebrand January 19, 2011 at 7:21 am

Many people seem to object to cryonics on the grounds that failure to be conscious for a sufficiently extended period of time would mean that a different person would exist in their place after reanimation. In other words rather than objecting purely on the statistical matter of whether it would work or not in terms of measurable preservation of memory and personality, they further question whether the person would really survive even after the event. I’m not sure this can be answered by any kind of empirical argument — yet it has obvious moral implications in terms of human lives that could be saved by cryonics.

That entire philosophical question is distinct from whether a complete organic copy of the individual is a valid continuation of their individual experience, which is in turn somewhat distinct from whether the same is true of a silicon or other non-biological analogue created from a brain scan, which differs in turn from the question of whether the same is true of a purely digital simulation.

Empirical science can tell us whether individuals act substantially the same and remember the same things, but the qualia questions strike me as difficult to answer by any kind of empirically based thought experiment. For all we know, the ordinary unconsciousness of sleep or the extreme unconsciousness of anesthesia and hypothermia, or even the mild unconsciousness of the attention wandering slightly, constitutes an interruption in whatever definition of personal identity you use. Memories and function are the only really measurable (and measurably relevant) form of continuity.

That said, I’d probably prefer to have my original organic brain repaired during reanimation than replaced by any of the above mechanisms. And if there were a way of avoiding cryostasis or sleep entirely, this would also be preferable.

Steven R. January 19, 2011 at 8:02 am

woodchuck64: I don’t know what it means to choose based on something other than desires.

My desire is to fly, unassisted by technology. I will have to make my choice on this based not just on desire, but on reality.

Even then, isn’t that still based in desires? You have the desire to not attempt to fly without some mechanism because you know such attempts will fail. You have the desire to conform to what we believe reality is, and it outweighs your desire to live a fantasy. You also have a desire to avoid potentially harmful situations. All in all, desires are affected by our knowledge of reality.

woodchuck64 January 19, 2011 at 1:06 pm

Zeb,

My immediate experience is the opposite; desires to me seem external.

If you had no conscious desires, you would be effectively indistinguishable from someone in a coma. So clearly some intrinsic desires are necessary to be conscious human beings in the first place. And if some are, why not all (in the absence of a theoretical model for dualism)?

I would certainly agree that changing desires changes me as a person; I am nothing like the person I was 20 years ago, I barely know that person.

But I think we’re getting side-tracked. It doesn’t really matter whether your desires are located in your brain or somehow partitioned between the Good Nature and Bad Nature of the soul. It doesn’t really matter if you make decisions according to your desires or decisions according to the mysterious whims of the soul. The main point is that you make decisions because of who you are. The will is not free; it is chained to who you are. To be truly responsible for your decisions, you must not only be able to redefine yourself at any given moment (change your genetic code, erase memories and experiences, upgrade and overhaul your soul), you must also have the will to do so in the first place. And as we know, if we really have the will to do the right thing, we do it. Where’s the freedom in that?

enigMan January 31, 2011 at 7:39 am

The idea that philosophy should turn into science goes back at least to the positivists, but is refuted by good philosophy. Popper was a good philosopher, and he rejected the idea of having to give definitions of key concepts, for example. Once we have lots of definitions, we have lots of answers, but we still have only one reality. Which definitions are best is often itself a philosophical question. We cannot, for example, simply reduce philosophy to a mathematical question if there remains the philosophical question of what numbers are and what they should be. There have been answers, but the question of how good they are is philosophical. Standard mathematicians work within ZFC set theory, so their numbers are ZFC’s von Neumann ordinals, but they don’t give reasons for that choice. Were they to try to, they would be doing philosophy. And Popper would say that causation involves single-case propensities. Etc.

Anonymous February 2, 2011 at 1:07 pm

As a practical matter, we’ll still need professional academic philosophers who write philosophy papers to other philosophers, trying to influence and eventually destroy the field from the inside. Yes, we could just ignore philosophers after they become irrelevant, but that wouldn’t be as satisfying as the complete abolishment of all philosophy departments, journals, and publishers. Right now, philosophy, as an academic field, is still alive and well, and perhaps stronger than ever. As with religion, intellectual death is not enough.

Garren April 5, 2011 at 10:14 pm

I was reminded of this post when I listened to the last chapter of Russell’s The Problems of Philosophy:

‘as soon as definite knowledge concerning any subject becomes possible, this subject ceases to be called philosophy, and becomes a separate science. The whole study of the heavens, which now belongs to astronomy, was once included in philosophy; Newton’s great work was called ‘the mathematical principles of natural philosophy’. Similarly, the study of the human mind, which was a part of philosophy, has now been separated from philosophy and has become the science of psychology. Thus, to a great extent, the uncertainty of philosophy is more apparent than real: those questions which are already capable of definite answers are placed in the sciences, while those only to which, at present, no definite answer can be given, remain to form the residue which is called philosophy.’

Joe September 7, 2011 at 10:26 am

Hey, speaking of defining our terms, Harry Frankfurt usefully defined “bullshit” as “making claims about a subject when you know nothing about it”. If he’s right, then this post must be one of the finest examples of bullshit ever to grace the internet. Congratulations, that’s some pretty elite company, there!

bpcookson September 9, 2011 at 1:01 pm

I really enjoyed your essay, but you’re just playing with words.

The goal of any endeavor is to offer meaning to a successor before personal expiration (i.e. life). Philosophy is just a link in a chain, and it’s arguably near the beginning of said chain. It never really dies, but rather builds a foundation for future efforts.

Also, the Philosophy vs. Time plot does not consider future discovery (posing new questions), which would be represented by a discontinuous spike that effectively prevents philosophy from ever “dying.” You may argue that your plot includes all questions, but can we really consider the sum of all philosophical questions to be finite?

RC September 11, 2011 at 5:50 pm

Hi:

I know I’m a bit late to the game here, but I was discussing this article with a friend and I ended up typing out my thoughts on it. So, I know this will seem self-indulgent, but here is my critique. (I keep referring to LM using “he” and “him” instead of “you” because this was originally not written to be a comment here. But hey, I put in the 45 minutes of work, so there you go.)

First, he doesn’t really say what he means by philosophy. I would submit that philosophy is examining foundational questions not usually covered in specific sciences. So, by definition, certain questions seem beyond the scope of the sciences.

His mistakes seem to be making assumptions without arguing for them, or just being unclear about what he means. For example, he gives computer scientists a big sloppy kiss for making concepts precise enough to “teach to a computer.” Well, gee, what does it mean to teach a computer something? Does increased syntactic manipulation equal learning? I would argue you aren’t teaching the computer anything; a series of syntactic maneuvers, no matter how complex, does not equal cognition. Perhaps Luke thinks (he seems to strongly imply) that we can program a computer to do anything. This is a bold claim; he does not argue for it or defend it. He simply assumes it. (Think about computational representations of phenomenal states, intentional states, etc.)

He gets a lot of his history wrong: people haven’t stopped philosophizing about “the heavens”–people still argue about the cause of the universe (assuming it had one) and its significance. Many of these questions are philosophical in nature. Hell, even Hawking’s latest book, despite eschewing philosophy, ended up just being really bad philosophy. Furthermore, people haven’t stopped philosophizing about biology–there are people who have built their entire reputation on doing philosophy of biology. (And perhaps a minor point, but one of the issues of biology is that it is notoriously difficult to test biological theories against experience, particularly when one considers the experimental capabilities of other fields.) Finally, it seems that philosophy hasn’t shrunk in scope, but rather in depth (if that makes any sense). That is, pretty much every area of inquiry has somebody doing philosophy about it. To say that it has retreated is a misrepresentation.

When he discusses clarifying concepts, he says that perhaps computer scientists will do a better job of it. Maybe, but it seems like they are still doing philosophy. That’s probably the majority of what philosophy is! There is some concept x we are unclear on, so we think about it until we get clearer. If CS folks do this, good on them. But they are wearing philosopher hats to do it. Furthermore, once they have clarified a concept and tried to represent it using 0s and 1s, there is obviously a normative question we must ask: is what the computer does an instance of concept x? And these kinds of questions seem deeply philosophical as well. (Think of a Turing test. It may be an empirical matter to determine whether we can program a computer to pass it, but it is a philosophical question whether passing a Turing test counts as *real* AI. In Turing’s original paper, he says we should just replace the question of “Can a machine think?” with “Can it pass a Turing test?” But we need philosophical argument to determine whether that replacement is a good one.)

His claims about handing philosophy of mind over to cognitive scientists and AI researchers just seem bizarre. They assume that minds are nothing but brains (a philosophical question) and that AI is possible (a philosophical question).

His claims about ethics rest on similar assumptions. He claims we must figure out what “in nature” corresponds to ethics, or else treat ethical statements as empty. But this assumes a view about ethics that he doesn’t argue for. Furthermore, it seems like a philosophical question to ask what he even means by the claim that “moral claims” must “refer to” something in “the natural world.” Each of the quoted phrases is philosophically ripe for investigation. (Well, until the computer scientists figure it all out for us.) He just ignores the philosophical issues here.

I know this is probably starting to sound a bit repetitive, but his claims about epistemology also make fundamental assumptions. Epistemology makes evaluative claims (i.e., words like “know,” “justification,” and “reliable” are making claims about what is epistemically good). He assumes we can naturalize all of this away. I’m not exactly sure how one hands the entire discipline over to scientists, but it would involve (at the least) replacing these evaluative terms with naturalistic analogues that cognitive scientists could use. One could do this (and I submit that the result would be a mess), but one must argue for its possibility. He just assumes it.

Finally, he fails to consider that the value of the sciences (and what counts as a science) is a question we must turn to philosophy to answer. Physics cannot tell us why physics yields knowledge; only philosophy can do that. There’s a great motto that sums up the point.

“Philosophy: Try to say why it’s pointless and we’ve already got you.”

Best,
RC

Tim September 12, 2011 at 11:21 am

I was going to write a long post in response, but I see that the last commenter already pretty much nailed it: lots of poorly defined terms, poor history, and unargued-for assumptions.

One thing I will add is that the OP also gets the current state of philosophy wrong. Philosophy has not “retreated” anywhere. Today it’s just as involved in the specialized sciences, if not more so, as it has ever been. There are proportionally more philosophers today, producing more philosophy, than at probably any point in the history of humankind. And a lot of it is stellar stuff!

Nick September 12, 2011 at 7:17 pm

Garren nailed it pretty well when he inserted the quote from Russell. Naturalistically inclined philosophers largely agree that much of the current domain of philosophy can and should be ceded to the scientists. But Ethics is different. Luke’s program for naturalising Ethics seems to rest on a semantic analysis of ethical claims combined with a cognitive science account of the origin of value judgements (maybe underpinned by neuroscience, evolutionary psychology, etc.). That program is already well underway at the intersection of science and philosophy. But to imagine that this will somehow result in the emergence of a normative (as opposed to descriptive) ‘science’ of ethics is a confusion. Not an uncommon confusion; Sam Harris has made similar claims. No amount of science will ever be able to tell us what we should choose, as opposed to explaining and even predicting our actual choices. That’s Hume’s guillotine, and there is no escape from it.

ps. A good account of the intersection between science and ethics can be found in Richard Joyce’s ‘The Evolution of Morality’ (MIT Press, 2006)

Marc September 12, 2011 at 8:10 pm

Whenever I read stuff like this, I always think two things: first, I think it’s great, critical thinking, being enlightened to various ways of thinking about a particular subject. The other thing I always think of is this thought from Sartre (from his war diaries? I forget): “I have seen children die of hunger. In front of a dying child, Nausea has no weight.” Which thought should I ignore? Enlighten me, but don’t be a fuckin dick about it.

Kip September 13, 2011 at 6:10 am

No amount of science will ever be able to tell us what we should choose, as opposed to explaining and even predicting our actual choices.

If you define “should” to mean “that for which I have the most and strongest reasons for action to do” (or something like that), then science can take it from there. I would argue that defining “should” this way is the only way that really matters (because by definition, if it matters, then I have good reasons to do it). You can define “should” some other way that science then can’t do anything with, but neither can anybody else (except talk about some vague notion without really getting anywhere).

NChen September 13, 2011 at 8:44 pm

This is so idiotic it’s not worth a full response. To demolish just one (major) aspect of it, I offer this reply to the stupid assertion made in the blog: “On the other hand, artificial intelligence (AI) researchers and other computer scientists have to figure out how to teach these concepts to a computer, so they must be 100% precise.”

Many things in AI and computer science are not “100%” precise. Do you know what “100% precise” even means? That doesn’t even make any sense. Do you know what is not precise and cannot ever be made completely precise in computer science? How about effective computation, the notion on which all of theoretical computer science is built and which can never be made precise? That is why the Church-Turing thesis cannot be formally proven, only assumed true.

http://en.wikipedia.org/wiki/Church%E2%80%93Turing_thesis

Even mathematics can never be completely precise. The axioms on which mathematical universes are built are inherently ambiguous.

Imprecision is a fact of life. You can’t get away from it. You can’t get away from philosophy. It is foundational to everything.

NChen September 13, 2011 at 8:50 pm

And yes, philosophers have made many of the “imprecise” notions you talked about quite precise; to give just a few examples, “ought” and “should”:

http://plato.stanford.edu/entries/logic-deontic/

“vague and mysterious answers” = answers you do not have the ability to understand.

tawny veda September 13, 2011 at 8:57 pm

This is so idiotic it’s not worth a full response. To demolish just one (major) aspect of it, I offer this to the stupid assertion made in the blog

Follow the related posts and the threads at LessWrong pertaining to this topic.

You will find that you will be schooled. You should prepare by understanding Bayes and Yudkowsky.

NChen September 13, 2011 at 9:35 pm

“Follow the related posts and the threads at LessWrong pertaining to this topic.”

It’s often the case that those who are too daft to answer any of the devastating objections will defer the points made to some irrelevant third party. Schooled indeed.

NChen September 13, 2011 at 9:39 pm

Yes, I understand Bayes. It’s a shame that those who don’t will often robotically defer to others who don’t either but speak on it as if they do. It’s like Wittgenstein’s idiot who bought two copies of the same newspaper to verify each other.

NChen September 13, 2011 at 9:41 pm

BTW, “tawny veda” have you come up with a precise proof of “effective computation” yet? What is the “100% precise” definition of effective computation, BTW?

Zeb September 14, 2011 at 6:13 am

Kip

If you define “should” to mean “that for which I have the most and strongest reasons for action to do” (or something like that), then science can take it from there.

When in history has “should” ever been used that way? It seems like you are redefining a word in order to claim that the concept is open to science. Sure, the concept you label as “should” may be open to science, but not the concept that has been conventionally labeled with “should” by most English speakers. As far as I know philosophy has yet to even clarify what exactly that concept is. Which is to say, there is still work to do in philosophy.

Kip September 14, 2011 at 6:38 am

Zeb: I think that’s how I use the term, and I think when you boil down the inter-subjective/objective parts of how others use the term, then that’s basically what you get (or something similar to it). It’s a bit like trying to define what people mean by “health”, I suppose. It means different things to different people, but we can give it an inter-subjective/objective meaning that’s close enough to what people mean so that science can try to work at it.

Kip September 14, 2011 at 6:40 am

And regardless of whether most people use the term that way, we still have many and strong reasons to have science work on the referent of that definition, perhaps while philosophers are arguing over what the correct definition of “should” should be.

Zeb September 14, 2011 at 5:00 pm

Kip, I agree that we have good reason to do scientific investigation into finding out what actions we as a community and as individuals have many and strong reasons to undertake. But answer me this: do you find the question, “Should a person do that which he has the most and strongest reasons to do?” a nonsense question because of redundancy? Or would you give an argument as to why the answer is yes? I, and I believe most people, would consider it an open question.

Anyway I did not think desirism would even say that a person [morally] should do that which he has the most and strongest reasons to do. A person with overwhelmingly desire-thwarting (evil) desires morally should not do that which he has the most and strongest reasons to do. He should do that which a person with good desires and true beliefs would do. Maybe he can’t, probably he won’t, but morally he should. Is that a wrong understanding of desirism?

Kip September 14, 2011 at 7:20 pm

But answer me this: do you find the question, “Should a person do that which he has the most and strongest reasons to do?” a nonsense question because of redundancy?

I think it’s redundant. It’s only an open question because people aren’t sure what “should” means. If they knew what it meant, it would be an easy answer: yes or no. I think defining it in such a way that the answer is easily “yes” makes the most prudential sense.

I’m not even talking about desirism right now. I’m just talking about what is prudent/practical.
