Reading Yudkowsky, part 13

by Luke Muehlhauser on February 9, 2011 in Eliezer Yudkowsky, Resources, Reviews

AI researcher Eliezer Yudkowsky is something of an expert at human rationality, and at teaching it to others. His hundreds of posts at Overcoming Bias (now moved to Less Wrong) are a treasure trove for those who want to improve their own rationality. As such, I’m reading all of them, chronologically.

I suspect some of my readers want to improve their rationality, too. So I’m keeping a diary of my Yudkowsky reading. Feel free to follow along.

His 77th post is Science as Attire. An X-Men movie is not “science fiction” in the sense of telling a story with anything like science. It is, rather, “fiction with scientificky words.” Just as beliefs can be “attire” we wear to fit into a certain group, science can be worn as attire without much understanding or use of real science.

Probably an actual majority of the people who believe in evolution use the phrase “because of evolution” because they want to be part of the scientific in-crowd - belief as scientific attire, like wearing a lab coat.

Fake Causality explores fake explanations like phlogiston, which could not be used to make specific predictions about anticipated experience.

Semantic Stopsigns examines more fake explanations, starting with First Cause:

[My purpose is] to ask why anyone would think “God!” could resolve the paradox [of first cause].  Saying “God!” is a way of belonging to a tribe, which gives people a motive to say it as often as possible – some people even say it for questions like “Why did this hurricane strike New Orleans?”  Even so, you’d hope people would notice that on the particular puzzle of the First Cause, saying “God!” doesn’t help.  It doesn’t make the paradox seem any less paradoxical even if true. How could anyone not notice this?

Jonathan Wallace suggested that “God!” functions as a semantic stopsign – that it isn’t a propositional assertion, so much as a cognitive traffic signal: do not think past this point.  Saying “God!” doesn’t so much resolve the paradox, as put up a cognitive traffic signal to halt the obvious continuation of the question-and-answer chain.

Of course you’d never do that, being a good and proper atheist, right?  But “God!” isn’t the only semantic stopsign, just the obvious first example.

Secular semantic stopsigns might include: “Liberal democracy!” or “Corporations!” Of course, we can’t identify a semantic stopsign merely by the word. We have to look at its use. “What distinguishes a semantic stopsign is failure to consider the obvious next question.”

All this talk of fake explanations comes to a kind of head with Mysterious Answers to Mysterious Questions.

Consider the old theory of vitalism:

Calling “elan vital” an explanation, even a fake explanation like phlogiston, is probably giving it too much credit.  It functioned primarily as a curiosity-stopper.  You said “Why?” and the answer was “Elan vital!”

When you say “Elan vital!”, it feels like you know why your hand moves.  You have a little causal diagram in your head that says ["Elan vital!"] -> [hand moves].  But actually you know nothing you didn’t know before. You don’t know, say, whether your hand will generate heat or absorb heat, unless you have observed the fact already; if not, you won’t be able to predict it in advance.  Your curiosity feels sated, but it hasn’t been fed…

But the greater lesson lies in the vitalists’ reverence for the elan vital, their eagerness to pronounce it a mystery beyond all science. Meeting the great dragon Unknown, the vitalists did not draw their swords to do battle, but bowed their necks in submission. They took pride in their ignorance, made biology into a sacred mystery, and thereby became loath to relinquish their ignorance when evidence came knocking…

But ignorance exists in the map, not in the territory.  If I am ignorant about a phenomenon, that is a fact about my own state of mind, not a fact about the phenomenon itself. A phenomenon can seem mysterious to some particular person.  There are no phenomena which are mysterious of themselves. To worship a phenomenon because it seems so wonderfully mysterious, is to worship your own ignorance…

These are the signs of mysterious answers to mysterious questions:

  • First, the explanation acts as a curiosity-stopper rather than an anticipation-controller.
  • Second, the hypothesis has no moving parts – the model is not a specific complex mechanism, but a blankly solid substance or force. The mysterious substance or mysterious force may be said to be here or there, to cause this or that; but the reason why the mysterious force behaves thus is wrapped in a blank unity.
  • Third, those who proffer the explanation cherish their ignorance; they speak proudly of how the phenomenon defeats ordinary science or is unlike merely mundane phenomena.
  • Fourth, even after the answer is given, the phenomenon is still a mystery and possesses the same quality of wonderful inexplicability that it had at the start.

Nice. But what’s an example of a current mysterious answer to mysterious questions?

Yudkowsky nominates “emergence” in The Futility of Emergence.

[Wikipedia defines emergence as] “The way complex systems and patterns arise out of a multiplicity of relatively simple interactions”.  Taken literally, that description fits every phenomenon in our universe above the level of individual quarks, which is part of the problem.  Imagine pointing to a market crash and saying “It’s not a quark!”  Does that feel like an explanation?  No?  Then neither should saying “It’s an emergent phenomenon!”

…I have lost track of how many times I have heard people say, “Intelligence is an emergent phenomenon!” as if that explained intelligence. This usage fits all the checklist items for a mysterious answer to a mysterious question. What do you know, after you have said that intelligence is “emergent”?  You can make no new predictions.  You do not know anything about the behavior of real-world minds that you did not know before.  It feels like you believe a new fact, but you don’t anticipate any different outcomes. Your curiosity feels sated, but it has not been fed.  The hypothesis has no moving parts – there’s no detailed internal model to manipulate.  Those who proffer the hypothesis of “emergence” confess their ignorance of the internals, and take pride in it; they contrast the science of “emergence” to other sciences merely mundane.

I see his point, but Yudkowsky clearly does not spend much time speaking with dualists. (Oh, how I envy him!) To dualists – even to property dualists like David Chalmers – it’s not obvious that everything is an emergent phenomenon, supervening on the elements of fundamental physics. So when I tell a Christian that consciousness is an emergent phenomenon, that claim does control my anticipated experience. For example, it predicts that we will find bridge laws between what we experience as “consciousness” and physical brain states, which will themselves have bridge laws down to chemistry. My claim that consciousness is an emergent phenomenon also tells me what experiences I do not anticipate. I do not anticipate, for example, that consciousness will be found to operate according to laws that function wholly apart from the laws of physics, as David Chalmers has proposed.

Of course, “emergence” is still not much of an explanation. I don’t pretend to have explained consciousness when I say consciousness is an emergent phenomenon, and I don’t know anyone who does. Apparently Yudkowsky knows some such people, but I would like to see him quote them.

Moreover, an even more common use of the term “emergent” is this: To call something “emergent” is to say that it does not reduce to more fundamental properties. People who use this idea of “emergent” to say that intelligence is an emergent phenomenon are predicting something. They are predicting that we will not find reductionistic bridge laws for all causes between (for example) intelligence and cell biology. Rather, they are predicting we will find a new kind of fundamental cause at (for example) the level of intelligence. So that use of the term “emergent,” while not intended as an explanation, does provide anticipated experiences we could (in principle) test. (If you’re confused by this claim, you’re not alone.)

So there you go. I have lots of problems with Yudkowsky’s post on emergence.

{ 15 comments… read them below or add one }

Paul Crowley February 9, 2011 at 4:18 am

I think ESR’s comments on some EY:OB/LW posts are relevant to your remarks on emergence here.

Luke Muehlhauser February 9, 2011 at 5:13 am

Paul Crowley,

Thanks for the link!

Eneasz February 9, 2011 at 9:48 am

There are people who simply use “emergence” as a mental “goddidit”; I used to be one of them. It’s easy because emergence is complicated. The most over-used example is gliders in Conway’s Game of Life. A fascinating emergent phenomenon. They aren’t actual “things”, but can be treated as such with explicit rules that govern their behavior. And you can reduce them to an explanation of how the individual cells interact using simple rules to propagate a pattern. This is real emergence. But it’s incredibly hard to mentally hold both the base-level reality and the emergent thing at the same time, so you accept “it’s an emergent phenomenon” and just work on the higher-level things. (By “you” I mean “me”)
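The cell-level reduction described here can be made fully explicit. Below is a minimal sketch (the grid representation and function names are my own, purely illustrative): the only rules in the program are Conway’s birth/survival rules for individual cells, yet the “glider” dutifully reappears one cell down and to the right every four generations.

```python
# Minimal Conway's Game of Life on an unbounded grid, represented as a set
# of live (x, y) cells. Nothing in the rules mentions "gliders" at all.
from collections import Counter

def step(live):
    """One generation: a live cell survives with 2-3 live neighbors;
    a dead cell is born with exactly 3."""
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# The standard 5-cell glider:
#   .X.
#   ..X
#   XXX
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}

current = glider
for _ in range(4):
    current = step(current)

# The "emergent" glider is nothing over and above the cell rules:
# after 4 generations the same pattern reappears shifted by (1, 1).
assert current == {(x + 1, y + 1) for (x, y) in glider}
```

The higher-level description (“a glider moves diagonally at one cell per four ticks”) is genuinely useful, but every bit of it is recoverable from the cell rules alone.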

Later someone presents you with a breeder that leaves glider-guns in its wake, and you simply accept their explanation of “it’s emergence” without digging deeper into it. And from there it’s really easy to jump to “mind is an emergent phenomenon of interacting neurons, cuz neurons are a lot like Conway’s Game of Life cells” without ACTUALLY knowing if this is true. And from there thinking “well, if we can just get a large enough Game of Life going, we’d be able to create another intelligent being via the magic of emergence!”

And yes, this happens. As one example: Robert J. Sawyer’s TERRIBLE book “WWW:Wake” has as its premise that internet data packets simulate a giant Game of Life, and that the internet “wakes up” once it becomes big enough because those packets created a Game of Life big enough to spontaneously create a human-level intelligence. Via “emergence”. Seriously. (That’s not the only bad part, the entire book is terrible and every character feels like an aging Canadian man regardless of who they’re supposed to be. Even the newly-born AI. It’s really a terrible book, I regret those hours.) But the guy is well known, the book was published, and he’s won a Hugo in the past, so this isn’t some random crank, it’s a representation of how magical thinking can and does infiltrate into conventional thought. Through masking words like “emergence”.

James gradstu(pid) February 9, 2011 at 11:08 am

Suggestion: Step one, read works in history and philosophy of science. Places to start are:

  • Kyle Stanford, Exceeding Our Grasp: Science, History, and the Problem of Unconceived Alternatives, online here: http://www.lps.uci.edu/stanford/publications/StanfordExceedingGrasp.pdf
  • Larry Laudan, Progress and Its Problems
  • Larry Sklar, Space, Time, and Spacetime

You will find that issues of the nature of explanation in the sciences (and whether past posits/theories like phlogiston or vitalism had explanatory value) are in reality much more complicated than they are made out to be on blogs such as Yudkowsky’s.
Step two, write some blog posts, but please, only after step one.

Chris Hallquist February 9, 2011 at 11:50 am

I suspect a majority of the people who use the word “emergence” don’t know what they’re trying to say by it. They just know “reductionism” is a “bad word” (like “atheism”), and they want some alternative to “reductionism.” If they’re also afraid of sounding woo-woo, “emergence” does the trick because it sounds scientific.

Luke Muehlhauser February 9, 2011 at 12:32 pm

James gradstu(pid),

I have no doubt of that!

Luke Muehlhauser February 9, 2011 at 12:33 pm

Chris Hallquist,

That could be!

Tarun February 9, 2011 at 6:17 pm

I think the best way to make sense of claims of emergence, at least in the physics literature, is that the macroscopic behavior of a system is insensitive to microscopic details. This is usually due to microscopic degrees of freedom dropping out when a system is renormalized.

There is an explicit prediction being made here: systems with significantly different microscopic compositions will exhibit similar macroscopic behavior. For instance, the water/vapor and paramagnetism/ferromagnetism phase transitions are similar in many interesting ways (e.g. their critical exponents are the same). Giving a reductionist explanation of the water/vapor transition is fine as far as it goes, but it leaves out an important fact: the similarity of this transition with the paramagnetism/ferromagnetism transition. This fact can only be explained by explicitly ignoring (renormalizing away) the microscopic differences between the systems. The similarity between the systems is an emergent phenomenon.

I’m guessing you could probably say something similar about cognition in a human brain and cognition in a computer. Microscopically extremely different, but exhibiting similar macroscopic behavior. In this sense, I think we can say intelligence is emergent.

melior February 9, 2011 at 6:25 pm

Like many folks, perhaps, my first exposure to the emergence/reduction discussion was Hofstadter’s Goedel, Escher, Bach: an Eternal Golden Braid. I remember being amazed and fascinated long ago by the examples in the book, but more than a little disappointed that the only real takeaway seemed to be, “Isn’t that wonderful? It’s an inscrutable mystery!”

What puzzles me about the insistence by some that intelligence *must* be emergent (in the final sense you enumerate, i.e. “irreducible”) is that the reasoning used to draw this conclusion always seems to vaporize under close scrutiny. It’s one thing to say we haven’t been able to establish any verifiable levels of reduction yet, fair enough; it’s obviously a hard problem. It’s also fair to say we may one day learn enough to show that it’s not really a meaningful question (perhaps in the same way that the laws of thermodynamics asymptotically lose their explanatory value when a sample size shrinks so small that the system can no longer usefully be characterized as a statistical aggregate). But on what basis exactly can anyone conclude intelligence *can’t* be reduced yet, even in principle? I still haven’t been able to locate a cogent explanation for this claim, though I’ve read several attempts.

The whole thing smells very much to me like the cdesign proponentsists’ debunked theory of the “irreducible complexity” of bacterial flagella, which supposedly could not possibly be reducible to some unknown-at-the-time subcomponents that were themselves each individually functionally evolutionarily favored, until it was later shown that in fact they could and were, and hey look, here they are right here. In that case, the obvious motivation for the theory was to attempt to pre-justify the desired answer (goddidit). Perhaps there’s something similar going on here.

Tarun February 9, 2011 at 7:11 pm

Melior,

Here’s an attempt at an argument for why we should expect that intelligence is emergent, basically along the same lines as my comment above. Here’s a plausible assumption: intelligence is multiply realizable by quite different physical systems. Intelligence in humans is implemented by a carbon-based neural network. Intelligence might also be implemented by a silicon-based serial processor. These systems are significantly different at the “microscopic” (neural/transistorial) level.

This multiple realizability itself should lead us to think that no reductive explanation can be entirely satisfactory. I might be able to give a reductive explanation of some particular implementation of intelligence, but this fails to capture an important feature of intelligence, viz. its platform-independence. Plausibly, any explanation of intelligence that encompasses its robustness under different microscopic implementations will not be available at the microscopic level itself. It’s going to involve some story about how the microscopic differences between some class of systems become irrelevant at a certain scale. This is the kind of story that renormalization group theory tells in statistical mechanics.

Steven R. February 9, 2011 at 11:15 pm

Luke, if you can go to any YouTube video from popular “Atheist YouTubers” you WILL see people using emergent properties as explanations in and of themselves. Of course, it’s hard to withstand the stupidity of such comment boxes so I don’t blame you for never exposing yourself to those…”arguments.”

OT though, I think you hit the nail on the head. What Yudkowsky should oppose is the concept of emergence as an explanation and stop-gap of further understanding, but not the use of the word itself to describe how complex systems develop.

Eneasz, thanks for that post and example. I was having some trouble grappling with just what was faulty with emergent properties, and then that book you cited (haha, sounds hilarious!) just illustrated it perfectly.

Polymeron February 9, 2011 at 11:46 pm

One other anticipation that emergent intelligence (and consciousness) controls is that most behaviors should be replicable by messing with the parts. For instance, we now know that feelings of religious elation, and experiences of alien abduction, can be triggered by electrically stimulating specific areas of the brain. This makes me wonder what else can be achieved. Also, even if I expect that our intelligence really is an emergent behavior of neuron communication, I don’t think it’s necessarily the only form of intelligence (and indeed my own theories of AI do not go in the direction of neural networks, which I consider second-best).
So clearly if people are using emergence as a curiosity stopper, they’re not using it in the same way I do.

Zeb February 10, 2011 at 6:25 am

To know what “emergent” means, I’d have to know what “nonemergent” means. It sounds like what Luke would mean by “nonemergent” is exactly what is more commonly meant by “emergent.” It seems that Luke uses “emergent” as a synonym for “reducible” in contrast to dualism, while most people use “emergent” in contrast to “reducible.” Although I am about as far from Yudkowsky’s general views as can be, I have always felt that “emergence,” at least in a/theism debates, is often used as a curiosity stopper. Some people want to have it both ways, where “emergence” protects them from whatever is unpalatable or unbelievable about both reductionism and dualism, and it gets them off the hook of having to further probe the mystery in question.

So I don’t get what “emergence” even can mean except “reduction we don’t want to face,” and I’d like to know why Luke would defend a term that he uses oppositely of most people and that seems to be perfectly covered by “reduction.”

Polymeron February 10, 2011 at 9:25 am

The more I read about it, the more I get the sense that “emergence” is just a poorly-defined term. It would benefit from explicit framing in discussions, so that we all understand what the referent of the word is.

Fleisch February 12, 2011 at 9:59 am

@ Polymeron: That was precisely what the EY article linked by Paul Crowley said: If you allow people to use the word “emergence”, they will just get confused. And it has happened again, just in this thread of comments, even though the warning was right there. I think the best case against emergence is that it means too much, not that it means nothing. God is a similar word, but there one could argue (as Dennett does) that with god, it’s intentional.
Either way, I hope for the article “rationalist taboo” to come soon.
